3. Evaluate a Detection Model

1. Context

A. Pre-requisite

  • A Project on Picsellia allowing you to host an Experiment and a DatasetVersion.
  • A DatasetVersion configured in OBJECT_DETECTION and annotated.
  • An Experiment with a DatasetVersion attached to it - you can add the test alias to this DatasetVersion.
  • An Object Detection local model or an Object Detection ModelVersion.

πŸ“˜

If you want to integrate your local custom object detection model into Picsellia, you can check out this tutorial πŸ‘‰ Migrate your Models to Picsellia

B. Variables

Let's say that:

  • The Project is called Documentation Project
  • The Experiment is called my_experiment
  • The DatasetVersion attached is called test

C. Setup

You need to have a post-processing function that returns the predicted Label, its confidence score, and the boxes formatted as denormalized x, y, w, h.

def predict(input: Image, model):
    # model can be a TensorFlow or a PyTorch model
    preprocessed_input = pre_process(input)
    prediction = model(preprocessed_input)

    # post_process() must return the class names, confidence scores
    # and denormalized (x, y, w, h) boxes
    class_names, confidence_scores, denormalized_boxes = post_process(prediction)
    return class_names, confidence_scores, denormalized_boxes

You should also create a script that initializes the Picsellia Client connection and fetches your Project, Experiment, and DatasetVersion.

from picsellia import Client 

client = Client(api_token=api_token, organization_name=organization_name, host='https://app.picsellia.com')

project = client.get_project(name='Documentation Project')
experiment = project.get_experiment(name='my_experiment')

testing_dataset = experiment.get_dataset('test')

We also need to create a dictionary matching the class names with the Label objects from Picsellia, in order to attach the correct Label to each prediction. Something like this:

{
  "cat": Label (Picsellia object),
  "dog": Label (Picsellia object)
}

picsellia_labels = testing_dataset.list_labels()

label_matching = {label.name: label for label in picsellia_labels}

2. Implementing the Model Testing

Let's take a look at the Experiment add_evaluation() method:

add_evaluation(
    asset: Asset,
    add_type: Union[str, AddEvaluationType] = AddEvaluationType.REPLACE,
    rectangles: Optional[List[Tuple[int, int, int, int, Label, float]]] = None,
    polygons: Optional[List[Tuple[List[List[int]], Label, float]]] = None,
    classifications: Optional[List[Tuple[Label, float]]] = None
)

Let's dive into 3 of the arguments:

  • asset: the Asset to evaluate (meaning that you can only have one evaluation per Asset)
  • add_type: an enum with two possible values (KEEP/REPLACE); the default is REPLACE. KEEP will keep the existing evaluation if one already exists.
  • rectangles: a list of tuples, each tuple being (x, y, w, h, Label, confidence_score) with denormalized coordinates, as shown in the sketch below.
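
To make the expected format concrete, here is a minimal sketch that evaluates a single Asset with one hardcoded rectangle. The coordinates, the 0.85 score and the "cat" class are illustrative values only, and it assumes the experiment, testing_dataset and label_matching objects from the setup above, as well as AddEvaluationType being importable from picsellia.types.enums:

from picsellia.types.enums import AddEvaluationType

# Pick any Asset from the test DatasetVersion (illustrative choice)
asset = testing_dataset.list_assets()[0]

# One rectangle: denormalized top-left x, y, width, height, the matching Label and a confidence score
experiment.add_evaluation(
    asset,
    add_type=AddEvaluationType.REPLACE,  # default behavior; use KEEP to preserve an existing evaluation
    rectangles=[(10, 20, 100, 150, label_matching["cat"], 0.85)],
)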

Let's wrap everything together with a YOLOv8 detector from Ultralytics. Here is the basic inference snippet from HuggingFace:

from ultralytics import YOLO

# Load a pretrained YOLOv8n model
model = YOLO('yolov8n.pt')

# Run inference on an image
results = model('bus.jpg')  # results list

# View results
for r in results:
    print(r.boxes)  # print the Boxes object containing the detection bounding boxes

Let's now adapt this snippet to integrate it with Picsellia:

import numpy as np
from picsellia import Client
from picsellia.sdk.asset import Asset
from picsellia.types.enums import InferenceType
from ultralytics import YOLO

client = Client(api_token="", organization_name="")
project = client.get_project(name='Documentation Project')
experiment = project.get_experiment(name='my_experiment')
testing_dataset = experiment.get_dataset('test')
picsellia_labels = testing_dataset.list_labels()
label_matching = {label.name: label for label in picsellia_labels}


model = YOLO('yolov8n.pt')

def postprocess(results):
    r = results[0].cpu()
    confs = r.boxes.conf.numpy().astype(float)
    # Ultralytics returns centered boxes (center_x, center_y, w, h):
    # shift them to top-left (x, y, w, h) before sending them to Picsellia
    boxes = r.boxes.xywh.numpy()
    boxes[:, 0] -= boxes[:, 2] / 2
    boxes[:, 1] -= boxes[:, 3] / 2
    boxes = boxes.astype(int)
    classes_index = r.boxes.cls.numpy().astype(int)
    return classes_index, boxes, confs
      
for asset in testing_dataset.list_assets():
    asset.download()  # fetch the image locally so that YOLO can read it
    results = model(asset.filename)
    classes_idx, boxes, confs = postprocess(results)
    evaluated_rectangles = []
    for idx, box, conf in zip(classes_idx, boxes, confs):
        cls_name = model.names[idx]
        x, y, w, h = box
        evaluated_rectangles.append((int(x), int(y), int(w), int(h), label_matching[cls_name], float(conf)))
    experiment.add_evaluation(asset, rectangles=evaluated_rectangles)
    
experiment.compute_evaluations_metrics(inference_type=InferenceType.OBJECT_DETECTION)
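
Note that compute_evaluations_metrics() is called once, outside of the loop, after every Asset of the test DatasetVersion has received its evaluation: it launches the metrics computation over the complete set of evaluations of the Experiment.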