3. Object Detection - Monitor Model Predictions

If you have created a Deployment on the Picsellia platform, and that Deployment is configured to be plugged into our monitoring service, you can monitor Predictions by requesting our monitoring service.

You can use the Python SDK to do this, or you can send a request to the monitoring service directly.

1. Monitor an Object Detection Prediction with Python SDK

A. Simple Example

This snippet will add a Prediction to your Picsellia Deployment: the prediction took 0.257 seconds to compute, was made on the image at path "/path/to/image.png" of size (w600*h800), and consists of 3 rectangles, the first one being an object of class 1 with a score of 0.9, located at [top=128, left=200, bottom=159, right=240].

from picsellia import Client
from picsellia.types.schemas_prediction import DetectionPredictionFormat

api_token = "<YOUR_API_TOKEN>"
organization_name = "<ORGANIZATION_NAME>"
deployment_name = "<DEPLOYMENT_NAME>"

# You can also use your organization_id, please be sure it's a UUID if you're in a version < 6.6
client = Client(api_token=api_token, organization_name=organization_name)

deployment = client.get_deployment(deployment_name)

# Boxes are [y1, x1, y2, x2] or [top, left, bottom, right]
prediction = DetectionPredictionFormat(
    detection_classes=[1, 2, 1],
    detection_boxes=[[128, 200, 159, 240], [190, 100, 50, 90], [145, 50, 24, 76]],
    detection_scores=[0.9, 0.7, 0.4]
)

# Height and width are needed because monitoring service will not open the image 
# Latency is a float representing seconds
data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction
)

print(data)

📘

Predictions are always stored as Data in the Datalake

Please note that once a Prediction is logged in a Deployment, it will be stored as a Data in your Datalake and linked to the related Prediction.

You can retrieve them easily by filtering on the source in your Datalake. Indeed, by default, Data in the Datalake coming from a Deployment has its source set to serving, unless you defined a customized source using the monitor() function.
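As a minimal sketch, retrieval could look like the following. get_datalake() and list_data() come from the SDK, but the exact filtering capabilities depend on your SDK version, so treat the source attribute read below as an assumption to check against your version's reference.

from picsellia import Client

client = Client(api_token="<YOUR_API_TOKEN>", organization_name="<ORGANIZATION_NAME>")
datalake = client.get_datalake()

# Assumption: Data objects expose the source metadata set by monitor()
# (default "serving"); check your SDK reference for native filter arguments.
all_data = datalake.list_data(limit=100)
serving_data = [data for data in all_data if getattr(data, "source", None) == "serving"]
print(f"{len(serving_data)} Data in this batch came from a Deployment")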

B. In-Depth Example: Shadow predictions

If you want to send a shadow prediction at the same time as the champion prediction, you can give the parameters shadow_latency and shadow_raw_predictions.

Like the latency and prediction parameters, these are of type float and PredictionFormat respectively.
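Here is a minimal sketch of a combined call, reusing the deployment and prediction objects from the simple example above; the shadow model's boxes, scores, and latency are illustrative values.

# Shadow model output, in the same format as the champion prediction
shadow_prediction = DetectionPredictionFormat(
    detection_classes=[1, 2],
    detection_boxes=[[130, 198, 161, 242], [188, 102, 52, 92]],
    detection_scores=[0.85, 0.65]
)

data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction,
    shadow_latency=0.312,                      # float, in seconds, like latency
    shadow_raw_predictions=shadow_prediction,  # PredictionFormat, like prediction
)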

C. Optional parameters

Below is the signature of the monitor() function. We already saw the mandatory parameters image_path, latency, height, width, and prediction; let's dig into the optional ones.

class Deployment:
    ...

    def monitor(
        self,
        image_path: Union[str, Path],
        latency: float,
        height: int,
        width: int,
        prediction: PredictionFormat,
        source: Optional[str] = None,
        tags: Optional[List[str]] = None,
        timestamp: Optional[float] = None,
        model_version: Optional[ModelVersion] = None,
        shadow_model_version: Optional[ModelVersion] = None,
        shadow_latency: Optional[float] = None,
        shadow_raw_predictions: Optional[PredictionFormat] = None,
    ) -> dict:
        pass
  • source: value of the source Metadata of the Data created in the Datalake and related to the Prediction (a combined example follows this list).
  • tags: the DataTags that will be attached to the Data created in the Datalake and related to the Prediction.
  • timestamp: a float giving the timestamp of this prediction; if not given, our monitoring service will set one at server time (UTC+0).
  • model_version: deprecated in 6.7.0. It was used because we could send the model_id to the monitoring service; now each Prediction uses the ModelVersion set in the Deployment. If you are in a version prior to 6.7.0 and want to speed up your workflow, consider giving model_version, as it will be retrieved anyway.
  • shadow_model_version: deprecated in 6.7.0, same as model_version.
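Here is a minimal sketch combining source, tags, and timestamp, reusing the objects from the simple example above; the source and tag values are illustrative.

import time

data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction,
    source="drone-fleet",              # custom source instead of the default "serving"
    tags=["nightly-run", "camera-a"],  # DataTags attached to the created Data
    timestamp=time.time(),             # client-side timestamp; otherwise server time (UTC+0)
)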