2. Classification - Monitor Model Predictions

If you have created a Deployment on the Picsellia platform, and that Deployment is configured to be plugged into our monitoring service, you can monitor Predictions by sending requests to that service.

You can do this with the Python SDK, or you can send a request directly to the monitoring service.

1. Monitor a Classification Prediction with our Python SDK

A. Simple Example

This snippet will add a Prediction to your Picsellia Deployment: this Prediction took 0.257 seconds to compute, was made on the image located at "/path/to/image.png" of size (w600*h800), and predicted class 2 with a confidence score of 0.8.

from picsellia import Client
from picsellia.types.schemas_prediction import ClassificationPredictionFormat

api_token = "<YOUR_API_TOKEN>"
organization_name = "<ORGANIZATION_NAME>"
deployment_name = "<DEPLOYMENT_NAME>"

# You can also use your organization_id; please make sure it's a UUID if you're on a version < 6.6
client = Client(api_token=api_token, organization_name=organization_name)

deployment = client.get_deployment(deployment_name)


# You can send multiple classes/scores but it often makes more sense to send only one
prediction = ClassificationPredictionFormat(
    detection_classes=[2],
    detection_scores=[0.8],
)

# Height and width are needed because the monitoring service will not open the image
# Latency is a float representing seconds
data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction,
)

print(data)

πŸ“˜

Predictions are always stored as Data in the Datalake

Please note that once a Prediction is logged in a Deployment, it will be stored as a Data in your Datalake and linked to the related Prediction.

You can retrieve them easily by filtering on the source in your Datalake. By default, Data in the Datalake coming from a Deployment has its source set to serving, unless you defined a customized source using the monitor() function.
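
As an illustration, here is a minimal sketch of retrieving that Data with the SDK. get_datalake() is part of the client; the list_data() call and the source attribute used for filtering are assumptions here, so check the SDK reference for your version.

from picsellia import Client

client = Client(
    api_token="<YOUR_API_TOKEN>",
    organization_name="<ORGANIZATION_NAME>",
)

# Retrieve your organization's Datalake
datalake = client.get_datalake()

# Assumption: list_data() returns the Data of the Datalake and each Data
# exposes its source; adapt the filtering to your SDK version
serving_data = [
    data for data in datalake.list_data() if data.source == "serving"
]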

B. In-Depth Example: Shadow predictions

If you want to log a shadow prediction at the same time as the champion prediction, you can pass the parameters shadow_latency and shadow_raw_predictions.

Like latency and prediction, these parameters are of type float and PredictionFormat respectively, as shown in the sketch below.
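
For instance, here is a minimal sketch of a call that monitors a shadow prediction alongside the champion prediction. It reuses the deployment and prediction objects from the simple example above; the shadow class, score and latency values are illustrative.

shadow_prediction = ClassificationPredictionFormat(
    detection_classes=[1],
    detection_scores=[0.6],
)

data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction,
    # Shadow model results, same types as latency and prediction
    shadow_latency=0.312,
    shadow_raw_predictions=shadow_prediction,
)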

C. Optional parameters

Below is the signature of the monitor() function. We already saw the mandatory parameters image_path, latency, height, width and prediction; let's dig into the optional ones, then look at a usage sketch after the list.

class Deployment:
    ...
    def monitor(
        self,
        image_path: Union[str, Path],
        latency: float,
        height: int,
        width: int,
        prediction: PredictionFormat,
        source: Optional[str] = None,
        tags: Optional[List[str]] = None,
        timestamp: Optional[float] = None,
        model_version: Optional[ModelVersion] = None,
        shadow_model_version: Optional[ModelVersion] = None,
        shadow_latency: Optional[float] = None,
        shadow_raw_predictions: Optional[PredictionFormat] = None,
    ) -> dict:
        pass
  • source: the value of the source metadata of the Data created in the Datalake and related to the Prediction.
  • tags: the DataTag list that will be attached to the Data created in the Datalake and related to the Prediction.
  • timestamp: a float giving the timestamp of this prediction; if not given, our monitoring service will set one at server time (UTC+0).
  • model_version: deprecated in 6.7.0. It was used so that we could send the model_id to the monitoring service; now each Prediction uses the ModelVersion set in the Deployment. If you want to speed up your workflow and you are on a version prior to 6.7.0, consider giving model_version, as it will be retrieved anyway.
  • shadow_model_version: deprecated in 6.7.0, same as model_version.
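
As announced above, here is a minimal sketch of a monitor() call using the optional source, tags and timestamp parameters; it reuses the deployment and prediction objects from the simple example, and the values are illustrative.

import time

data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction,
    source="my-edge-device",   # overrides the default "serving" source
    tags=["nightly-batch"],    # DataTag attached to the created Data
    timestamp=time.time(),     # omit to let the monitoring service set server time (UTC+0)
)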