3. Object Detection - Monitor Model Predictions
If you have created a Deployment on the Picsellia platform, and that Deployment is configured to be plugged into our monitoring service, you can monitor Predictions by requesting our monitoring service.
You can use the Python SDK to do this, or you can send a request directly to the monitoring service.
1. Monitor an Object Detection Prediction with Python SDK
A. Simple Example
This snippet will add a Prediction to your Picsellia Deployment: this prediction took 0.257 seconds to compute, was made on the image at "/path/to/image.png" of size (w600*h800), and consists of 3 rectangles, the first being an object of class 1 with a score of 0.9, located at [top=128, left=200, bottom=159, right=240].
from picsellia import Client
from picsellia.types.schemas_prediction import DetectionPredictionFormat
api_token = "<YOUR_API_TOKEN>"
organization_name = "<ORGANIZATION_NAME>"
deployment_name = "<DEPLOYMENT_NAME>"
# You can also use your organization_id; please be sure it is a UUID if you are on a version < 6.6
client = Client(api_token=api_token, organization_name=organization_name)
deployment = client.get_deployment(deployment_name)
# Boxes are [y1, x1, y2, x2] or [top, left, bottom, right]
prediction = DetectionPredictionFormat(
    detection_classes=[1, 2, 1],
    detection_boxes=[[128, 200, 159, 240], [50, 90, 190, 100], [24, 50, 145, 76]],
    detection_scores=[0.9, 0.7, 0.4]
)
# Height and width are needed because monitoring service will not open the image
# Latency is a float representing seconds
data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction
)
print(data)
Predictions are always stored as Data in the Datalake
Please note that once a Prediction is logged in a Deployment, it will be stored as a Data in your Datalake and linked to the related Prediction. You can retrieve them easily by filtering on the source in your Datalake. Indeed, by default, Data in the Datalake coming from a Deployment has its source set to serving, unless you defined a customized source using the monitor() function.
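As an illustration, retrieving this Data could look like the sketch below. get_datalake() is part of the SDK, but the filtering argument of list_data() is an assumption here and varies across SDK versions, so check it against your SDK reference.
datalake = client.get_datalake()

# Assumption: list_data() accepts a query filtering Data on its source
# metadata; adjust the syntax to your SDK version
serving_data = datalake.list_data(q='source = "serving"')
print(serving_data)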
B. In-Depth Example: Shadow Predictions
If you want to log a shadow prediction at the same time as the champion prediction, you can pass the parameters shadow_latency and shadow_raw_predictions, as shown in the sketch below.
Like latency and prediction, these parameters are of type float and PredictionFormat, respectively.
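For instance, a minimal sketch reusing the deployment and prediction objects from the simple example could look like this; the shadow model's boxes, scores, and latency are made-up illustration values.
# Hypothetical outputs of a shadow (challenger) model for the same image
shadow_prediction = DetectionPredictionFormat(
    detection_classes=[1, 2],
    detection_boxes=[[130, 198, 161, 242], [52, 88, 188, 102]],
    detection_scores=[0.85, 0.6]
)

data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction,
    shadow_latency=0.312,  # seconds spent by the shadow model
    shadow_raw_predictions=shadow_prediction
)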
C. Optional Parameters
Below is the signature of the monitor() function. We have already seen the mandatory parameters image_path, latency, height, width, and prediction; let's dig into the optional ones.
class Deployment:
    ...
    def monitor(
        self,
        image_path: Union[str, Path],
        latency: float,
        height: int,
        width: int,
        prediction: PredictionFormat,
        source: Optional[str] = None,
        tags: Optional[List[str]] = None,
        timestamp: Optional[float] = None,
        model_version: Optional[ModelVersion] = None,
        shadow_model_version: Optional[ModelVersion] = None,
        shadow_latency: Optional[float] = None,
        shadow_raw_predictions: Optional[PredictionFormat] = None,
    ) -> dict:
        pass
- source: the value of the source Metadata of the Data created in the Datalake and related to the Prediction.
- tags: the DataTag that will be attached to the Data created in the Datalake and related to the Prediction.
- timestamp: a float giving the timestamp of this prediction; if not given, our monitoring service will set one at server time (UTC+0).
- model_version: deprecated in 6.7.0. It was used because we could send the model_id to the monitoring service; now each Prediction uses the ModelVersion set in the Deployment. If you are on a version prior to 6.7.0 and want to speed up your workflow, consider giving model_version, as it will be retrieved anyway.
- shadow_model_version: deprecated in 6.7.0, same as model_version.
The first three optional parameters are combined in the sketch after this list.
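Here is a hedged sketch combining source, tags, and timestamp, reusing the deployment and prediction objects from the simple example; the source name and tag are made-up illustration values.
import time

data = deployment.monitor(
    image_path="/path/to/image.png",
    latency=0.257,
    height=800,
    width=600,
    prediction=prediction,
    source="edge-camera-3",   # hypothetical custom source instead of the default "serving"
    tags=["nightly-batch"],   # hypothetical DataTag attached to the created Data
    timestamp=time.time()     # client-side timestamp in seconds
)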