Deployments - Inferences
Until now, the different features related to a Deployment have been described, so you should now have a fully and properly configured Deployment. It is now high time to log the results of inferences in it.
The ModelVersion that is related to the current Deployment can be deployed either:
1. If the ModelVersion is deployed on the Picsellia Serving Engine

If the ModelVersion is deployed on the Picsellia Serving Engine, you need to use the Picsellia Python SDK to make this ModelVersion run inferences on your images.
Here is the recipe to follow to run an inference with a ModelVersion deployed on the Picsellia Serving Engine:
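The recipe above can be sketched with the Picsellia Python SDK. This is a minimal sketch, assuming a `Deployment.predict()` method that takes an image file path; the exact method names and signatures may differ between SDK versions, so check the Picsellia Python SDK reference before using it.

```python
def run_serving_engine_inference(api_token, organization_name,
                                 deployment_name, image_path):
    """Send an image to a ModelVersion deployed on the Picsellia Serving Engine.

    All parameter values (token, organization, deployment name, image path)
    are placeholders you must supply yourself.
    """
    # Imported inside the function so the sketch stays self-contained.
    from picsellia import Client

    # Connect to Picsellia and retrieve the target Deployment by name.
    client = Client(api_token=api_token, organization_name=organization_name)
    deployment = client.get_deployment(name=deployment_name)

    # Ask the Serving Engine to run the inference; the PredictedAsset and
    # associated Prediction are then logged automatically on the Deployment.
    return deployment.predict(file_path=image_path)
```

Because the inference runs on the Serving Engine, there is nothing more to do on your side once this call returns.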
Having the ModelVersion deployed on the Picsellia Serving Engine is quite convenient: once the inference is done, the PredictedAsset and associated Prediction are automatically and properly logged by Picsellia on the associated Deployment.
You can then visualize the PredictedAsset and associated Prediction using the Predictions overview and perform Reviews using, for instance, the Prediction Review tool.
2. If the ModelVersion is deployed on your own serving infrastructure

If the ModelVersion is deployed on your own serving infrastructure, you are in charge of making your ModelVersion infer properly. In addition, you will have to adapt the script that performs the inference on your infrastructure so that it logs the result of the inference (i.e. the PredictedAsset and the associated Prediction).
Here is the procedure to apply to log the result of your inference on a dedicated Picsellia Deployment.
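The logging step can be sketched as follows. This is a minimal sketch, assuming a `Deployment.monitor()` method on the Picsellia Python SDK and an object-detection-style prediction payload; both are assumptions, so verify the exact `monitor()` signature and the prediction format for your task type in the SDK reference.

```python
def log_external_inference(api_token, organization_name, deployment_name,
                           image_path, latency, height, width, prediction):
    """Log an inference run on your own infrastructure to a Picsellia Deployment.

    `prediction` is your model's raw output reshaped into the format expected
    by the monitoring service (the keys shown below are an object-detection
    example and may need adapting to your task type).
    """
    # Imported inside the function so the sketch stays self-contained.
    from picsellia import Client

    # Connect to Picsellia and retrieve the target Deployment by name.
    client = Client(api_token=api_token, organization_name=organization_name)
    deployment = client.get_deployment(name=deployment_name)

    # Push the image and its prediction so Picsellia creates the
    # PredictedAsset and the associated Prediction on the Deployment.
    deployment.monitor(
        image_path=image_path,
        latency=latency,        # inference time on your infrastructure
        height=height,          # image height in pixels
        width=width,            # image width in pixels
        prediction=prediction,  # e.g. {"detection_classes": [...],
                                #       "detection_boxes": [...],
                                #       "detection_scores": [...]}
    )
```

Call this helper at the end of your own inference script, once per processed image.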
Once logged, you can then visualize the PredictedAsset and associated Prediction using the Predictions overview and perform Reviews using, for instance, the Prediction Review tool.