Deployment

Properties


Methods

add_tags

add_tags(
   tags: Union[Tag, list[Tag]]
)

Description

Add some tags to an object.
It can be used on Data/MultiData/Asset/MultiAsset/DatasetVersion/Dataset/Model/ModelVersion.

You can give a single Tag or a list of Tags.

Examples

tag_bicycle = client.create_tag("bicycle", Target.DATA)
tag_car = client.create_tag("car", Target.DATA)
tag_truck = client.create_tag("truck", Target.DATA)

data.add_tags(tag_bicycle)
data.add_tags([tag_car, tag_truck])

get_tags

get_tags()

Description

Retrieve the tags of your deployment.

Examples

tags = deployment.get_tags()
assert tags[0].name == "cool"

Returns

A list of Tag objects


retrieve_information

retrieve_information()

Description

Retrieve some information about this deployment from the service.

Examples

my_deployment.retrieve_information()

Returns

A dict with information about this deployment


update

update(
   name: Optional[str] = None, target_datalake: Optional[Datalake] = None,
   min_threshold: Optional[float] = None
)

Description

Update this deployment with a new name, another target datalake, or a new minimum threshold.

Examples

deployment.update(name="new name", min_threshold=0.4)

Arguments

  • name (str, optional) : New name of the deployment

  • target_datalake (Datalake, optional) : Datalake where data will be uploaded on new prediction

  • min_threshold (float, optional) : Minimum confidence threshold.
    Serving will filter detection boxes or masks that have a detection score lower than this threshold


delete

delete(
   force_delete: bool = False
)

Description

Delete this deployment.
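
Examples

A minimal sketch following the patterns used elsewhere in this reference (force_delete is left at its default value here):

deployment = client.get_deployment(name="awesome-deploy")
deployment.delete()
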
set_model

set_model(
   model_version: ModelVersion
)

Description

Set the model version to use for this deployment

Examples

model_version = client.get_model("my-model").get_version("latest")
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.set_model(model_version)

Arguments

  • model_version (ModelVersion) : a model version to use for this deployment

get_model_version

get_model_version()

Description

Retrieve the model version currently used by this deployment.

Examples

model_version = deployment.get_model_version()

Returns

A ModelVersion object


set_shadow_model

set_shadow_model(
   shadow_model_version: ModelVersion
)

Description

Set the shadow model version to use for this deployment

Examples

shadow_model_version = client.get_model("my-model").get_version("latest")
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.set_shadow_model(shadow_model_version)

Arguments

  • shadow_model_version (ModelVersion) : a model version to use as the shadow model for this deployment

get_shadow_model

get_shadow_model()

Description

Retrieve the shadow model version currently used by this deployment.

Examples

shadow_model = deployment.get_shadow_model()

Returns

A ModelVersion object


predict

predict(
   file_path: Union[str, Path], tags: Union[str, Tag, list[Union[Tag, str]],
   None] = None, source: Union[str, DataSource, None] = None,
   metadata: Optional[dict] = None, monitor: bool = True
)

Description

Run a prediction on our Serving platform

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.predict('image_420.png', tags=["gonna", "give"], source="camera-1")

Arguments

  • file_path (str or Path) : path to the image to predict.

  • tags (str, Tag, list of str or Tag, optional) : a list of tags to add to the data that will be created on the platform.

  • source (str or DataSource, optional) : a source to attach to the data that will be created on the platform.

  • metadata (dict, optional) : metadata to attach to the data that will be created on the platform.

  • monitor (bool, optional) : if True, the prediction will be sent to Picsellia and our monitoring service. Defaults to True.

Returns

A dict with information about the prediction


predict_bytes

predict_bytes(
   filename: str, raw_image: bytes, tags: Union[str, Tag, list[Union[Tag, str]],
   None] = None, source: Union[str, DataSource, None] = None,
   metadata: Optional[dict] = None, monitor: bool = True
)

Description

Run a prediction on our Serving platform with the bytes of an image

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
filename = "frame.png"
with open(filename, 'rb') as img:
    img_bytes = img.read()
deployment.predict_bytes(filename, img_bytes, tags=["tag1", "tag2"], source="camera-1")

Arguments

  • filename (str) : filename of the image.

  • raw_image (bytes) : bytes of the image to predict.

  • tags (str, Tag, list of str or Tag, optional) : a list of tags to add to the data that will be created on the platform.

  • source (str or DataSource, optional) : a source to attach to the data that will be created on the platform.

  • metadata (dict, optional) : metadata to attach to the data that will be created on the platform.

  • monitor (bool, optional) : if True, the prediction will be sent to Picsellia and our monitoring service. Defaults to True.

Returns

A dict with information about the prediction


predict_cloud_image

predict_cloud_image(
   object_name: str, tags: Union[str, Tag, list[Union[Tag, str]], None] = None,
   source: Union[str, DataSource, None] = None, metadata: Optional[dict] = None,
   monitor: bool = True
)

Description

Run a prediction on our Serving platform, using the object_name of a cloud object stored in your object storage.
Your image MUST be stored in the storage used by the datalake linked to this deployment (the target datalake).
If your image is already a data in your target datalake, it MUST NOT have been processed by this deployment;
in that case, the given source and metadata won't be used.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
object_name = "directory/s3/object-name.jpeg"
deployment.predict_cloud_image(object_name, tags=["tag1", "tag2"], source="camera-1")

Arguments

  • object_name (str) : object name of the cloud image.

  • tags (str, Tag, list of str or Tag, optional) : a list of tags to add to the data that will be created on the platform.

  • source (str or DataSource, optional) : a source to attach to the data that will be created on the platform.

  • metadata (dict, optional) : metadata to attach to the data that will be created on the platform.

  • monitor (bool, optional) : if True, the prediction will be sent to Picsellia and our monitoring service. Defaults to True.

Returns

A dict with information about the prediction


predict_data

predict_data(
   data: Data, tags: Union[str, Tag, list[Union[Tag, str]], None] = None,
   monitor: bool = True
)

Description

Run a prediction on our Serving platform, using data.
Your data must already be stored in the datalake used by the deployment (the target datalake).
If there already is a prediction in this deployment linked to this data, it will be dismissed.

Specified tags will be added to the ones already existing on the Data.

Examples

datalake = client.get_datalake(name="target-datalake")
data = datalake.list_data(limit=1)[0]
deployment = client.get_deployment(name="awesome-deploy")
deployment.predict_data(data, tags=["tag1", "tag2"], monitor=False)

Arguments

  • data (Data) : object that you want to predict on.

  • tags (str, Tag, list of str or Tag, optional) : a list of tags to add to the data

  • monitor (bool, optional) : if True, the prediction will be sent to Picsellia and our monitoring service. Defaults to True.

Returns

A dict with information about the prediction


predict_shadow

predict_shadow(
   predicted_asset: PredictedAsset, monitor: bool = True
)

Description

Add a shadow prediction on a predicted asset.
It will call our Serving platform and return the predictions made by the shadow model.
If monitor is True, the prediction will be sent to our monitoring service and then added on the platform.

Examples

deployment = client.get_deployment(name="awesome-deploy")
predicted_asset = deployment.list_predicted_assets(limit=1)[0]
deployment.predict_shadow(predicted_asset, monitor=False)

Arguments

  • predicted_asset (PredictedAsset) : the shadow model will predict on this asset.

  • monitor (bool, optional) : if True, the prediction will be sent to Picsellia and our monitoring service. Defaults to True.

Returns

A dict with the prediction shapes


setup_feedback_loop

setup_feedback_loop(
   dataset_version: Optional[DatasetVersion] = None
)

Description

Set up the Feedback Loop for a Deployment.
You can specify one Dataset Version to attach to it, or use
attach_dataset_version_to_feedback_loop() afterward so you can add multiple ones.
This is a great option to increase your training set with quality data.

Examples

dataset_version = client.get_dataset("my-dataset").get_version("latest")
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_feedback_loop(dataset_version)

Arguments

  • dataset_version (DatasetVersion, optional) : This parameter is deprecated. Use attach_dataset_version_to_feedback_loop() instead.

attach_dataset_version_to_feedback_loop

attach_dataset_version_to_feedback_loop(
   dataset_version: DatasetVersion
)

Description

Attach a Dataset Version to a previously configured feedback-loop.

Examples

dataset_versions = client.get_dataset("my-dataset").list_versions()
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_feedback_loop()
for dataset_version in dataset_versions:
    deployment.attach_dataset_version_to_feedback_loop(dataset_version)

Arguments

  • dataset_version (DatasetVersion) : a dataset version to attach to the feedback loop

detach_dataset_version_from_feedback_loop

detach_dataset_version_from_feedback_loop(
   dataset_version: DatasetVersion
)

Description

Detach a Dataset Version from a previously configured feedback-loop.

Examples

dataset_versions = client.get_dataset("my-dataset").list_versions()
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_feedback_loop()
for dataset_version in dataset_versions:
    deployment.attach_dataset_version_to_feedback_loop(dataset_version)
deployment.detach_dataset_version_from_feedback_loop(dataset_versions[0])

Arguments

  • dataset_version (DatasetVersion) : a dataset version to detach from the feedback loop

list_feedback_loop_datasets

list_feedback_loop_datasets()

Description

List the Dataset Versions attached to the feedback-loop

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
dataset_versions = deployment.list_feedback_loop_datasets()

Returns

A list of DatasetVersion


check_feedback_loop_status

check_feedback_loop_status()

Description

Refresh feedback loop status of this deployment.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.check_feedback_loop_status()

disable_feedback_loop

disable_feedback_loop()

Description

Disable the Feedback Loop for a Deployment.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.disable_feedback_loop()

toggle_feedback_loop

toggle_feedback_loop(
   active: bool
)

Description

Toggle feedback loop for this deployment

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.toggle_feedback_loop(
    True
)

Arguments

  • active (bool) : (de)activate the feedback loop

set_training_data

set_training_data(
   dataset_version: DatasetVersion
)

Description

This will give the training data reference to the deployment,
so we can compute metrics based on this training data distribution in our Monitoring service

Examples

dataset_version = client.get_dataset("my-dataset").get_version("latest")
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.set_training_data(dataset_version)

Arguments

  • dataset_version (DatasetVersion) : a dataset version to use as the training data reference

check_training_data_metrics_status

check_training_data_metrics_status()

Description

Refresh the status of the metrics computed over the training data distribution.
Setup can take some time, so you can check the current state with this method.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.check_training_data_metrics_status()

Returns

A string with the status of the metrics computed over the training data distribution


disable_training_data_reference

disable_training_data_reference()

Description

Disable the reference to the training data in this Deployment.
This means that you will not be able to see supervised metrics from the dashboard anymore.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.disable_training_data_reference()

setup_continuous_training

setup_continuous_training(
   project: Project, dataset_version: Optional[DatasetVersion] = None,
   model_version: Optional[ModelVersion] = None, trigger: Union[str,
   ContinuousTrainingTrigger] = None, threshold: Optional[int] = None,
   experiment_parameters: Optional[dict] = None, scan_config: Optional[dict] = None
)

Description

Initialize and activate the continuous training features of Picsellia. 🥑
A training will be triggered, using the configured Dataset and Model
as a base, whenever your Deployment pipeline hits the trigger.

You can launch a continuous training via an Experiment by giving experiment_parameters.

You can call the attach_dataset_version_to_continuous_training() method afterward.

Examples

We want to set up a continuous training pipeline that will be triggered
every 150 new predictions reviewed by your team.
We will use the same training parameters as those used when building the first model.

deployment = client.get_deployment("awesome-deploy")
project = client.get_project(name="my-project")
dataset_version = project.get_dataset(name="my-dataset").get_version("latest")
model_version = client.get_model(name="my-model").get_version(0)
experiment = model_version.get_source_experiment()
experiment_parameters = experiment.get_log('parameters')
deployment.setup_continuous_training(
    project, dataset_version,
    threshold=150, experiment_parameters=experiment_parameters
)

Arguments

  • project (Project) : The project that will host your pipeline.

  • dataset_version (Optional[DatasetVersion], optional) : The Dataset Version that will be used as training data for your training.

  • model_version (ModelVersion, deprecated) : This parameter is deprecated and is not used anymore.

  • threshold (int) : Number of images that need to be reviewed to trigger the training.

  • trigger (ContinuousTrainingTrigger) : Type of trigger to use when there are enough reviews.

  • experiment_parameters (Optional[dict], optional) : Training parameters. Defaults to None.


attach_dataset_version_to_continuous_training

attach_dataset_version_to_continuous_training(
   alias: str, dataset_version: DatasetVersion
)

Description

Attach a Dataset Version to a previously configured continuous training.

Examples

dataset_versions = client.get_dataset("my-dataset").list_versions()
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_continuous_training(...)
aliases = ["train", "test", "eval"]
for i, dataset_version in enumerate(dataset_versions):
    deployment.attach_dataset_version_to_continuous_training(aliases[i], dataset_version)

Arguments

  • alias (str) : Alias of the attached dataset

  • dataset_version (DatasetVersion) : A dataset version to attach to the Continuous Training.


detach_dataset_version_from_continuous_training

detach_dataset_version_from_continuous_training(
   dataset_version: DatasetVersion
)

Description

Detach a Dataset Version from a previously configured continuous training.

Examples

dataset_versions = client.get_dataset("my-dataset").list_versions()
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_continuous_training(...)
aliases = ["train", "test", "eval"]
for i, dataset_version in enumerate(dataset_versions):
    deployment.attach_dataset_version_to_continuous_training(aliases[i], dataset_version)
deployment.detach_dataset_version_from_continuous_training(dataset_versions[0])

Arguments

  • dataset_version (DatasetVersion) : A dataset version to detach from the Continuous Training.

toggle_continuous_training

toggle_continuous_training(
   active: bool
)

Description

Toggle continuous training for this deployment

Examples

deployment = client.get_deployment("awesome-deploy")
deployment.toggle_continuous_training(active=False)

Arguments

  • active (bool) : (de)activate continuous training

setup_continuous_deployment

setup_continuous_deployment(
   policy: Union[ContinuousDeploymentPolicy, str]
)

Description

Set up the continuous deployment for this pipeline

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_continuous_deployment(ContinuousDeploymentPolicy.DEPLOY_MANUAL)

Arguments

  • policy (ContinuousDeploymentPolicy) : policy to use

toggle_continuous_deployment

toggle_continuous_deployment(
   active: bool
)

Description

Toggle continuous deployment for this deployment

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.toggle_continuous_deployment(
    active=True
)

Arguments

  • active (bool) : (de)activate continuous deployment

get_stats

get_stats(
   service: ServiceMetrics, model_version: Optional[ModelVersion] = None,
   from_timestamp: Optional[float] = None, to_timestamp: Optional[float] = None,
   since: Optional[int] = None, includes: Optional[list[str]] = None,
   excludes: Optional[list[str]] = None, tags: Optional[list[str]] = None
)

Description

Retrieve stats of this deployment stored in Picsellia environment.

The mandatory parameter is service, an enum of type ServiceMetrics. Possible values are:
PREDICTIONS_OUTLYING_SCORE
PREDICTIONS_DATA
REVIEWS_OBJECT_DETECTION_STATS
REVIEWS_CLASSIFICATION_STATS
REVIEWS_LABEL_DISTRIBUTION_STATS

AGGREGATED_LABEL_DISTRIBUTION
AGGREGATED_OBJECT_DETECTION_STATS
AGGREGATED_PREDICTIONS_DATA
AGGREGATED_DRIFTING_PREDICTIONS

For aggregated metrics, the computation may not have been done in the past.
You will need to force the computation of these aggregations and then retrieve them again.

Examples

my_deployment.get_stats(ServiceMetrics.PREDICTIONS_DATA)
my_deployment.get_stats(ServiceMetrics.AGGREGATED_DRIFTING_PREDICTIONS, since=3600)
my_deployment.get_stats(ServiceMetrics.AGGREGATED_LABEL_DISTRIBUTION, model_version=my_model)

Arguments

  • service (ServiceMetrics) : the service metrics to query

  • model_version (ModelVersion, optional) : Model that shall be used when retrieving data.
    Defaults to None.

  • from_timestamp (float, optional) : System will only retrieve prediction data after this timestamp.
    Defaults to None.

  • to_timestamp (float, optional) : System will only retrieve prediction data before this timestamp.
    Defaults to None.

  • since (int, optional) : System will only retrieve prediction data that are in the last seconds.
    Defaults to None.

  • includes (list[str], optional) : The query will only include these ids and exclude all others.
    Defaults to None.

  • excludes (list[str], optional) : The query will exclude these ids.
    Defaults to None.

  • tags (list[str], optional) : The query will filter results by these tags.
    Defaults to None.

Returns

A dict with the queried statistics for the requested service


monitor

monitor(
   image_path: Union[str, Path], latency: float, height: int, width: int,
   prediction: PredictionFormat, source: Optional[str] = None,
   tags: Optional[list[str]] = None, timestamp: Optional[float] = None,
   model_version: Optional[ModelVersion] = None,
   shadow_model_version: Optional[ModelVersion] = None,
   shadow_latency: Optional[float] = None,
   shadow_raw_predictions: Optional[PredictionFormat] = None,
   shadow_prediction: Optional[PredictionFormat] = None,
   content_type: Optional[Union[SupportedContentType, str]] = None,
   metadata: Optional[dict] = None
)

Description

Send a prediction for this deployment on our monitoring service.

⚠️ The signature of this method has recently changed and can break existing calls:

  • model_version and shadow_model_version are not used anymore: the system will use what is currently being monitored in this deployment
  • shadow_raw_predictions has been renamed to shadow_prediction
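
Examples

A minimal sketch, assuming a classification deployment; the ClassificationPredictionFormat fields follow the prediction description below, and the import path is an assumption that may differ between picsellia versions:

# Assumed import path for the prediction format classes described below
from picsellia.types.schemas_prediction import ClassificationPredictionFormat

deployment = client.get_deployment(name="awesome-deploy")
# Fields follow the PredictionFormat description in the Arguments below
prediction = ClassificationPredictionFormat(
    detection_classes=[0],
    detection_scores=[0.98],
)
deployment.monitor(
    image_path="image_420.png",
    latency=0.12,
    height=720,
    width=1280,
    prediction=prediction,
    source="camera-1",
    tags=["monitoring"],
)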

Arguments

  • image_path (str or Path) : image path

  • latency (float) : latency used by model to compute your prediction

  • height (int) : height of image

  • width (int) : width of image

  • prediction (PredictionFormat) : data of your prediction; it can be a Classification, a Segmentation or an ObjectDetection format.
    DetectionPredictionFormat, ClassificationPredictionFormat and SegmentationPredictionFormat:
      detection_classes (list[int]) : list of classes
      detection_scores (list[float]) : list of scores of predictions
    DetectionPredictionFormat and SegmentationPredictionFormat:
      detection_boxes (list[list[int]]) : list of bounding boxes representing the rectangles of your shapes, each formatted as [top, left, bottom, right]
    SegmentationPredictionFormat:
      detection_masks (list[list[int]]) : list of polygons of your shapes; each polygon is a list of flattened point coordinates [x1, y1, x2, y2, x3, y3, x4, y4, ..]

  • source (str, optional) : Data will have this source in Picsellia. Defaults to None.

  • tags (list of str, optional) : tags that can give some metadata to your prediction. Defaults to None.

  • timestamp (float, optional) : timestamp of your prediction. Defaults to timestamp of monitoring service on reception.

  • shadow_latency (float, optional) : latency used by shadow model to compute prediction

  • shadow_prediction (PredictionFormat, optional) : data of your prediction made by shadow model.

  • content_type (str, optional) : if given, we won't try to infer the content type with the mimetype library

  • metadata (dict, optional) : Data will have this metadata in Picsellia. Defaults to None.

Returns

a dict of data returned by our monitoring service


monitor_bytes

monitor_bytes(
   raw_image: bytes, content_type: Union[SupportedContentType, str], filename: str,
   latency: float, height: int, width: int, prediction: PredictionFormat,
   source: Optional[str] = None, tags: Optional[list[str]] = None,
   timestamp: Optional[float] = None, shadow_latency: Optional[float] = None,
   shadow_prediction: Optional[PredictionFormat] = None,
   metadata: Optional[dict] = None
)

Description

Send a prediction for this deployment on our monitoring service.
You can use this method instead of monitor() if you have the bytes of an image rather than an image file.
We will convert it into a base64 utf-8 string and send it to the monitoring service.
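
Examples

A minimal sketch, reusing a prediction built as in the monitor() example above (deployment and prediction are assumed to already exist):

filename = "frame.png"
with open(filename, 'rb') as img:
    img_bytes = img.read()
deployment.monitor_bytes(
    raw_image=img_bytes,
    content_type="image/png",
    filename=filename,
    latency=0.12,
    height=720,
    width=1280,
    prediction=prediction,
)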

Arguments

  • raw_image (bytes) : raw image in bytes

  • content_type (Union[SupportedContentType, str]) : content type of image, only 'image/jpeg' or 'image/png' currently supported

  • filename (str) : filename of image

  • latency (float) : latency used by model to compute your prediction

  • height (int) : height of image

  • width (int) : width of image

  • prediction (PredictionFormat) : data of your prediction; it can be a Classification, a Segmentation or an ObjectDetection format.
    DetectionPredictionFormat, ClassificationPredictionFormat and SegmentationPredictionFormat:
      detection_classes (list[int]) : list of classes
      detection_scores (list[float]) : list of scores of predictions
    DetectionPredictionFormat and SegmentationPredictionFormat:
      detection_boxes (list[list[int]]) : list of bounding boxes representing the rectangles of your shapes, each formatted as [top, left, bottom, right]
    SegmentationPredictionFormat:
      detection_masks (list[list[int]]) : list of polygons of your shapes; each polygon is a list of flattened point coordinates [x1, y1, x2, y2, x3, y3, x4, y4, ..]

  • source (str, optional) : source that can give some metadata to your prediction. Defaults to None.

  • tags (list of str, optional) : tags that can give some metadata to your prediction. Defaults to None.

  • timestamp (float, optional) : timestamp of your prediction. Defaults to timestamp of monitoring service on reception.

  • shadow_latency (float, optional) : latency used by shadow model to compute prediction

  • shadow_prediction (PredictionFormat, optional) : data of your prediction made by shadow model.

  • metadata (dict, optional) : Data will have this metadata in Picsellia. Defaults to None.

Returns

a dict of data returned by our monitoring service


monitor_cloud_image

monitor_cloud_image(
   object_name: str, latency: float, height: int, width: int,
   prediction: PredictionFormat, content_type: Union[SupportedContentType, str],
   source: Optional[str] = None, tags: Optional[list[str]] = None,
   timestamp: Optional[float] = None, shadow_latency: Optional[float] = None,
   shadow_prediction: Optional[PredictionFormat] = None,
   metadata: Optional[dict] = None
)

Description

Monitor an image on our monitoring platform, using object_name of a cloud object stored in your object storage.
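
Examples

A minimal sketch, reusing a prediction built as in the monitor() example above; the object_name is a placeholder for an object stored in your object storage:

object_name = "directory/s3/object-name.jpeg"
deployment.monitor_cloud_image(
    object_name=object_name,
    latency=0.12,
    height=720,
    width=1280,
    prediction=prediction,
    content_type="image/jpeg",
)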

Arguments

  • object_name (str) : object name of the cloud image.

  • latency (float) : latency used by model to compute your prediction

  • height (int) : height of image

  • width (int) : width of image

  • prediction (PredictionFormat) : data of your prediction; it can be a Classification, a Segmentation or an ObjectDetection format.
    DetectionPredictionFormat, ClassificationPredictionFormat and SegmentationPredictionFormat:
      detection_classes (list[int]) : list of classes
      detection_scores (list[float]) : list of scores of predictions
    DetectionPredictionFormat and SegmentationPredictionFormat:
      detection_boxes (list[list[int]]) : list of bounding boxes representing the rectangles of your shapes, each formatted as [top, left, bottom, right]
    SegmentationPredictionFormat:
      detection_masks (list[list[int]]) : list of polygons of your shapes; each polygon is a list of flattened point coordinates [x1, y1, x2, y2, x3, y3, x4, y4, ..]

  • source (str, optional) : Data will have this source in Picsellia. Defaults to None.

  • tags (list of str, optional) : tags that can give some metadata to your prediction. Defaults to None.

  • timestamp (float, optional) : timestamp of your prediction. Defaults to timestamp of monitoring service on reception.

  • shadow_latency (float, optional) : latency used by shadow model to compute prediction

  • shadow_prediction (PredictionFormat, optional) : data of your prediction made by shadow model.

  • content_type (str, optional) : if given, we won't try to infer the content type with the mimetype library

  • metadata (dict, optional) : Data will have this metadata in Picsellia. Defaults to None.

Returns

a dict of data returned by our monitoring service


monitor_data

monitor_data(
   data: Data, latency: float, height: int, width: int, prediction: PredictionFormat,
   tags: Optional[list[str]] = None, timestamp: Optional[float] = None,
   shadow_latency: Optional[float] = None,
   shadow_prediction: Optional[PredictionFormat] = None
)

Description

Monitor an image on our monitoring platform.
Your data must already be stored in the datalake used by the deployment (the target datalake).
If there already is a prediction in this deployment linked to this data, it will be dismissed.
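
Examples

A minimal sketch, reusing a prediction built as in the monitor() example above; the data is fetched from the target datalake as in the predict_data() example:

datalake = client.get_datalake(name="target-datalake")
data = datalake.list_data(limit=1)[0]
deployment.monitor_data(
    data,
    latency=0.12,
    height=720,
    width=1280,
    prediction=prediction,
)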

Arguments

  • data (Data) : data to monitor

  • latency (float) : latency used by model to compute your prediction

  • height (int) : height of image

  • width (int) : width of image

  • prediction (PredictionFormat) : data of your prediction; it can be a Classification, a Segmentation or an ObjectDetection format.
    DetectionPredictionFormat, ClassificationPredictionFormat and SegmentationPredictionFormat:
      detection_classes (list[int]) : list of classes
      detection_scores (list[float]) : list of scores of predictions
    DetectionPredictionFormat and SegmentationPredictionFormat:
      detection_boxes (list[list[int]]) : list of bounding boxes representing the rectangles of your shapes, each formatted as [top, left, bottom, right]
    SegmentationPredictionFormat:
      detection_masks (list[list[int]]) : list of polygons of your shapes; each polygon is a list of flattened point coordinates [x1, y1, x2, y2, x3, y3, x4, y4, ..]

  • tags (list of str, optional) : tags that can give some metadata to your prediction. Defaults to None.

  • timestamp (float, optional) : timestamp of your prediction. Defaults to timestamp of monitoring service on reception.

  • shadow_latency (float, optional) : latency used by shadow model to compute prediction

  • shadow_prediction (PredictionFormat, optional) : data of your prediction made by shadow model.

Returns

a dict of data returned by our monitoring service


monitor_shadow

monitor_shadow(
   predicted_asset: PredictedAsset, shadow_latency: float,
   shadow_prediction: PredictionFormat
)

Description

Add a shadow prediction on an existing PredictedAsset.
You can call monitor_shadow_from_oracle_prediction_id() if you only have the oracle_prediction_id.
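
Examples

A minimal sketch, assuming shadow_prediction is a PredictionFormat built like the prediction in the monitor() example above:

predicted_asset = deployment.list_predicted_assets(limit=1)[0]
deployment.monitor_shadow(
    predicted_asset,
    shadow_latency=0.05,
    shadow_prediction=shadow_prediction,
)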

Arguments

  • predicted_asset (PredictedAsset) : an already processed asset on which to add the shadow_prediction

  • shadow_latency (float) : latency used by shadow model to compute prediction

  • shadow_prediction (PredictionFormat) : data of your prediction made by shadow model


monitor_shadow_from_oracle_prediction_id

monitor_shadow_from_oracle_prediction_id(
   oracle_prediction_id: Union[str, UUID], shadow_latency: float,
   shadow_prediction: PredictionFormat
)

Description

Add a shadow prediction on an existing PredictedAsset, from the oracle_prediction_id
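
Examples

A minimal sketch, assuming prediction and shadow_prediction are PredictionFormat objects built like in the monitor() example above; the "id" key of the monitor() response is used as the oracle_prediction_id, as in the find_predicted_asset() example below:

response = deployment.monitor("image_420.png", 0.12, 720, 1280, prediction)
deployment.monitor_shadow_from_oracle_prediction_id(
    response["id"],
    shadow_latency=0.05,
    shadow_prediction=shadow_prediction,
)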

Arguments

  • oracle_prediction_id (str or UUID) : oracle_prediction_id that was returned on monitor()

  • shadow_latency (float) : latency used by shadow model to compute prediction

  • shadow_prediction (PredictionFormat) : data of your prediction made by shadow model


find_predicted_asset

find_predicted_asset(
   id: Union[str, UUID, None] = None, oracle_prediction_id: Union[str, UUID,
   None] = None, object_name: Optional[str] = None, filename: Optional[str] = None,
   data_id: Union[str, UUID, None] = None
)

Description

Find a PredictedAsset of this deployment.

Examples

oracle_prediction_id = deployment.monitor(path, latency, height, width, prediction_data)["id"]
predicted_asset = deployment.find_predicted_asset(oracle_prediction_id=oracle_prediction_id)
deployment.monitor_shadow(predicted_asset, shadow_latency, shadow_prediction_data)

Arguments

  • id (UUID, optional) : id of PredictedAsset to fetch. Defaults to None.

  • oracle_prediction_id (UUID, optional) : id of the prediction in our monitoring system. Defaults to None.

  • filename (str, optional) : filename of the data. Defaults to None.

  • object_name (str, optional) : object_name of the data. Defaults to None.

  • data_id (UUID, optional) : id of the data related to this PredictedAsset. Defaults to None.

Raises

If no asset matches the query, a NotFoundError will be raised.
In some cases an InvalidQueryError can be raised,
for instance when the platform stores 2 assets matching this query (for example if a filename is duplicated).

Returns

The PredictedAsset found


list_predicted_assets

list_predicted_assets(
   limit: Optional[int] = None, offset: Optional[int] = None,
   page_size: Optional[int] = None, order_by: Optional[list[str]] = None
)

Description