Deployment

Properties


Methods

add_tags

add_tags(
   tags: Union[Tag, List[Tag]]
)

Description

Add tags to an object.
It can be used on Data/MultiData/Asset/MultiAsset/DatasetVersion/Dataset/Model/ModelVersion.

You can give a single Tag or a list of Tags.

Examples

tag_bicycle = client.create_tag("bicycle", Target.DATA)
tag_car = client.create_tag("car", Target.DATA)
tag_truck = client.create_tag("truck", Target.DATA)

data.add_tags(tag_bicycle)
data.add_tags([tag_car, tag_truck])

get_tags

get_tags()

Description

Retrieve the tags of your deployment.

Examples

tags = deployment.get_tags()
assert tags[0].name == "cool"

Returns

A list of Tag objects


retrieve_information

retrieve_information()

Description

Retrieve information about this deployment from the service.

Examples

my_deployment.retrieve_information()

Returns

A dict with information about this deployment


update

update(
   name: Optional[str] = None, target_datalake: Optional[Datalake] = None,
   min_threshold: Optional[float] = None
)

Description

Update this deployment with a new name, another target datalake, or a minimum threshold.

Examples

deployment.update(name="new name", min_threshold=0.4)

Arguments

  • name (str, optional) : New name of the deployment

  • target_datalake (Datalake, optional) : Datalake where data will be uploaded on new prediction

  • min_threshold (float, optional) : Minimum confidence threshold.
    Serving will filter detection boxes or masks that have a detection score lower than this threshold
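
As a rough illustration of what that filtering means, here is a client-side sketch (not the serving implementation; `filter_by_threshold` is a hypothetical helper):

```python
# Hypothetical sketch of the serving-side min_threshold filtering:
# detections whose score is below the threshold are dropped.

def filter_by_threshold(scores, boxes, min_threshold):
    """Keep only the detections whose score reaches min_threshold."""
    kept = [(s, b) for s, b in zip(scores, boxes) if s >= min_threshold]
    return [s for s, _ in kept], [b for _, b in kept]

scores = [0.9, 0.3, 0.55]
boxes = [[0, 0, 10, 10], [5, 5, 20, 20], [1, 1, 2, 2]]
kept_scores, kept_boxes = filter_by_threshold(scores, boxes, 0.4)
# kept_scores == [0.9, 0.55]; the 0.3 detection is filtered out
```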


delete

delete(
   force_delete: bool = False
)

Description

Delete this deployment.

set_model

set_model(
   model_version: ModelVersion
)

Description

Set the model version to use for this deployment

Examples

model_version = client.get_model("my-model").get_version("latest")
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.set_model(model_version)

Arguments

  • model_version (ModelVersion) : The model version to use for this deployment.

get_model_version

get_model_version()

Description

Retrieve the currently used model version

Examples

model_version = deployment.get_model_version()

Returns

A ModelVersion object


set_shadow_model

set_shadow_model(
   shadow_model_version: ModelVersion
)

Description

Set the shadow model version to use for this deployment

Examples

shadow_model_version = client.get_model("my-model").get_version("latest")
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.set_shadow_model(shadow_model_version)

Arguments

  • shadow_model_version (ModelVersion) : The model version to use as the shadow model for this deployment.

get_shadow_model

get_shadow_model()

Description

Retrieve the currently used shadow model version

Examples

shadow_model = deployment.get_shadow_model()

Returns

A ModelVersion object


predict

predict(
   file_path: Union[str, Path], tags: Union[str, Tag, List[Union[Tag, str]],
   None] = None, source: Union[str, DataSource, None] = None
)

Description

Run a prediction on our Serving platform

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.predict('image_420.png', tags=["gonna", "give"], source="camera-1")

Arguments

  • file_path (str or Path) : path to the image to predict.

  • tags (str, Tag, list of str or Tag, optional) : a tag or list of tags to add to the data that will be created on the platform.

  • source (str or DataSource, optional) : a source to attach to the data that will be created on the platform.

Returns

A dict with information about the prediction


predict_bytes

predict_bytes(
   filename: str, raw_image: bytes, tags: Union[str, Tag, List[Union[Tag, str]],
   None] = None, source: Union[str, DataSource, None] = None
)

Description

Run a prediction on our Serving platform with bytes of an image

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
filename = "frame.png"
with open(filename, 'rb') as img:
    img_bytes = img.read()
deployment.predict_bytes(filename, img_bytes, tags=["tag1", "tag2"], source="camera-1")

Arguments

  • filename (str) : filename of the image.

  • raw_image (bytes) : bytes of the image to predict.

  • tags (str, Tag, list of str or Tag, optional) : a tag or list of tags to add to the data that will be created on the platform.

  • source (str or DataSource, optional) : a source to attach to the data that will be created on the platform.

Returns

A dict with information about the prediction


setup_feedback_loop

setup_feedback_loop(
   dataset_version: Optional[DatasetVersion] = None
)

Description

Set up the Feedback Loop for a Deployment.
You can specify one Dataset Version to attach to it, or call
attach_dataset_version_to_feedback_loop() afterward to add multiple ones.
This is a great option to grow your training set with quality data.

Examples

dataset_version = client.get_dataset("my-dataset").get_version("latest")
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_feedback_loop(dataset_version)

Arguments

  • dataset_version (DatasetVersion, optional) : This parameter is deprecated. Use attach_dataset_version_to_feedback_loop() instead.

attach_dataset_version_to_feedback_loop

attach_dataset_version_to_feedback_loop(
   dataset_version: DatasetVersion
)

Description

Attach a Dataset Version to a previously configured feedback-loop.

Examples

dataset_versions = client.get_dataset("my-dataset").list_versions()
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_feedback_loop()
for dataset_version in dataset_versions:
    deployment.attach_dataset_version_to_feedback_loop(dataset_version)

Arguments

  • dataset_version (DatasetVersion) : A dataset version to attach to the feedback loop.

detach_dataset_version_from_feedback_loop

detach_dataset_version_from_feedback_loop(
   dataset_version: DatasetVersion
)

Description

Detach a Dataset Version from a previously configured feedback-loop.

Examples

dataset_versions = client.get_dataset("my-dataset").list_versions()
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_feedback_loop()
for dataset_version in dataset_versions:
    deployment.attach_dataset_version_to_feedback_loop(dataset_version)
deployment.detach_dataset_version_from_feedback_loop(dataset_versions[0])

Arguments

  • dataset_version (DatasetVersion) : The dataset version to detach from the feedback loop.

list_feedback_loop_datasets

list_feedback_loop_datasets()

Description

List the Dataset Versions attached to the feedback-loop

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
dataset_versions = deployment.list_feedback_loop_datasets()

Returns

A list of DatasetVersion


check_feedback_loop_status

check_feedback_loop_status()

Description

Refresh feedback loop status of this deployment.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.check_feedback_loop_status()

disable_feedback_loop

disable_feedback_loop()

Description

Disable the Feedback Loop for a Deployment.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.disable_feedback_loop()

toggle_feedback_loop

toggle_feedback_loop(
   active: bool
)

Description

Toggle feedback loop for this deployment

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.toggle_feedback_loop(
    True
)

Arguments

  • active (bool) : True to activate the feedback loop, False to deactivate it

set_training_data

set_training_data(
   dataset_version: DatasetVersion
)

Description

Give the deployment a reference to its training data,
so we can compute metrics based on this training data distribution in our Monitoring service.

Examples

dataset_version = client.get_dataset("my-dataset").get_version("latest")
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.set_training_data(dataset_version)

Arguments

  • dataset_version (DatasetVersion) : The dataset version used as training data.

check_training_data_metrics_status

check_training_data_metrics_status()

Description

Refresh the status of the metrics computed over the training data distribution.
Setup can take some time, so you can check the current state with this method.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.check_training_data_metrics_status()

Returns

A string with the status of the metrics computed over the training data distribution


disable_training_data_reference

disable_training_data_reference()

Description

Disable the reference to the training data in this Deployment.
This means that you will not be able to see supervised metrics from the dashboard anymore.

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.disable_training_data_reference()

setup_continuous_training

setup_continuous_training(
   project: Project, dataset_version: Optional[DatasetVersion] = None,
   model_version: Optional[ModelVersion] = None, trigger: Union[str,
   ContinuousTrainingTrigger] = None, threshold: Optional[int] = None,
   experiment_parameters: Optional[dict] = None, scan_config: Optional[dict] = None
)

Description

Initialize and activate the continuous training features of picsellia. πŸ₯‘
A training will be triggered, using the configured Dataset and Model as a base,
whenever your Deployment pipeline hits the trigger.

There are two different types of continuous training:
you can launch a continuous training via a Scan configuration or via an Experiment.
You must give either experiment_parameters or scan_config, but not both.

You can call the attach_dataset_version_to_continuous_training() method afterward.

Examples

We want to set up a continuous training pipeline that will be triggered
every 150 new predictions reviewed by your team.
We will use the same training parameters as those used when building the first model.

deployment = client.get_deployment("awesome-deploy")
project = client.get_project(name="my-project")
dataset_version = project.get_dataset(name="my-dataset").get_version("latest")
model_version = client.get_model(name="my-model").get_version(0)
experiment = model_version.get_source_experiment()
experiment_parameters = experiment.get_log('parameters')
deployment.setup_continuous_training(
    project, dataset_version,
    threshold=150, experiment_parameters=experiment_parameters
)

Arguments

  • project (Project) : The project that will host your pipeline.

  • dataset_version (Optional[DatasetVersion], optional) : The Dataset Version that will be used as training data for your training.

  • model_version (ModelVersion, deprecated) : This parameter is deprecated and is not used anymore.

  • threshold (int) : Number of images that need to be reviewed to trigger the training.

  • trigger (ContinuousTrainingTrigger) : Type of trigger to use when there are enough reviews.

  • experiment_parameters (Optional[dict], optional) : Training parameters. Defaults to None.

  • scan_config (Optional[dict], optional) : Scan configuration dict. Defaults to None.


attach_dataset_version_to_continuous_training

attach_dataset_version_to_continuous_training(
   alias: str, dataset_version: DatasetVersion
)

Description

Attach a Dataset Version to a previously configured continuous training.

Examples

dataset_versions = client.get_dataset("my-dataset").list_versions()
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_continuous_training(...)
aliases = ["train", "test", "eval"]
for i, dataset_version in enumerate(dataset_versions):
    deployment.attach_dataset_version_to_continuous_training(aliases[i], dataset_version)

Arguments

  • alias (str) : Alias of attached dataset

  • dataset_version (DatasetVersion) : A dataset version to attach to the Continuous Training.


detach_dataset_version_from_continuous_training

detach_dataset_version_from_continuous_training(
   dataset_version: DatasetVersion
)

Description

Detach a Dataset Version from a previously configured continuous training.

Examples

dataset_versions = client.get_dataset("my-dataset").list_versions()
deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_continuous_training(...)
aliases = ["train", "test", "eval"]
for i, dataset_version in enumerate(dataset_versions):
    deployment.attach_dataset_version_to_continuous_training(aliases[i], dataset_version)
deployment.detach_dataset_version_from_continuous_training(dataset_versions[0])

Arguments

  • dataset_version (DatasetVersion) : The dataset version to detach from the Continuous Training.

toggle_continuous_training

toggle_continuous_training(
   active: bool
)

Description

Toggle continuous training for this deployment

Examples

deployment = client.get_deployment("awesome-deploy")
deployment.toggle_continuous_training(active=False)

Arguments

  • active (bool) : True to activate continuous training, False to deactivate it

setup_continuous_deployment

setup_continuous_deployment(
   policy: Union[ContinuousDeploymentPolicy, str]
)

Description

Set up the continuous deployment for this pipeline

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.setup_continuous_deployment(ContinuousDeploymentPolicy.DEPLOY_MANUAL)

Arguments

  • policy (ContinuousDeploymentPolicy) : policy to use

toggle_continuous_deployment

toggle_continuous_deployment(
   active: bool
)

Description

Toggle continuous deployment for this deployment

Examples

deployment = client.get_deployment(
    name="awesome-deploy"
)
deployment.toggle_continuous_deployment(
    True
)

Arguments

  • active (bool) : True to activate continuous deployment, False to deactivate it

get_stats

get_stats(
   service: ServiceMetrics, model_version: Optional[ModelVersion] = None,
   from_timestamp: Optional[float] = None, to_timestamp: Optional[float] = None,
   since: Optional[int] = None, includes: Optional[List[str]] = None,
   excludes: Optional[List[str]] = None, tags: Optional[List[str]] = None
)

Description

Retrieve stats of this deployment stored in Picsellia environment.

The mandatory parameter "service" is an enum of type ServiceMetrics. Possible values are:
PREDICTIONS_OUTLYING_SCORE
PREDICTIONS_DATA
REVIEWS_OBJECT_DETECTION_STATS
REVIEWS_CLASSIFICATION_STATS
REVIEWS_LABEL_DISTRIBUTION_STATS

AGGREGATED_LABEL_DISTRIBUTION
AGGREGATED_OBJECT_DETECTION_STATS
AGGREGATED_PREDICTIONS_DATA
AGGREGATED_DRIFTING_PREDICTIONS

For aggregations, computation may not have been done in the past.
You will need to force the computation of these aggregations and then retrieve them again.

Examples

my_deployment.get_stats(ServiceMetrics.PREDICTIONS_DATA)
my_deployment.get_stats(ServiceMetrics.AGGREGATED_DRIFTING_PREDICTIONS, since=3600)
my_deployment.get_stats(ServiceMetrics.AGGREGATED_LABEL_DISTRIBUTION, model_version=my_model)
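
If you prefer explicit time bounds over since, from_timestamp and to_timestamp can be built as plain Unix timestamps (a sketch; treating them as standard epoch seconds is an assumption here):

```python
import time

# from_timestamp / to_timestamp are floats; `since` is a number of seconds
# counted back from now. These are two ways of asking for the last 24 hours.
to_ts = time.time()
since_seconds = 24 * 3600
from_ts = to_ts - since_seconds
```

These values could then be passed as, e.g., my_deployment.get_stats(ServiceMetrics.PREDICTIONS_DATA, from_timestamp=from_ts, to_timestamp=to_ts).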

Arguments

  • service (ServiceMetrics) : service queried

  • model_version (ModelVersion, optional) : Model that shall be used when retrieving data.
    Defaults to None.

  • from_timestamp (float, optional) : System will only retrieve prediction data after this timestamp.
    Defaults to None.

  • to_timestamp (float, optional) : System will only retrieve prediction data before this timestamp.
    Defaults to None.

  • since (int, optional) : System will only retrieve prediction data that are in the last seconds.
    Defaults to None.

  • includes (List[str], optional) : The query will only include these ids, excluding the others.
    Defaults to None.

  • excludes (List[str], optional) : The query will exclude these ids.
    Defaults to None.

  • tags (List[str], optional) : The query will filter results by these tags.
    Defaults to None.

Returns

A dict with the queried statistics for the requested service


monitor

monitor(
   image_path: Union[str, Path], latency: float, height: int, width: int,
   prediction: PredictionFormat, source: Optional[str] = None,
   tags: Optional[List[str]] = None, timestamp: Optional[float] = None,
   model_version: Optional[ModelVersion] = None,
   shadow_model_version: Optional[ModelVersion] = None,
   shadow_latency: Optional[float] = None,
   shadow_raw_predictions: Optional[PredictionFormat] = None,
   shadow_prediction: Optional[PredictionFormat] = None,
   content_type: Optional[Union[SupportedContentType, str]] = None
)

Description

Send a prediction for this deployment on our monitoring service.

:warning: The signature of this method has recently changed and may break existing code:

  • model_version and shadow_model_version are not used anymore: the system will use what is currently being monitored in this deployment
  • shadow_raw_predictions has been renamed to shadow_prediction
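
The box and polygon layouts described under Arguments can be sanity-checked with plain Python (box_area and polygon_points are illustrative helpers, not SDK classes):

```python
# Illustrative helpers only: they demonstrate the geometry layouts monitor()
# expects, not the actual PredictionFormat classes.

def box_area(box):
    """Area of a [top, left, bottom, right] detection box."""
    top, left, bottom, right = box
    return max(0, bottom - top) * max(0, right - left)

def polygon_points(flat):
    """Re-pair a flattened polygon [x1, y1, x2, y2, ...] into (x, y) tuples."""
    return list(zip(flat[0::2], flat[1::2]))

detection_boxes = [[10, 20, 110, 220]]               # top=10, left=20, bottom=110, right=220
detection_masks = [[0, 0, 100, 0, 100, 50, 0, 50]]   # a rectangle as a flattened polygon
```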

Arguments

  • image_path (str or Path) : image path

  • latency (float) : latency used by model to compute your prediction

  • height (int) : height of image

  • width (int) : width of image

  • prediction (PredictionFormat) : data of your prediction; it can be a Classification, a Segmentation or an ObjectDetection format.
    DetectionPredictionFormat, ClassificationPredictionFormat and SegmentationPredictionFormat:
      detection_classes (List[int]) : list of predicted classes
      detection_scores (List[float]) : list of prediction scores
    DetectionPredictionFormat and SegmentationPredictionFormat:
      detection_boxes (List[List[int]]) : list of bboxes representing the rectangles of your shapes, formatted as [top, left, bottom, right]
    SegmentationPredictionFormat:
      detection_masks (List[List[int]]) : list of polygons of your shapes; each polygon is a list of points with flattened coordinates [x1, y1, x2, y2, x3, y3, x4, y4, ...]

  • source (str, optional) : source that can give some metadata to your prediction. Defaults to None.

  • tags (list of str, optional) : tags that can give some metadata to your prediction. Defaults to None.

  • timestamp (float, optional) : timestamp of your prediction. Defaults to timestamp of monitoring service on reception.

  • shadow_latency (float, optional) : latency used by shadow model to compute prediction

  • shadow_prediction (PredictionFormat, optional) : data of your prediction made by shadow model.

  • content_type (str, optional) : if given, we won't try to infer the content type with the mimetypes library

Returns

a dict of data returned by our monitoring service


monitor_bytes

monitor_bytes(
   raw_image: bytes, content_type: Union[SupportedContentType, str], filename: str,
   latency: float, height: int, width: int, prediction: PredictionFormat,
   source: Optional[str] = None, tags: Optional[List[str]] = None,
   timestamp: Optional[float] = None, shadow_latency: Optional[float] = None,
   shadow_prediction: Optional[PredictionFormat] = None
)

Description

Send a prediction for this deployment to our monitoring service.
You can use this method instead of monitor() if you have the image as bytes rather than a file.
We will convert it into base64 as a UTF-8 string and send it to the monitoring service.
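
That conversion can be reproduced with the standard library; this mirrors what the description says the client does (the exact request format is internal to the SDK):

```python
import base64

raw_image = b"\x89PNG\r\n\x1a\n"  # pretend these are your image bytes
encoded = base64.b64encode(raw_image).decode("utf-8")  # base64 as a UTF-8 string
decoded = base64.b64decode(encoded)  # round-trips back to the original bytes
```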

Arguments

  • raw_image (bytes) : raw image in bytes

  • content_type (Union[SupportedContentType, str]) : content type of image, only 'image/jpeg' or 'image/png' currently supported

  • filename (str) : filename of image

  • latency (float) : latency used by model to compute your prediction

  • height (int) : height of image

  • width (int) : width of image

  • prediction (PredictionFormat) : data of your prediction; it can be a Classification, a Segmentation or an ObjectDetection format.
    DetectionPredictionFormat, ClassificationPredictionFormat and SegmentationPredictionFormat:
      detection_classes (List[int]) : list of predicted classes
      detection_scores (List[float]) : list of prediction scores
    DetectionPredictionFormat and SegmentationPredictionFormat:
      detection_boxes (List[List[int]]) : list of bboxes representing the rectangles of your shapes, formatted as [top, left, bottom, right]
    SegmentationPredictionFormat:
      detection_masks (List[List[int]]) : list of polygons of your shapes; each polygon is a list of points with flattened coordinates [x1, y1, x2, y2, x3, y3, x4, y4, ...]

  • source (str, optional) : source that can give some metadata to your prediction. Defaults to None.

  • tags (list of str, optional) : tags that can give some metadata to your prediction. Defaults to None.

  • timestamp (float, optional) : timestamp of your prediction. Defaults to timestamp of monitoring service on reception.

  • shadow_latency (float, optional) : latency used by shadow model to compute prediction

  • shadow_prediction (PredictionFormat, optional) : data of your prediction made by shadow model.

Returns

a dict of data returned by our monitoring service