Deployment


get_tags

Signature

get_tags()

Description

Retrieve the tags of your deployment.

Examples

    tags = deployment.get_tags()
    assert tags[0].name == "cool"

Returns

A list of Tag objects


retrieve_information

Signature

retrieve_information()

Description

Retrieve information about this deployment from the service.

Examples

    print(my_deployment.retrieve_information())

update

Signature

update(
   name: Optional[str] = None, active: Optional[bool] = None,
   target_datalake: Optional[Datalake] = None, min_threshold: Optional[float] = None
)

Description

Update this deployment with a new name, activation status, target datalake, or minimum threshold.

Examples

    deployment.update(name="new-name")

delete

Signature

delete(
   force_delete: bool = False
)

Description

Delete this deployment.

set_model

Signature

set_model(
   model_version: ModelVersion
)

Description

Set the model version used by this deployment.

Examples

    model_version = client.get_model(name="my-model").get_version(0)
    deployment.set_model(model_version)

get_model_version

Signature

get_model_version()

Description

Retrieve the model version currently used by this deployment.

Examples

    model_version = deployment.get_model_version()

Returns

A ModelVersion object


set_shadow_model

Signature

set_shadow_model(
   shadow_model_version: ModelVersion
)

Description

Set the shadow model version used by this deployment.

Examples

    shadow_model_version = client.get_model(name="my-model").get_version(1)
    deployment.set_shadow_model(shadow_model_version)

get_shadow_model

Signature

get_shadow_model()

Description

Retrieve the shadow model version currently used by this deployment.

Examples

    shadow_model = deployment.get_shadow_model()

Returns

A ModelVersion object


predict

Signature

predict(
   file_path: str
)

Description

Run a prediction on our serving platform.

Examples

    deployment = client.get_deployment(
        name="awesome-deploy"
    )
    deployment.predict('my-image.png')

Arguments

  • file_path (str) : path to the image to predict

Returns

A dict with information about the prediction


setup_feedback_loop

Signature

setup_feedback_loop(
   dataset_version: DatasetVersion
)

Description

Set up the feedback loop for this deployment.
Once enabled, you will be able to attach reviewed predictions to the dataset version.
This is a great option to grow your training set with quality data.

Examples

    dataset = client.get_dataset(
        name="my-dataset",
        version="latest"
    )
    deployment = client.get_deployment(
        name="awesome-deploy"
    )
    deployment.setup_feedback_loop(
        dataset
    )

Arguments

  • dataset_version (DatasetVersion) : the dataset version to which reviewed predictions will be attached

toggle_feedback_loop

Signature

toggle_feedback_loop(
   active: bool
)

Description

Toggle the feedback loop for this deployment.
When active, you will be able to attach reviewed predictions to the configured dataset version.
This is a great option to grow your training set with quality data.

Examples

    deployment = client.get_deployment(
        name="awesome-deploy"
    )
    deployment.toggle_feedback_loop(
        active=True
    )

Arguments

  • active (bool) : activate or deactivate the feedback loop

setup_continuous_training

Signature

setup_continuous_training(
   project: Project, dataset_version: DatasetVersion, model_version: ModelVersion,
   trigger: Union[str, ContinuousTrainingTrigger] = None,
   threshold: Optional[int] = None, experiment_parameters: Optional[dict] = None,
   scan_config: Optional[dict] = None
)

Description

Initialize and activate the continuous training features of Picsellia. 🥑
A training will be triggered, using the configured dataset version and
model version as a base, whenever your deployment pipeline hits the trigger.

There are two types of continuous training:
you can launch a continuous training via a scan configuration or via an experiment.
Pass either experiment_parameters or scan_config, but not both.
For scan configuration, see the dedicated documentation.

Examples

We want to set up a continuous training pipeline that will be triggered
every 150 new predictions reviewed by your team.
We will use the same training parameters as those used when building the first model.

    deployment = client.get_deployment("awesome-deploy")
    project = client.get_project(name="my-project")
    dataset_version = project.get_dataset(name="my-dataset").get_version("latest")
    model_version = client.get_model(name="my-model").get_version(0)
    experiment = model_version.get_source_experiment()
    experiment_parameters = experiment.get_log('parameters')
    deployment.setup_continuous_training(
        project, dataset_version, model_version,
        threshold=150, experiment_parameters=experiment_parameters
    )

Arguments

  • project Project : The project that will host your pipeline.

  • dataset_version DatasetVersion : The Dataset that will be used as training data for your training.

  • model_version ModelVersion : The exported Model to perform transfer learning from.

  • threshold (int) : Number of images that need to be reviewed to trigger the training.

  • trigger (ContinuousTrainingTrigger) : Type of trigger to use when there are enough reviews.

  • experiment_parameters (Optional[dict], optional) : Training parameters. Defaults to None.

  • scan_config (Optional[dict], optional) : Scan configuration dict. Defaults to None.


toggle_continuous_training

Signature

toggle_continuous_training(
   active: bool
)

Description

Toggle (activate or deactivate) your continuous training pipeline.

Examples

    deployment = client.get_deployment("awesome-deploy")
    deployment.toggle_continuous_training(active=False)

setup_continuous_deployment

Signature

setup_continuous_deployment(
   policy: Union[ContinuousDeploymentPolicy, str]
)

Description

Set up the continuous deployment policy for this pipeline.

Examples

    deployment = client.get_deployment(
        name="awesome-deploy"
    )
    deployment.setup_continuous_deployment(ContinuousDeploymentPolicy.DEPLOY_MANUAL)

Arguments

  • policy (ContinuousDeploymentPolicy) : policy to use

toggle_continuous_deployment

Signature

toggle_continuous_deployment(
   active: bool
)

Description

Toggle continuous deployment for this deployment

Examples

    deployment = client.get_deployment(
        name="awesome-deploy"
    )
    deployment.toggle_continuous_deployment(
        active=True
    )

Arguments

  • active (bool) : activate or deactivate continuous deployment

get_stats

Signature

get_stats(
   service: ServiceMetrics, model_version: Optional[ModelVersion] = None,
   from_timestamp: Optional[float] = None, to_timestamp: Optional[float] = None,
   since: Optional[int] = None, includes: Optional[List[str]] = None,
   excludes: Optional[List[str]] = None, tags: Optional[List[str]] = None
)

Description

Retrieve statistics about this deployment stored in the Picsellia environment.

The mandatory parameter service is an enum of type ServiceMetrics. Possible values are:
PREDICTIONS_OUTLYING_SCORE
PREDICTIONS_DATA
REVIEWS_OBJECT_DETECTION_STATS
REVIEWS_CLASSIFICATION_STATS
REVIEWS_LABEL_DISTRIBUTION_STATS

AGGREGATED_LABEL_DISTRIBUTION
AGGREGATED_OBJECT_DETECTION_STATS
AGGREGATED_PREDICTIONS_DATA
AGGREGATED_DRIFTING_PREDICTIONS

For aggregated metrics, the computation may not have been performed yet.
In that case, you will need to force the computation of these aggregations and retrieve them again.

Examples

    my_deployment.get_stats(ServiceMetrics.PREDICTIONS_DATA)
    my_deployment.get_stats(ServiceMetrics.AGGREGATED_DRIFTING_PREDICTIONS, since=3600)
    my_deployment.get_stats(ServiceMetrics.AGGREGATED_LABEL_DISTRIBUTION, model_version=model_version)

Arguments

  • service (ServiceMetrics) : the service metric to query

  • model_version (ModelVersion, optional) : Model that shall be used when retrieving data.
    Defaults to None.

  • from_timestamp (float, optional) : System will only retrieve prediction data after this timestamp.
    Defaults to None.

  • to_timestamp (float, optional) : System will only retrieve prediction data before this timestamp.
    Defaults to None.

  • since (int, optional) : System will only retrieve prediction data from the last given number of seconds.
    Defaults to None.

  • includes (List[str], optional) : The query will include only these ids and exclude all others.
    Defaults to None.

  • excludes (List[str], optional) : The query will exclude these ids.
    Defaults to None.

  • tags (List[str], optional) : The query will filter results by these tags.
    Defaults to None.

Returns

A dict with the queried statistics for the requested service
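
The timestamp parameters take Unix-epoch values. As a sketch (assuming the API expects epoch seconds, matching the unit of the since parameter), a 24-hour query window can be built like this; the get_stats call itself is commented out since it needs a live client:

```python
import time

# Build a Unix-epoch window covering the last 24 hours.
now = time.time()
one_day = 24 * 3600
window = {"from_timestamp": now - one_day, "to_timestamp": now}

# my_deployment.get_stats(ServiceMetrics.PREDICTIONS_DATA, **window)
# (illustration only; requires a configured Picsellia client)
```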


monitor

Signature

monitor(
   image_path: str, latency: float, height: int, width: int,
   prediction: PredictionFormat, source: Optional[str] = None,
   tags: Optional[List[str]] = None, timestamp: Optional[float] = None,
   model_version: Optional[ModelVersion] = None,
   shadow_model_version: Optional[ModelVersion] = None,
   shadow_latency: Optional[float] = None,
   shadow_raw_predictions: Optional[PredictionFormat] = None
)

Description
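
The description above is truncated in this extract. Based on the signature, a hedged sketch of measuring the latency argument that monitor() presumably expects (wall-clock seconds around the inference call); the monitor() call itself is commented out since it needs a live deployment, and the stand-in inference function is purely illustrative:

```python
import time

def timed_inference(run_inference, *args):
    """Measure wall-clock latency in seconds around an inference call."""
    start = time.perf_counter()
    result = run_inference(*args)
    return result, time.perf_counter() - start

# Stand-in inference function for illustration:
result, latency = timed_inference(lambda x: x * 2, 21)

# deployment.monitor("my-image.png", latency=latency, height=480, width=640,
#                    prediction=result)  # requires a live deployment
```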