Properties
name
Name of this Deployment
Methods
add_tags
add_tags(
tags: Union[Tag, List[Tag]]
)
Description
Add one or more tags to an object.
It can be used on Data/MultiData/Asset/MultiAsset/DatasetVersion/Dataset/Model/ModelVersion.
You can pass a single Tag or a list of Tag objects.
Examples
tag_bicycle = client.create_tag("bicycle", Target.DATA)
tag_car = client.create_tag("car", Target.DATA)
tag_truck = client.create_tag("truck", Target.DATA)
data.add_tags(tag_bicycle)
data.add_tags([tag_car, tag_truck])
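The single-or-list behavior of add_tags can be sketched in plain Python. This is an illustrative model only, not the SDK's actual implementation; the Tag stand-in class and normalize_tags helper are hypothetical.

```python
from typing import List, Union

class Tag:
    """Minimal stand-in for the SDK's Tag object (illustrative only)."""
    def __init__(self, name: str):
        self.name = name

def normalize_tags(tags: Union["Tag", List["Tag"]]) -> List["Tag"]:
    # Accept either a single Tag or a list of Tag, as add_tags does
    if isinstance(tags, Tag):
        return [tags]
    return list(tags)

# A single Tag and a list of Tag both normalize to a list
assert [t.name for t in normalize_tags(Tag("bicycle"))] == ["bicycle"]
assert [t.name for t in normalize_tags([Tag("car"), Tag("truck")])] == ["car", "truck"]
```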
get_tags
get_tags()
Description
Retrieve the tags of your deployment.
Examples
tags = deployment.get_tags()
assert tags[0].name == "cool"
Returns
List of tags as Tag
retrieve_information
retrieve_information()
Description
Retrieve information about this deployment from the service.
Examples
my_deployment.retrieve_information()
update
update(
name: Optional[str] = None, target_datalake: Optional[Datalake] = None,
min_threshold: Optional[float] = None
)
Description
Update this deployment with a new name, another target datalake, or a new minimum threshold.
Examples
deployment.update(name="new name", min_threshold=0.4)
Args
-
name (str, optional) : New name of the deployment
-
target_datalake (Datalake, optional) : Datalake where data will be uploaded when new predictions are made
-
min_threshold (float, optional) : Minimum confidence threshold.
Serving will filter detection boxes or masks that have a detection score lower than this threshold
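The effect of min_threshold can be illustrated with a small sketch. This is assumed filtering behavior written in plain Python, not the actual serving code; the filter_detections helper is hypothetical.

```python
from typing import Dict, List

def filter_detections(detections: List[Dict], min_threshold: float) -> List[Dict]:
    # Keep only boxes/masks whose detection score reaches the threshold
    return [d for d in detections if d["score"] >= min_threshold]

detections = [
    {"label": "car", "score": 0.92},
    {"label": "truck", "score": 0.35},
    {"label": "bicycle", "score": 0.41},
]
# With min_threshold=0.4, the 0.35 detection is filtered out
kept = filter_detections(detections, min_threshold=0.4)
assert [d["label"] for d in kept] == ["car", "bicycle"]
```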
delete
delete(
force_delete: bool = False
)
Description
Delete this deployment.
set_model
set_model(
model_version: ModelVersion
)
Description
Set the model version used by this deployment.
Examples
model_version = client.get_model("my-model").get_version(0)
deployment.set_model(model_version)
get_model_version
get_model_version()
Description
Retrieve the model version currently used by this deployment.
Examples
model_version = deployment.get_model_version()
Returns
A ModelVersion object
set_shadow_model
set_shadow_model(
shadow_model_version: ModelVersion
)
Description
Set the shadow model version used by this deployment.
Examples
shadow_model_version = client.get_model("my-model").get_version(0)
deployment.set_shadow_model(shadow_model_version)
get_shadow_model
get_shadow_model()
Description
Retrieve the shadow model version currently used by this deployment.
Examples
shadow_model = deployment.get_shadow_model()
Returns
A ModelVersion object
predict
predict(
file_path: Union[str, Path], tags: Union[str, Tag, List[Union[Tag, str]],
None] = None, source: Union[str, DataSource, None] = None
)
Description
Run a prediction on our Serving platform
Examples
deployment = client.get_deployment(
name="awesome-deploy"
)
deployment.predict('image_420.png', tags=["gonna", "give"], source="camera-1")
Arguments
-
tags (str, Tag, list of str or Tag, optional) : tags to add to the data that will be created on the platform
-
source (str or DataSource, optional) : a source to attach to the data that will be created on the platform.
-
file_path (str or Path) : path to the image to predict
Returns
A dict with information about the prediction
setup_feedback_loop
setup_feedback_loop(
dataset_version: DatasetVersion
)
Description
Set up the Feedback Loop for a Deployment.
This way, you will be able to attach reviewed predictions to the Dataset.
This is a great option to increase your training set with quality data.
Examples
dataset_version = client.get_dataset("my-dataset").get_version("latest")
deployment = client.get_deployment(
name="awesome-deploy"
)
deployment.setup_feedback_loop(dataset_version)
Arguments
- dataset_version DatasetVersion : a connected DatasetVersion
check_feedback_loop_status
check_feedback_loop_status()
Description
Refresh feedback loop status of this deployment.
Examples
deployment = client.get_deployment(
name="awesome-deploy"
)
deployment.check_feedback_loop_status()
disable_feedback_loop
disable_feedback_loop()
Description
Disable the Feedback Loop for a Deployment.
Examples
deployment = client.get_deployment(
name="awesome-deploy"
)
deployment.disable_feedback_loop()
setup_continuous_training
setup_continuous_training(
project: Project, dataset_version: DatasetVersion, model_version: ModelVersion,
trigger: Union[str, ContinuousTrainingTrigger] = None,
threshold: Optional[int] = None, experiment_parameters: Optional[dict] = None,
scan_config: Optional[dict] = None
)
Description
Initialize and activate the continuous training features of picsellia. 🥑
A Training will be triggered using the configured Dataset
and Model as base whenever your Deployment pipeline hit the trigger.
There are two different types of continuous training:
you can launch a continuous training via a Scan configuration or via an Experiment.
You must give either experiment_parameters
or scan_config,
but not both.
Examples
We want to set up a continuous training pipeline that will be triggered
every 150 new predictions reviewed by your team.
We will use the same training parameters as those used when building the first model.
deployment = client.get_deployment("awesome-deploy")
project = client.get_project(name="my-project")
dataset_version = project.get_dataset(name="my-dataset").get_version("latest")
model_version = client.get_model(name="my-model").get_version(0)
experiment = model_version.get_source_experiment()
experiment_parameters = experiment.get_log('parameters')
feedback_loop_trigger = 150
deployment.setup_continuous_training(
project, dataset_version, model_version,
threshold=feedback_loop_trigger, experiment_parameters=experiment_parameters
)
Arguments
-
project Project : The project that will host your pipeline.
-
dataset_version DatasetVersion : The Dataset that will be used as training data for your training.
-
model_version ModelVersion : The exported Model to perform transfer learning from.
-
threshold (int) : Number of images that need to be reviewed to trigger the training.
-
trigger (ContinuousTrainingTrigger) : Type of trigger to use when there are enough reviews.
-
experiment_parameters (Optional[dict], optional) : Training parameters. Defaults to None.
-
scan_config (Optional[dict], optional) : Scan configuration dict. Defaults to None.
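Conceptually, the threshold acts as a counter over reviewed images that fires a training run each time it fills up. The sketch below is an assumed model of that behavior, not the actual platform code; the ReviewTrigger class is hypothetical.

```python
class ReviewTrigger:
    """Fires once every `threshold` reviewed images (illustrative model
    of the continuous-training trigger, not the actual platform code)."""
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.reviewed = 0

    def on_review(self) -> bool:
        # Returns True when enough reviews have accumulated to launch a training
        self.reviewed += 1
        if self.reviewed >= self.threshold:
            self.reviewed = 0
            return True
        return False

trigger = ReviewTrigger(threshold=150)
fired = [trigger.on_review() for _ in range(300)]
# The trigger fires at the 150th and 300th reviewed images
assert fired.count(True) == 2
```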
toggle_continuous_training
toggle_continuous_training(
active: bool
)
Description
Activate or deactivate your continuous training pipeline.
Examples
deployment = client.get_deployment("awesome-deploy")
deployment.toggle_continuous_training(active=False)
setup_continuous_deployment
setup_continuous_deployment(
policy: Union[ContinuousDeploymentPolicy, str]
)
Description
Set up the continuous deployment for this pipeline
Examples
deployment = client.get_deployment(
name="awesome-deploy"
)
deployment.setup_continuous_deployment(ContinuousDeploymentPolicy.DEPLOY_MANUAL)
Arguments
- policy (ContinuousDeploymentPolicy) : policy to use
toggle_continuous_deployment
toggle_continuous_deployment(
active: bool
)
Description
Toggle continuous deployment for this deployment
Examples
deployment = client.get_deployment(
name="awesome-deploy"
)
deployment.toggle_continuous_deployment(
active=True
)
Arguments
- active (bool) : activate or deactivate continuous deployment
get_stats
get_stats(
service: ServiceMetrics, model_version: Optional[ModelVersion] = None,
from_timestamp: Optional[float] = None, to_timestamp: Optional[float] = None,
since: Optional[int] = None, includes: Optional[List[str]] = None,
excludes: Optional[List[str]] = None, tags: Optional[List[str]] = None
)
Description
Retrieve stats of this deployment stored in Picsellia environment.
The mandatory parameter is "service", an enum of type ServiceMetrics. Possible values are:
PREDICTIONS_OUTLYING_SCORE
PREDICTIONS_DATA
REVIEWS_OBJECT_DETECTION_STATS
REVIEWS_CLASSIFICATION_STATS
REVIEWS_LABEL_DISTRIBUTION_STATS
AGGREGATED_LABEL_DISTRIBUTION
AGGREGATED_OBJECT_DETECTION_STATS
AGGREGATED_PREDICTIONS_DATA
AGGREGATED_DRIFTING_PREDICTIONS
For aggregations, computation may not have been done in the past.
You will need to force computation of these aggregations and retrieve them again.
Examples
my_deployment.get_stats(ServiceMetrics.PREDICTIONS_DATA)
my_deployment.get_stats(ServiceMetrics.AGGREGATED_DRIFTING_PREDICTIONS, since=3600)
my_deployment.get_stats(ServiceMetrics.AGGREGATED_LABEL_DISTRIBUTION, model_version=my_model)
Arguments
-
service (ServiceMetrics) : the service queried
-
model_version (ModelVersion, optional) : Model that shall be used when retrieving data. Defaults to None.
-
from_timestamp (float, optional) : System will only retrieve prediction data created after this timestamp. Defaults to None.
-
to_timestamp (float, optional) : System will only retrieve prediction data created before this timestamp. Defaults to None.
-
since (int, optional) : System will only retrieve prediction data created in the last given number of seconds. Defaults to None.
-
includes (List[str], optional) : Search will include these ids and exclude the others. Defaults to None.
-
excludes (List[str], optional) : Search will exclude these ids. Defaults to None.
-
tags (List[str], optional) : Search will filter by these tags. Defaults to None.
Returns
A dict with the queried statistics about the requested service
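The from_timestamp / to_timestamp / since windowing can be sketched as plain filtering over prediction timestamps. This is an assumed model of the semantics (with `since` treated as "in the last N seconds"), not the actual query code; the in_window helper is hypothetical.

```python
import time
from typing import List, Optional

def in_window(timestamps: List[float],
              from_timestamp: Optional[float] = None,
              to_timestamp: Optional[float] = None,
              since: Optional[int] = None,
              now: Optional[float] = None) -> List[float]:
    # `since` is shorthand for "in the last N seconds from now"
    now = time.time() if now is None else now
    if since is not None:
        from_timestamp = now - since
    return [
        ts for ts in timestamps
        if (from_timestamp is None or ts >= from_timestamp)
        and (to_timestamp is None or ts <= to_timestamp)
    ]

ts = [100.0, 200.0, 300.0]
assert in_window(ts, from_timestamp=150.0) == [200.0, 300.0]
assert in_window(ts, to_timestamp=250.0) == [100.0, 200.0]
assert in_window(ts, since=150, now=300.0) == [200.0, 300.0]
```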
monitor
monitor(
image_path: Union[str, Path], latency: float, height: int, width: int,
prediction: PredictionFormat, source: Optional[str] = None,
tags: Optional[List[str]] = None, timestamp: Optional[float] = None,
model_version: Optional[ModelVersion] = None,
shadow_model_version: Optional[ModelVersion] = None,
shadow_latency: Optional[float] = None,
shadow_raw_predictions: Optional[PredictionFormat] = None
)
Description