Properties
- name : Name of this Experiment
- status : Status of this Experiment
Methods
update
update(
name: Optional[str] = None, description: Optional[str] = None,
base_experiment: Optional['Experiment'] = None,
base_model_version: Optional['ModelVersion'] = None,
status: Union[ExperimentStatus, str, None] = None
)
Description
Update this experiment with a given name, description, a base experiment, a base model version or a status.
Examples
my_experiment.update(description="First try Yolov5")
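You can also update several fields in one call, for example the name and the status; ExperimentStatus.SUCCESS is assumed here to be a valid member of the enum:
my_experiment.update(name="yolov5-second-try", status=ExperimentStatus.SUCCESS)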
Arguments
- name (str, optional) : Name of the experiment. Defaults to None.
- description (str, optional) : Description of the experiment. Defaults to None.
- base_experiment (Experiment, optional) : Base experiment of the experiment. Defaults to None.
- base_model_version (ModelVersion, optional) : Base model version of the experiment. Defaults to None.
- status (Union[ExperimentStatus, str, None], optional) : Status of the experiment. Defaults to None.
delete
delete()
Description
Delete this experiment.
Examples
my_experiment.delete()
list_artifacts
list_artifacts(
limit: Optional[int] = None, offset: Optional[int] = None,
order_by: Optional[list[str]] = None
)
Description
List artifacts stored in the experiment.
Examples
artifacts = my_experiment.list_artifacts()
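A paginated, ordered call might look like the following; the "name" ordering field is only an assumption:
artifacts = my_experiment.list_artifacts(limit=10, offset=10, order_by=["name"])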
Arguments
- limit (int, optional) : limit of artifacts to retrieve. Defaults to None.
- offset (int, optional) : offset to start retrieving artifacts. Defaults to None.
- order_by (list[str], optional) : fields to order by. Defaults to None.
Returns
A list of (Artifact) objects that you can manipulate
delete_all_artifacts
delete_all_artifacts()
Description
Delete all stored artifacts of this experiment.
⚠️ DANGER ZONE: This will permanently remove the artifacts from our servers.
Examples
experiment.delete_all_artifacts()
create_artifact
create_artifact(
name: str, filename: str, object_name: str, large: bool = False
)
Description
Create an artifact for this experiment.
Examples
experiment.create_artifact(name="a_file", filename="file.png", object_name="some_file_in_s3.png", large=False)
Arguments
- name (str) : name of the artifact.
- filename (str) : filename.
- object_name (str) : s3 object name.
- large (bool, optional) : whether the file is larger than 5 MB. Defaults to False.
Returns
An (Artifact) object
store
store(
name: str, path: Union[str, Path, None] = None, do_zip: bool = False
)
Description
Store an artifact and attach it to the experiment.
Examples
# Zip and store a folder as an artifact for the experiment
# you can choose an arbitrary name or refer to our 'namespace'
# for certain artifacts to have a custom behavior
trained_model_path = "my_experiment/saved_model"
experiment.store("model-latest", trained_model_path, do_zip=True)
Arguments
- name (str) : name for the artifact.
- path (str or Path, optional) : path to the file or folder. Defaults to None.
- do_zip (bool, optional) : Whether to compress the file to a zip file. Defaults to False.
Raises
- FileNotFoundException : No file found at the given path
Returns
An (Artifact) object
store_local_artifact
store_local_artifact(
name: str
)
Description
Store a locally saved file as an artifact of this experiment in the platform.
This artifact shall have one of these names: config, checkpoint-data-latest, checkpoint-index-latest or model-latest.
It will look for the corresponding file in the current directory.
Examples
my_experiment.store_local_artifact("model-latest")
Arguments
- name (str) : Name of the artifact to store
Returns
An (Artifact) object
get_base_model_version
get_base_model_version()
Description
Retrieve the base model version of this experiment.
Examples
model_version = experiment.get_base_model_version()
Returns
A ModelVersion object representing the base model.
get_base_experiment
get_base_experiment()
Description
Retrieve the base experiment of this experiment.
Examples
previous = experiment.get_base_experiment()
Returns
An Experiment object representing the base experiment.
get_artifact
get_artifact(
name: str
)
Description
Retrieve an artifact from its name in this experiment.
Examples
model_artifact = experiment.get_artifact("model-latest")
assert model_artifact.name == "model-latest"
assert model_artifact.object_name == "d67924a0-7757-48ed-bf7a-322b745e917e/saved_model.zip"
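If needed, the retrieved artifact can then be downloaded locally; Artifact.download() is assumed to be available on the returned object:
model_artifact.download()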
Arguments
- name (str) : Name of the artifact to retrieve
Returns
An (Artifact) object
list_logs
list_logs(
limit: Optional[int] = None, offset: Optional[int] = None,
order_by: Optional[list[str]] = None
)
Description
List everything that has been logged to this experiment.
Examples
logs = experiment.list_logs()
assert logs[0].type == LogType.TABLE
assert logs[0].data == {"batch_size":4, "epochs":1000}
Arguments
- limit (int, optional) : limit of logs to retrieve. Defaults to None.
- offset (int, optional) : offset to start retrieving logs. Defaults to None.
- order_by (list[str], optional) : fields to order by. Defaults to None.
Returns
A list of Log objects
delete_all_logs
delete_all_logs()
Description
Delete everything that has been logged into this experiment.
Examples
experiment.delete_all_logs()
get_log
get_log(
name: str
)
Description
Retrieve a log from its name in this experiment.
Examples
parameters = experiment.get_log("parameters")
assert parameters.data == {"batch_size":4, "epochs":1000}
Arguments
- name (str) : name of the log to retrieve
Returns
A Log object
log
log(
name: str, data: LogDataType, type: Union[LogType, str], replace: bool = True
)
Description
Record some data in an experiment.
It will create a Log object, that you can manipulate in SDK.
All logs are displayed in experiment view on Picsellia.
If a Log with this name already exists, it will be updated unless the parameter replace is set to False.
If the Log is of type LINE, data will be appended at the end of the stored data, even when replace is True.
If you want to replace a line, delete the log and create another one.
Examples
parameters = {
"batch_size":4,
"epochs":1000
}
exp.log("parameters", parameters, type=LogType.TABLE)
Arguments
- name (str) : Name of the log.
- data (LogDataType) : Data to be saved.
- type (LogType or str) : Type of the data to log. This will condition how it is displayed in the experiment dashboard.
- replace (bool, optional) : If True and a log with this name already exists and is not a line, its data will be replaced. Defaults to True.
Raises
- Exception : Impossible to upload the file when logging an image.
Returns
A Log object
log_parameters
log_parameters(
parameters: dict
)
Description
Record parameters of an experiment into Picsellia.
If parameters were already set up, they will be replaced.
Examples
parameters = {
"batch_size":4,
"epochs":1000
}
exp.log_parameters(parameters)
Arguments
- parameters (dict) : Parameters to be saved.
Returns
A Log object
store_logging_file
store_logging_file(
path: Union[str, Path]
)
Description
Store a logging file for this experiment.
Examples
experiment.store_logging_file("logs.txt")
Arguments
- path (str or Path) : path to the file or folder.
Raises
- FileNotFoundException : No file found at the given path
Returns
A (LoggingFile) object
get_logging_file
get_logging_file()
Description
Retrieve logging file of this experiment.
Examples
logging_file = experiment.get_logging_file()
logging_file.download()
Returns
A (LoggingFile) object
send_logging
send_logging(
log: Union[str, list], part: str, final: bool = False,
special: Union[str, bool, list] = False
)
Description
Send a logging line to this experiment.
Examples
experiment.send_logging("Hello World", "Hello", final=True)
Arguments
- log (str) : Log content
- part (str) : Logging part
- final (bool, optional) : True if this is the final line. Defaults to False.
- special (str, bool or list, optional) : True if this is a special log. Defaults to False.
start_logging_chapter
start_logging_chapter(
name: str
)
Description
Start a new chapter in the experiment logging.
Examples
experiment.start_logging_chapter("Training")
Arguments
- name (str) : Chapter name
start_logging_buffer
start_logging_buffer(
length: int = 1
)
Description
Start logging buffer.
Examples
experiment.start_logging_buffer()
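A minimal sketch of buffering a batch of log lines before the buffer is flushed; the loop, part name and messages are illustrative only:
experiment.start_logging_buffer(length=5)
for step in range(10):
    experiment.send_logging(f"step {step}", part="training")
experiment.end_logging_buffer()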
Arguments
- length (int, optional) : Buffer length. Defaults to 1.
end_logging_buffer
end_logging_buffer()
Description
End the logging buffer.
Examples
experiment.end_logging_buffer()
update_job_status
update_job_status(
status: Union[JobStatus, str]
)
Description
Update the job status of this experiment.
Examples
experiment.update_job_status(JobStatus.FAILED)
Arguments
- status (JobStatus or str) : Status to send
publish
publish(
name: str
)
Description
Publish this Experiment as a ModelVersion in your registry (see export_as_model below).
export_as_model
export_as_model(
name: str
)
Description
Publish an Experiment as a ModelVersion to your registry.
A Model with the given name will be created, and its first version will be the exported experiment.
Examples
model_version = experiment.export_as_model("awesome-model")
Arguments
- name (str) : Target Name for the model in the registry.
Returns
A ModelVersion just created from the experiment
export_in_existing_model
export_in_existing_model(
existing_model: Model
)
Description
Publish an Experiment as a ModelVersion of a given, already existing Model.
Examples
my_model = client.get_model("foo_model")
model_version = experiment.export_in_existing_model(my_model)
Arguments
- existing_model (Model) : Model in the registry where this experiment should be exported.
Returns
A ModelVersion just created from the experiment
launch
launch(
gpus: int = 1
)
Description
Launch a job on a remote environment with this experiment.
ℹ️ The remote environment has to be set up prior to launching the experiment.
It defaults to our remote training engine.
Examples
experiment.launch()
Arguments
- gpus (int, optional) : Number of GPUs to use for the training. Defaults to 1.
download_artifacts
download_artifacts(
with_tree: bool
)
Description
Download all artifacts from the experiment to the local directory.
Examples
experiment.download_artifacts(with_tree=True)
Arguments
- with_tree (bool) : If True, the artifacts will be downloaded in a tree structure.
attach_model_version
attach_model_version(
model_version: ModelVersion, do_attach_base_parameters: bool = True
)
Description
Attach model version to this experiment.
Only one model version can be attached to an experiment.
Examples
foo_model = client.get_model("foo").get_version(3)
my_experiment.attach_model_version(foo_model)
Arguments
- model_version (ModelVersion) : A model version to attach to the experiment.
- do_attach_base_parameters (bool, optional) : Attach base parameters of the model version to the experiment. Defaults to True.
attach_dataset
attach_dataset(
name: str, dataset_version: DatasetVersion
)
Description
Attach a dataset version to this experiment.
Examples
foo_dataset = client.get_dataset("foo").get_version("first")
my_experiment.attach_dataset("training", foo_dataset)
Arguments
- name (str) : Name to label this attached dataset. Use it as a descriptor of the attachment.
- dataset_version (DatasetVersion) : A dataset version to attach to the experiment.
detach_dataset
detach_dataset(
dataset_version: DatasetVersion
)
Description
Detach a dataset version from this experiment.
Examples
foo_dataset = client.get_dataset("foo").get_version("first")
my_experiment.attach_dataset("training", foo_dataset)
my_experiment.detach_dataset(foo_dataset)
Arguments
- dataset_version (DatasetVersion) : The dataset version to detach from the experiment.
list_attached_dataset_versions
list_attached_dataset_versions()
Description
Retrieve all dataset versions attached to this experiment.
Examples
datasets = my_experiment.list_attached_dataset_versions()
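To inspect the attachments, you might iterate over the returned list; the name and version attributes are assumed here:
for dataset_version in datasets:
    print(dataset_version.name, dataset_version.version)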
Returns
A list of DatasetVersion objects attached to this experiment
get_dataset
get_dataset(
name: str
)
Description
Retrieve the dataset version attached to this experiment with the given name.
Examples
dataset_version = my_experiment.get_dataset("training")
pics = dataset_version.list_assets()
annotations = dataset_version.list_annotations()
Arguments
- name (str) : Name of the dataset version in the experiment
Returns
A DatasetVersion object attached to this experiment
run_train_test_split_on_dataset
run_train_test_split_on_dataset(
name: str, prop: float = 0.8, random_seed: Optional[Any] = None
)
Description
Run a train test split on a dataset attached to this experiment.
Examples
train_assets, eval_assets, train_label_count, eval_label_count, labels = my_experiment.run_train_test_split_on_dataset("training", prop=0.8)
Arguments
- name (str) : Name of the dataset version in the experiment
- prop (float, optional) : Proportion of the dataset to use for training. Defaults to 0.8.
- random_seed (int, optional) : Random seed to use for the split. Defaults to None.
Returns
A tuple containing:
- train_assets (MultiAsset) : assets to use for training
- eval_assets (MultiAsset) : assets to use for evaluation
- train_label_count (dict) : number of assets per label in the train set
- eval_label_count (dict) : number of assets per label in the eval set
- labels (list[Label]) : list of labels in the dataset
add_evaluation
add_evaluation(
asset: Asset, add_type: Union[str, AddEvaluationType] = AddEvaluationType.REPLACE,
rectangles: Optional[list[tuple[int, int, int, int, Label, float]]] = None,
polygons: Optional[list[tuple[list[list[int]], Label, float]]] = None,
classifications: Optional[list[tuple[Label, float]]] = None
)
Description
Add an evaluation of the asset by this experiment.
By default, if the given asset has already been evaluated, the evaluation will be replaced.
You can add different kinds of shapes, but you will only be able to compute evaluation metrics on one inference type.
Examples
asset = dataset_version.find_asset(filename="asset-1.png")
experiment.add_evaluation(asset, rectangles=[(10, 20, 30, 40, label_cat, 0.8), (50, 60, 20, 30, label_dog, 0.7)])
job = experiment.compute_evaluations_metrics(InferenceType.OBJECT_DETECTION)
job.wait_for_done()
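For a classification experiment, a sketch could look like the following; label_cat and the score are illustrative:
experiment.add_evaluation(asset, classifications=[(label_cat, 0.95)])
job = experiment.compute_evaluations_metrics(InferenceType.CLASSIFICATION)
job.wait_for_done()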
Arguments
- asset (Asset) : asset to add the evaluation on
- add_type (str or AddEvaluationType, optional) : replace or keep the old evaluation. Defaults to AddEvaluationType.REPLACE.
- rectangles (optional) : list of tuples representing rectangles with scores
- polygons (optional) : list of tuples representing polygons with scores
- classifications (optional) : list of tuples representing classifications with scores
list_evaluations
list_evaluations(
limit: Optional[int] = None, offset: Optional[int] = None,
page_size: Optional[int] = None, order_by: Optional[list[str]] = None,
q: Optional[str] = None
)
Description
List evaluations of this experiment.
It will retrieve all evaluations made by this experiment.
You will then be able to manipulate them.
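Examples
A paginated call might look like the following; the ordering field name is only an assumption:
evaluations = experiment.list_evaluations(limit=100, offset=0, order_by=["created_at"])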
Arguments
- limit (int, optional) : if given, will limit the number of evaluations returned
- offset (int, optional) : if given, will return evaluations that would have been returned after this offset in given order
- page_size (int, optional) : page size when returning evaluations paginated, can change performance
- order_by (list[str], optional) : if not empty, will order evaluations by fields given in this parameter
- q (str, optional) : if given, will try to apply query to filter evaluations
Returns
A (MultiEvaluation) object
compute_evaluations_metrics
compute_evaluations_metrics(
inference_type: InferenceType,
evaluations: Union[list[Union[str, UUID]], list[Evaluation], MultiEvaluation, None] = None,
worker: Optional[Worker] = None, status: Optional[AnnotationStatus] = None
)
Description
Compute evaluation metrics across evaluations added to this experiment.
Picsellia will compute coco metrics on each evaluation and compare to existing annotations.
Examples
experiment.add_evaluation(asset, rectangles=[(10, 20, 30, 40, label_cat, 0.8), (50, 60, 20, 30, label_dog, 0.7)])
experiment.compute_evaluations_metrics(InferenceType.OBJECT_DETECTION)
Arguments
- inference_type (InferenceType) : Type of inference on which evaluation metrics will be computed.
- evaluations (optional) : Evaluations to compute metrics on, given as ids, Evaluation objects or a MultiEvaluation. Defaults to None.
- worker (Worker, optional) : Worker whose existing annotations will be used for comparison. Defaults to None.
- status (AnnotationStatus, optional) : Existing annotations will be filtered to only retrieve those that have this status.
Returns
A Job object; you can wait for its completion with wait_for_done().