Initialize a Scan

Config

Create a Scan

First, let's initialize our client:

from picsellia import Client

api_token = 'YOUR TOKEN'
project_name = 'My awesome Project'

client = Client(api_token)

project = client.get_project(project_name)

Here's an example of how to configure a Scan:

config = {
    'script': 'script.py',
    'execution': {
        'type': 'agents'
    },
    'strategy': 'grid',
    'metric': {
        'name': 'Loss-total_loss',
        'goal': 'minimize'
    },
    'parameters': {
        'batch_size': {
            'values': [2, 4, 8],
        },
        'learning_rate': {
            'values': [1e-3, 1e-4, 1e-5]
        },
        'steps': {
            'value': 1000
        },
        'annotation_type': {
            'value': 'rectangle'
        }
    },
    'base_model': 'picsell/ssd-mobilenet-v2-640-fpnlite',
    'dataset': 'SampleDataset/first'
}
project.init_scan('test-scan-1', config)

Configuration

Top-level key | Description
execution | How you want to run the Scan (manually, remotely, or using agents)
image | Name of the Docker image executing your training script (optional)
script | Filename of the script you want to execute (optional)
requirements | List of packages needed if you use our base Docker image (optional)
strategy | The search strategy for the Scan (required)
max_run | Maximum number of runs for this Scan (optional, default = 100)
early_stopping | The chosen early-stopping or pruning algorithm (optional)
metric | The metric to optimize (required)
parameters | The parameter space used for the search (required)
base_model | A model used to start each run, as when initializing an experiment (optional)
dataset | A dataset attached to each run, as when initializing an experiment (optional)

Execution

Specify how you want to run the Scan.

type | description
manual | We only define the grid of parameters; you can then use our Python SDK to retrieve each run with its set of parameters and execute it on your own, in a script or a Jupyter notebook for example (see the sketch after the configuration examples below).
remote | We automatically launch remote runs for you on servers equipped with NVIDIA V100S GPUs. You must set max_worker as the limit of parallel runs.

❗️

You need to subscribe to a paid plan if you want to launch remote Scans

'execution': {
    'type': 'manual'
}

# OR 

'execution': {
    'type': 'remote',
    'max_worker': 4,
}
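
With manual execution, the workflow looks roughly like the minimal sketch below. Note that this is only a sketch: scan.get_next_run(), run.parameters and run.end() are hypothetical placeholders (the exact accessors depend on your SDK version, so check the SDK reference), train() stands for your own training function, and we assume init_scan returns the created Scan.

from picsellia import Client

client = Client('YOUR TOKEN')
project = client.get_project('My awesome Project')

# Assumption: init_scan returns the created Scan object.
scan = project.init_scan('test-scan-manual', config)  # config as defined above

while True:
    run = scan.get_next_run()   # hypothetical: fetch the next parameter set
    if run is None:             # nothing left to try
        break
    train(run.parameters)       # hypothetical attribute; your own training function
    run.end()                   # hypothetical: mark the run as finished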

image

Our Scan engine is based on Docker images that we will schedule and launch on distributed Cloud machines.

If you do not specify any image parameter, we will use our base image (called custom-run:1.0) that will encapsulate the script you provided, install the specified requirements and then launch your script.

🚧

Be aware

Specifying a custom image to run your code is compulsory if you do not provide a script parameter; without a script, there is nothing for us to launch in our base image.

But to save time on package installation, or to be sure that your script will run 100% of the time, we encourage you to build your own custom image and push it to Docker Hub so we can run it remotely, or to have it on every machine where you want to launch our agents.

To specify a custom image, you just have to give its name like below:

'image': 'picsellpn/custom-run:1.0'

script

If you want to be able to automatically launch your training script without having it on every machine, you can specify the path to the file; it will be saved on Picsellia and used for each run.

🚧

Providing a script is mandatory if you do not want to define a custom Docker image and use our base image instead.

'script': 'my_training_script.py'

requirements

Specify this parameter if you want to install specific Python packages needed for your script to run when using our base image.

For example, as our image only has the picsellia package installed, if you need tensorflow 2.3.1 to run your script, you will set the requirements as below:

'requirements': {
    'package': 'tensorflow',
    'version': '2.3.1'
}

Alternatively, you can set requirements to the path of a requirements.txt file, like this:

'requirements': 'path/to/requirements.txt'

With the requirements.txt file looking like this :

tensorflow==2.3.1
numpy==1.20.2

strategy

Allows you to choose a search strategy from the following options:

strategy | description
grid | Grid search; every parameter combination will be tried
optuna | Hyperparameter search optimized with the Optuna library. To use this strategy, you must set a distribution or specific values for every parameter.

max_run

When you perform hyperparameter search, you never really know how many runs will be needed to find the best combination. For example, if you choose the optuna strategy, the parameters for future runs are computed according to the results of past runs.

That's why you can set a max_run parameter: it guarantees that your Scan stops before consuming unbounded resources, and you can always create a new Scan with a reduced search space later.

πŸ“˜

If you do not specify this parameter, the default value is set to 100 runs.

'max_run': 256

early_stopping

Early stopping is an optional feature that can drastically speed up your hyperparameter search by deciding whether some runs should be stopped early or given a chance to continue.

If some runs are not promising, they are automatically stopped and the agents get a new set of parameters to try, so you do not waste time on unnecessary experiments.

method | description
hyperband | Implementation of the Hyperband algorithm by Optuna

Parameters:

parameter | description
min_iter | The minimum number of iterations (e.g. training epochs or steps) to wait before deciding whether or not to prune the run
max_iter | The maximum number of iterations to wait before you either prune the run or let it finish
reduction_factor | At the completion point of each rung, about 1/reduction_factor of the trials are promoted

For instance, with min_iter = 100, max_iter = 1000 and reduction_factor = 3, runs are evaluated at rungs of roughly 100, 300 and 900 iterations, and about a third of the trials survive each rung.

'early_stopping': {
    'hyperband': {
        'min_iter': 100,
        'max_iter': 1000,
        'reduction_factor': 3,
    }
}

metric

The name of the metric you want to optimize, and the way you want to optimize it.

'metric': {
    'name': 'loss',
    'goal': 'maximize'
}

# OR

'metric': {
    'name': 'loss',
    'goal': 'minimize'
}

For the Scan to run properly, you must explicitly log the metric somewhere in the script you use. This means you should have a line looking like this:

experiment.log('loss', value, 'line')

The name of what you log must correspond to the metric you set up during the configuration.
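
For instance, with the 'Loss-total_loss' metric from the configuration at the top of this page, your training loop should contain something like the minimal sketch below, where experiment is the experiment handle of the current run and compute_total_loss() is a placeholder for your own loss computation:

# The logged name must match the 'metric' name in the Scan config.
for step in range(1000):
    value = compute_total_loss()  # placeholder for your own training step
    experiment.log('Loss-total_loss', value, 'line')  # 'line' logs it as a line chart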

parameters

Specify the hyperparameter space to explore. You can either set a list of constant values for each parameter, or choose a distribution and its bounds (for the optuna strategy).

key | description
value (int, float, str) | Single value for the hyperparameter
values (list[int, float, str]) | List of all possible values for the hyperparameter
distribution (str) | An available distribution from the list below (for the optuna strategy)
min (int, float) | Minimum value for the hyperparameter; the lower bound of the chosen distribution
max (int, float) | Maximum value for the hyperparameter; the upper bound of the chosen distribution
q (float) | Quantization step size for discrete hyperparameters
step (int) | Step size between values (for the int_uniform distribution)
'parameter_name': {
    'value': 0.0001
}

# OR

'parameter_name': {
    'values': ['relu', 'elu', 'selu']
}

# OR

'parameter_name': {
    'distribution': 'log_uniform',
    'min': 1e-5,
    'max': 1e-3
}

distributions

Here is the list of all the distributions you can use:

Name | Information
constant | Constant value for the hyperparameter, equal to value.
categorical | Categorical distribution; the hyperparameter value will be chosen from values.
uniform | Continuous uniform distribution. You must set the bounds min and max.
int_uniform | Discrete uniform distribution over integers. You must set the bounds min and max. You can also set step to a value higher than 1 if you want to.
discrete_uniform | Discrete uniform distribution. You must set the bounds min and max, and the q parameter (the discretization step).
log_uniform | Continuous uniform distribution in the log domain. You must set the bounds min and max.

Examples

'parameter_name': {
    'value': 0.3546
}

'parameter_name': {
    'values': ['relu', 'elu', 'selu']
}

'parameter_name': {
    'distribution': 'uniform',
    'min': 1e-5,
    'max': 1e-2
}

'parameter_name': {
    'distribution': 'int_uniform',
    'min': 2,
    'max': 16,
    'step': 2
}

'parameter_name': {
    'distribution': 'discrete_uniform',
    'min': 0.1,
    'max': 1,
    'q': 0.1
}

'parameter_name': {
    'distribution': 'log_uniform',
    'min': 1e-2,
    'max': 1e3
}

base_model

As when you create an experiment, you can choose a model whose files, labelmap, and so on will be duplicated in each run's experiment.

To choose a model, you have to specify the username of the author and the model name this way: <username>/<model_name>

To use an experiment you have already trained, you first need to export it as a model.
Learn how to export an experiment

'base_model': 'picsell/faster-rcnn-resnet-640'

dataset

As when you create an experiment, you can choose a dataset to attach to each run. To do this, the dataset must already be attached to the project. Then specify the chosen dataset this way: <dataset_name>/<dataset_version>

'dataset': 'SampleDataset/first'
