Now that the dataset step is complete, it is time to train our model. The first step is to create a “Project”, as described below:
Give the project a name and a description, and add the members of your organization who will work on it with you.
The project view lists all the related dataset versions and all the experiments performed. The first step is to attach the dataset versions that will be used in this project:
It is now time to create your first experiment and train a first model.
Click the “+” button next to the experiment list.
The experiment creation form should now be displayed:
After giving your experiment a name and a description, choose the base architecture to be used for this training. You can start from an experiment already performed in the current project, or retrain an existing model: either a model previously exported within your organization (Organization HUB) or a model available in Picsellia’s public HUB (Public HUB). Once the model is selected, you can adjust the hyperparameters to your needs. The final step is to attach the dataset version to be used for the training. Please note that a dataset must be attached to your project before it can be used in an experiment.
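As an illustration of what adjusting hyperparameters can involve, the sketch below uses a plain Python dictionary with basic sanity checks. The parameter names are hypothetical examples, not the exact keys Picsellia exposes for a given architecture:

```python
# Illustrative only: these hyperparameter names are hypothetical examples,
# not necessarily the keys exposed by Picsellia for your chosen architecture.
hyperparameters = {
    "epochs": 10,
    "batch_size": 8,
    "learning_rate": 1e-3,
    "steps": 5000,  # keep this low during a trial, as recommended below
}

def validate_hyperparameters(params):
    """Run basic sanity checks before launching a training."""
    assert params["batch_size"] > 0, "batch size must be positive"
    assert 0 < params["learning_rate"] < 1, "learning rate should be a small positive value"
    assert params["steps"] <= 5000, "keep trainings short during the trial period"
    return True

validate_hyperparameters(hyperparameters)  # raises AssertionError if a value is off
```

A quick check like this before launching can save a wasted training run on a mistyped value.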
Once the experiment is created, you can launch its training from the “Launch” tab.
You can launch the training of your experiment in several ways: run it with Google Colab, launch it on your own infrastructure using the generated Docker image, or launch it on the OVH infrastructure provided by Picsellia. During the trial period, we strongly recommend using Google Colab, or the OVH infrastructure if this was agreed during the trial kick-off meeting. We also recommend avoiding trainings with a huge number of steps during the trial period; 5,000 steps or fewer is largely sufficient.
On Google Colab, you just need to run the notebook cells after filling in the required information (token, organization, project, and experiment name).
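Before running the cells, it can help to verify that every required field has actually been filled in. Here is a minimal sketch of such a check; the variable names and placeholder values are illustrative, not the exact ones used in the notebook:

```python
# Illustrative sketch: the notebook's actual variable names may differ.
config = {
    "api_token": "YOUR_API_TOKEN",  # personal token, still at its placeholder here
    "organization_name": "my-organization",
    "project_name": "my-project",
    "experiment_name": "my-first-experiment",
}

def missing_fields(cfg):
    """Return the names of fields left empty or still set to a placeholder."""
    placeholders = {"", "YOUR_API_TOKEN"}
    return [key for key, value in cfg.items() if value in placeholders]

missing = missing_fields(config)  # here: ["api_token"] until you paste your token
```

Running the notebook with a placeholder token would fail at authentication, so catching it up front gives a clearer error.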
If you are using the OVH infrastructure, as soon as the training is launched you can go to the “Telemetry” tab to see the experiment logs. However, for performance reasons, while the training is ongoing you will only see the logs generated after you opened the “Telemetry” view. When the training is over, you can go back to “Telemetry” and see the full log of your training.
At the end of the training, you can go to the “Logs” tab to see all the metrics related to the training; they will help you assess whether the experiment is successful.
Note that the label map displayed in the experiment’s “Logs” view is updated with the labels of the related dataset once the training is launched.
You can launch several experiments in your project with different dataset versions, models, or hyperparameters. From the “Experiment” list accessible on the project, you can compare experiments to assess which one performs best.
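To give an idea of the kind of comparison this enables, the sketch below picks the best run from a list of experiment metrics. The experiment names and metric keys are hypothetical, not values read from the Picsellia API:

```python
# Illustrative sketch: experiment names and metric keys are hypothetical,
# not data fetched from the Picsellia platform.
experiments = [
    {"name": "exp-resnet",       "metrics": {"mAP": 0.62, "total_loss": 0.41}},
    {"name": "exp-efficientdet", "metrics": {"mAP": 0.71, "total_loss": 0.35}},
    {"name": "exp-yolo",         "metrics": {"mAP": 0.68, "total_loss": 0.38}},
]

def best_experiment(exps, metric="mAP", higher_is_better=True):
    """Return the experiment with the best value for the given metric."""
    pick = max if higher_is_better else min
    return pick(exps, key=lambda exp: exp["metrics"][metric])

best = best_experiment(experiments)  # exp-efficientdet, highest mAP
```

Whether “best” means highest mAP or lowest loss depends on your task, which is why the metric and its direction are parameters here.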
Once you have determined the best experiment, you can select it and export it as a model.
The model generated from an experiment is stored in the “Model Registry”.
A model can have several versions; for each version you can display and access all of its details (dataset, related experiment…).
Please note that, thanks to the Python SDK, it is possible to import your own model into the Picsellia platform, as explained here.