Picsellia platform structure

1. Global platform structure

To simplify data scientists' work and navigation across all of Picsellia's features, the platform is divided into three main parts, each handling one step of a Computer Vision project's life cycle:

  • Data Management
  • Data Science
  • Model Operations
Platform structure

2. Data Management

The purpose of this first part of Picsellia is to upload your raw images and, in the end, obtain the ideal Dataset. Data Management is composed of two main features, Datalake and Dataset, available at the top of the left sidebar.

Through the Datalake, you gather and organize all your data in a single, shared place.

Datalake

From subsets of the Datalake, you can then create your Datasets. Dataset management features let you version, annotate, and process your different Datasets.

Datasets
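The Datalake-to-Dataset flow above can be sketched in a few lines. This is a conceptual model only, not the Picsellia SDK: every class and method name here is an illustrative assumption.

```python
# Conceptual sketch (NOT the Picsellia SDK): a Datalake holds all raw data,
# and Datasets are built from subsets of it and then versioned.
from dataclasses import dataclass, field


@dataclass
class Datalake:
    """Single shared pool of raw data items."""
    data: set = field(default_factory=set)

    def upload(self, *filenames):
        self.data.update(filenames)

    def subset(self, predicate):
        """Select a subset of the lake, e.g. by filename or tag."""
        return {f for f in self.data if predicate(f)}


@dataclass
class Dataset:
    """A named Dataset with immutable, tagged versions."""
    name: str
    versions: dict = field(default_factory=dict)

    def create_version(self, tag, items):
        self.versions[tag] = frozenset(items)


lake = Datalake()
lake.upload("cat_001.jpg", "cat_002.jpg", "dog_001.jpg")

# Build a Dataset version from a Datalake subset.
cats = lake.subset(lambda f: f.startswith("cat"))
pets = Dataset("pets")
pets.create_version("first", cats)
```

Versions are stored as frozen sets here to mirror the idea that a Dataset version is a fixed snapshot you can annotate and process without affecting the Datalake itself.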

📽️

Processings

You can create your own Processings and browse the public ones in the Processings tab.
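Conceptually, a Processing is a script that transforms the items of a Dataset version. As a toy illustration only (the function below is hypothetical and not part of any Picsellia API), here is a tiling-style processing that computes how many fixed-size tiles each image would produce:

```python
# Toy "tiling" Processing sketch (hypothetical, not a Picsellia API):
# given image sizes, compute how many tile×tile crops each image yields.
def tile_processing(image_sizes, tile=256):
    """Return the tile count per (width, height) image, rounding up."""
    return [
        ((w + tile - 1) // tile) * ((h + tile - 1) // tile)
        for (w, h) in image_sizes
    ]


# A 512x512 image yields a 2x2 grid of 256x256 tiles.
counts = tile_processing([(512, 512), (300, 256)])
print(counts)  # → [4, 2]
```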

3. Data Science

The Data Science part of Picsellia is mainly related to experiment tracking.

Under the Projects feature, available on the left sidebar, you'll find all your Data Science projects.

Projects

Each Project is composed of one or several Experiments. After creating an Experiment, you'll be able to launch the training of your Model, assess the quality of the training through the Experiment Tracking dashboard, and evaluate the Model's performance in the evaluation interface.
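The experiment-tracking idea above can be sketched as follows. This is a minimal conceptual model, not the real SDK; the `Experiment` class and its methods are illustrative assumptions.

```python
# Conceptual experiment-tracking sketch (NOT the Picsellia SDK):
# an Experiment logs metrics per epoch during training, and the best
# epoch can be read back for evaluation.
class Experiment:
    def __init__(self, name):
        self.name = name
        self.logs = {}  # metric name -> {epoch: value}

    def log(self, metric, epoch, value):
        """Record one metric value for a given training epoch."""
        self.logs.setdefault(metric, {})[epoch] = value

    def best(self, metric, mode=max):
        """Return the epoch with the best value for this metric."""
        epochs = self.logs[metric]
        return mode(epochs, key=epochs.get)


exp = Experiment("detector-run-1")
for epoch, map50 in enumerate([0.42, 0.55, 0.61, 0.58]):
    exp.log("mAP@0.5", epoch, map50)

best_epoch = exp.best("mAP@0.5")  # epoch 2 has the highest mAP
```

Tracking metrics per epoch like this is what lets a dashboard compare runs and pick a champion checkpoint before evaluation.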

4. Model Operations

Model Operations features are used to operate ready-to-deploy Models, whether created with Picsellia or not.

All Models are stored and versioned in your private Registry. From there, you can deploy them on Picsellia's serving infrastructure or on your own.

Models

Once deployed, track and monitor your Models in the Monitoring Dashboard, accessible from the Deployments tab, also available in the left sidebar.
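One thing a monitoring dashboard typically surfaces is drift in prediction quality. The sketch below is a toy illustration (all names are assumptions, not a Picsellia API): a deployment keeps a rolling window of prediction confidences and flags trouble when the window's mean drops below a threshold.

```python
# Toy monitoring sketch (hypothetical, not a Picsellia API): flag a
# deployment as unhealthy when the rolling mean confidence drops.
from collections import deque


class Monitor:
    def __init__(self, window=5, threshold=0.6):
        self.scores = deque(maxlen=window)  # rolling confidence window
        self.threshold = threshold

    def record(self, confidence):
        """Record one prediction's confidence; return True while healthy."""
        self.scores.append(confidence)
        return sum(self.scores) / len(self.scores) >= self.threshold


m = Monitor()
# Confidence degrades over time; the last two readings trip the alarm.
healthy = [m.record(c) for c in [0.9, 0.8, 0.5, 0.4, 0.3, 0.2]]
print(healthy)  # → [True, True, True, True, False, False]
```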

Deployments

🔄

Create your pipelines

From each Deployment, you can set up your data pipeline to continuously retrain and redeploy your Models.
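The retrain-and-redeploy loop can be sketched as a tiny state machine. This is a hypothetical illustration of the pattern, not a Picsellia SDK call: when monitoring flags drift, a retrained Model version is registered and promoted to the deployed slot.

```python
# Hypothetical continuous-training loop sketch (not a Picsellia API):
# drift flagged by monitoring triggers a new model version + redeploy.
def continuous_pipeline(drift_flagged, registry):
    """If drift was flagged, register a retrained version and deploy it;
    otherwise keep the current champion."""
    if drift_flagged:
        new = f"v{len(registry['versions']) + 1}"
        registry["versions"].append(new)  # retraining produced a new version
        registry["deployed"] = new        # redeploy the new champion
    return registry["deployed"]


registry = {"versions": ["v1"], "deployed": "v1"}
continuous_pipeline(False, registry)  # no drift: "v1" stays deployed
continuous_pipeline(True, registry)   # drift: retrain and deploy "v2"
```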