Namespace

This page describes all the reserved namespace names for experiments and what each of them does.

File Namespace

We have reserved some names for your files so that special actions and emphasis can be applied automatically on the platform when you upload them.

Here is the full list:

All the names are in lowercase

config

This is the config file used for training with our python training packages for Tensorflow 1 and Tensorflow 2.

checkpoint-data-latest

This is the data file of checkpoints when training with Tensorflow.

checkpoint-index-latest

This is the index file of checkpoints when training with Tensorflow.

model-latest

This is the exported trained model file; it must be named this way to be deployed with Picsell.ia.
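
As a minimal sketch, here is how these reserved file names could be used together. It assumes the file-storage call described in "Store your files to Picsell.ia" is experiment.store(name, path); that signature and the paths are assumptions, so check that page for the exact call.

# Store training artifacts under the reserved file names so the platform
# can handle them automatically. The store(name, path) signature and the
# file paths below are assumptions; see "Store your files to Picsell.ia"
# for the exact call.
experiment.store('config', 'training/pipeline.config')
experiment.store('checkpoint-data-latest', 'training/ckpt-10.data-00000-of-00001')
experiment.store('checkpoint-index-latest', 'training/ckpt-10.index')
experiment.store('model-latest', 'training/saved_model.zip')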

Data Namespace

We have reserved some names for what you log to Picsell.ia; they are used to emphasize the most important information in the 'Summary' section of your experiment and to let us automatically compare your experiments on those values.

Here is the full list:

All the names are in lowercase

accuracy

If you log your accuracy to Picsell.ia under this name, we will automatically display the last (or only) value in the summary (see above).

loss

If you log your loss to Picsell.ia under this name, we will automatically display the last (or only) value in the summary (see above).
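
As a minimal sketch, logging these two values after each epoch could look like the following. The 'line' type is an assumption based on the other log types shown on this page, not something this page confirms.

# Log loss and accuracy under the reserved names so they show up in the
# experiment summary. The 'line' type is an assumption; check the
# "Log your results to Picsell.ia" page for the confirmed types.
experiment.log(name='accuracy', type='line', data=[0.71, 0.82, 0.88])
experiment.log(name='loss', type='line', data=[0.93, 0.54, 0.31])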

train-split

It will frequently happen that you split your data between a training set and a test/validation set. If you want to visualize how the data is distributed among your classes, you will likely log a bar chart with the following command:

data = {
    'x': ['car', 'person', 'bird'],  # one entry per class
    'y': [10, 25, 12],               # number of images per class
    'image_list': [...]              # internal_id of each picture in the split
}
experiment.log(name='train-split', type='bar', data=data)

By naming your data train-split, you get access to a brand-new tab in your experiment that lets you dive deep into the batches from your splits and check that there are no issues with your data.

The image_list key allows you to match each of your images with a particular set so you can explore it later in the platform. The value is a list containing the internal_id of each picture.

test-split

See train-split.

eval-split

See train-split.

labelmap

If you train a neural network within your experiment, you will need a label map to teach the network with your labels. To save it, we encourage you to log it this way:

data = {
    '0': 'car',     # class index (as a string) mapped to its label name
    '1': 'person',
    '2': 'bus'
}
experiment.log(name='labelmap', data=data, type='labelmap')

Remember to set the type as 'labelmap' so we will not try to display it in your dashboard, but we will be able to use it in the playground.

confusion-matrix

If you log a heatmap with the name 'confusion-matrix', it will obviously be displayed in your logs tab, but it will also be used in the eval tab for the interactive evaluation visualization. See heatmap for more information on the format.
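
As a minimal sketch, logging a confusion matrix could look like the following. The 'categories' and 'values' keys are assumptions about the heatmap payload, and the numbers are hypothetical; see the heatmap format for the confirmed structure.

# Hypothetical 3-class confusion matrix: rows = ground truth, columns = predictions.
# The 'categories' and 'values' keys are assumed, not confirmed on this page.
data = {
    'categories': ['car', 'person', 'bird'],
    'values': [
        [50, 2, 1],
        [3, 45, 0],
        [0, 4, 38]
    ]
}
experiment.log(name='confusion-matrix', type='heatmap', data=data)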

evaluation

After you train your model, you will surely perform an evaluation step. To make it easier for you to find edge cases, compare predictions to ground truth, and so on, we created a dedicated tab in the experiment dashboard called eval. To see how to correctly use the evaluation name, please refer to this page of the documentation.
