Namespace
This page describes all the reserved names in the experiment namespace and their specific roles.
We have reserved some names for your files so that special actions and emphasis can be applied automatically on the platform when you upload them.
Here is the full list:
All the names are in lowercase
This is the config file used for training with TensorFlow 1 and TensorFlow 2.
This is the data file of the checkpoints generated when training with TensorFlow.
This is the index file of the checkpoints generated when training with TensorFlow.
This is the exported trained model file; it must be named this way to be deployed using Picsell.ia.
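For illustration, here is a minimal sketch of uploading a file under one of these reserved names. It assumes `experiment` is an experiment handle obtained from the Picsell.ia Python SDK and that it exposes a store(name, path) upload method; the exact method name and signature may differ, so check the SDK reference.

```python
# Minimal sketch (assumed API): upload the training configuration under the
# reserved name "config" so the platform can pick it up automatically.
experiment.store("config", "training/pipeline.config")
```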
We have also reserved some names for the values you log to Picsell.ia. They are used to emphasize the most important information in the 'Summary' section of your experiment, and to automatically compare your experiments according to those values.
Here is the full list:
All the names are in lowercase
If you log your accuracy to Picsell.ia under this name, we will automatically display the last (or only) value in the summary (see above).
If you log your loss to Picsell.ia under this name, we will automatically display the last (or only) value in the summary (see above).
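As a minimal sketch, logging these metrics could look like the snippet below. It assumes `experiment` is an experiment handle obtained from the Picsell.ia Python SDK and that it exposes a log(name, data, type) method with a 'value' type for scalars; the exact signature and type names may differ.

```python
# Minimal sketch (assumed API): log scalar metrics under the reserved names so
# that the last value is shown in the experiment summary.
experiment.log("accuracy", 0.92, "value")
experiment.log("loss", 0.31, "value")
```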
You will frequently split your data between a training set and a test/validation set. If you want to visualize the distribution of the data among all your classes, you will likely log a bar chart looking like this:
With the following command:
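A minimal sketch of that command, assuming the same log(name, data, type) method as above and a bar-chart payload made of 'x' (class names) and 'y' (counts) lists; the exact data layout is an assumption.

```python
# Minimal sketch (assumed API and data layout): log the per-class distribution
# of the training split as a bar chart under the reserved name "train-split".
train_split = {
    "x": ["car", "pedestrian", "bicycle"],  # class names (example values)
    "y": [1200, 800, 150],                  # number of annotations per class
}
experiment.log("train-split", train_split, "bar")
```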
By naming your data train-split, you will have access to a brand new tab in your experiment that lets you dive deep into the batches from your splits and check that there are no issues with your data.
The image_list key allows you to match each of your images with a particular set so you can explore it later on the platform. The value is a list containing the internal_id of each picture.
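Extending the sketch above, the image_list key could be added to the same payload; the identifiers below are placeholders.

```python
# Minimal sketch (assumed data layout): attach the internal_id of each picture
# in the split so it can be explored later on the platform.
train_split = {
    "x": ["car", "pedestrian", "bicycle"],
    "y": [1200, 800, 150],
    "image_list": ["018f2c3a", "018f2c3b", "018f2c3c"],  # internal_id placeholders
}
experiment.log("train-split", train_split, "bar")
```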
If you train a neural network within your experiment, you will need a label map to train the network with your labels. To save it, we encourage you to log it this way:
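A minimal sketch, assuming the same log(name, data, type) method as above; the reserved name 'labelmap' and the id-to-label dictionary layout are assumptions here.

```python
# Minimal sketch (assumed name and layout): log the label map with the
# 'labelmap' type so it is not rendered as a chart in the dashboard.
labelmap = {"1": "car", "2": "pedestrian", "3": "bicycle"}
experiment.log("labelmap", labelmap, "labelmap")
```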
Remember to set the type to 'labelmap' so that we do not try to display it in your dashboard but can still use it in the playground.
If you log a heatmap with the name 'confusion-matrix', it will of course be displayed in your logs tab, but it will also be used in the eval tab for the interactive evaluation visualization.
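A minimal sketch, assuming the same log(name, data, type) method as above; the heatmap payload layout (category names plus a matrix of counts) is an assumption.

```python
# Minimal sketch (assumed data layout): log the confusion matrix as a heatmap
# under the reserved name "confusion-matrix" so it also feeds the eval tab.
confusion_matrix = {
    "categories": ["car", "pedestrian", "bicycle"],
    "values": [
        [50, 2, 1],   # rows: ground truth, columns: predictions (example values)
        [3, 40, 0],
        [1, 2, 25],
    ],
}
experiment.log("confusion-matrix", confusion_matrix, "heatmap")
```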
After you train your model, you will surely perform an evaluation step. To make it easier for you to find edge cases, compare predictions to ground truth, and so on, we created a dedicated tab in the experiment dashboard called eval.
See for more information on the format.
To see how to correctly use the evaluation name, please .