Config
Launch a Scan
First, let's initialize our client:
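(A minimal sketch using the picsellia Python package; check your SDK version for the exact import path and signature, and replace the token placeholder with your own.)

```python
from picsellia import Client

# Authenticate against Picsellia; the token value below is a placeholder
client = Client(api_token="<YOUR_API_TOKEN>")
```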
Here's an example of how to configure a Scan:
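The sketch below covers the top-level keys described in the table that follows. All values are illustrative, the exact shape of the execution entry is an assumption, and the launch call at the end is hypothetical, so check the SDK reference for the actual method:

```python
# Illustrative Scan configuration; every value here is an example, not a requirement
config = {
    "execution": {"type": "remote", "max_worker": 2},  # shape of this entry is an assumption
    "script": "train.py",
    "requirements": ["tensorflow==2.3.1"],
    "strategy": "optuna",
    "max_run": 20,
    "metric": {"name": "accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"distribution": "log_uniform", "min": 1e-5, "max": 1e-1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

project = client.get_project("my-project")  # uses the client initialized above
scan = project.launch_scan("my-scan", config)  # hypothetical method name; check the SDK reference
```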
Configuration
| Top-level key | Description |
| --- | --- |
| execution | How you want to run the Scan (manually, remotely, or using agents) |
| image | Name of the Docker image executing your training script (optional) |
| script | Filename of the script you want to execute (optional) |
| requirements | List of packages needed if you use our base Docker image (optional) |
| strategy | The search strategy for the Scan (required) |
| max_run | Maximum number of runs for this Scan (optional, default = 100) |
| early_stopping | The chosen early-stopping or pruning algorithm (optional) |
| metric | The metric to optimize (required) |
| parameters | The parameter space used for the search (required) |
| base_model | The model whose files and labelmap will be duplicated into each run's experiment (optional) |
| dataset | The dataset attached to each run's experiment (optional) |
execution
Specify how you want to run the Scan.
| execution | Description |
| --- | --- |
| manual | We will just define the grid of parameters; you will then be able to use our Python SDK to access each run with its set of parameters and execute it on your own, either in a script or in a Jupyter notebook for example |
| remote | We will automatically launch remote runs for you on servers equipped with NVIDIA V100S. You must set max_worker as the limit of parallel runs |
| agents | Runs are executed by Picsellia agents that you launch on your own machines |

You need to subscribe to a paid plan if you want to launch remote Scans.
image
Our Scan engine is based on Docker images that we will schedule and launch on distributed machines, which could be your computer or a cloud server hosted by Picsellia.
If you do not specify any image parameter, we will use our base image (called custom-run:1.0) that will encapsulate the script you provided, install the specified requirements and then launch your script.
Specifying a custom image that will run your code is compulsory if you do not provide a script param for us to launch in our base image.
But to save time on package installation, or to be sure that your script will run 100% of the time, we encourage you to build your own custom image and push it to Docker Hub so we can run it remotely, or have it on every machine where you want to launch our agents.
To specify a custom image, you just have to give its name like below:
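(The Docker Hub repository and tag below are illustrative.)

```python
config = {
    "image": "mydockerhubuser/my-training-image:1.0",  # illustrative <user>/<repo>:<tag> on Docker Hub
    # ... the rest of your Scan configuration
}
```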
script
If you want to be able to automatically launch your training script without having it on every machine, you can specify the path to the file; it will be saved on Picsellia and used for each run.
Providing a script is mandatory if you do not want to define a custom Docker image but use our base image.
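For example (the filename is illustrative, and the path is assumed to be resolved from where you build the configuration):

```python
config = {
    "script": "train.py",  # illustrative path to your training script
    # ... the rest of your Scan configuration
}
```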
requirements
Specify this parameter if you want to install specific Python packages needed for your script to run when using our base images.
For example, as our image only has the picsellia package installed, if you need tensorflow 2.3.1 to run your script you will set the requirements as below:
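(A sketch assuming requirements takes pip-style specifiers; the exact format expected by the platform is an assumption.)

```python
config = {
    "requirements": ["tensorflow==2.3.1"],  # pip-style pin; exact expected format is an assumption
    # ... the rest of your Scan configuration
}
```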
Alternatively, you can set requirements to the path of a requirements.txt file, just like this:
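```python
config = {
    "requirements": "requirements.txt",  # path to a requirements file instead of an inline list
    # ... the rest of your Scan configuration
}
```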
With the requirements.txt file looking like this:
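```
tensorflow==2.3.1
```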
strategy
Allows you to choose a search strategy from the following options:
| strategy | Description |
| --- | --- |
| grid | Grid search: will try out every parameter combination |
| optuna | Optuna-based search: the parameters of future runs are computed from the results of past runs |
max_run
When you perform hyperparameter search, you never really know how many runs will be needed to find the best combination. For example, if you choose an Optuna strategy for your Scan, the parameters for future runs are computed according to the results of past runs.
That's why you can set a max_run parameter, which guarantees that your Scan will stop before consuming unbounded resources and lets you create a new Scan with a reduced search space later.
early_stopping (coming soon)
Early stopping is an optional feature that can drastically speed up your hyperparameter search by deciding whether some runs should be stopped early or given a chance to continue.
If some runs are not promising, they are automatically stopped and the agents get a new set of parameters to try, so you do not waste time on unnecessary experiments.
| method | description |
| --- | --- |
| hyperband | The Hyperband pruning algorithm, which periodically stops the least promising runs so resources go to the most promising ones |

Parameters:

| parameter | description |
| --- | --- |
| min_iter | The minimum number of iterations (e.g. training epochs or steps) to wait before deciding whether or not to prune the run |
| max_iter | The maximum number of iterations to wait before you either prune the run or let it finish |
| reduction_factor | At the completion point of each rung, about 1/reduction_factor of the trials will be promoted |
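Once the feature ships, an early_stopping entry might look like the sketch below; the schema is an assumption built from the parameters above, and the values are illustrative:

```python
config = {
    "early_stopping": {  # hypothetical schema for a feature that is still coming soon
        "method": "hyperband",
        "min_iter": 1,
        "max_iter": 81,
        "reduction_factor": 3,
    },
    # ... the rest of your Scan configuration
}
```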
metric
The name of the metric you want to optimize, and the way you want to optimize it.
For the Scan to run properly, you must explicitly log the metric somewhere in the script you use, and the name of what you log must correspond to the metric you set up during configuration. This means that you should have a line looking like this:
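(A sketch assuming a metric named accuracy; the goal field in the configuration and the experiment.log signature are assumptions to verify against your SDK version.)

```python
# In the Scan configuration: name the metric and how to optimize it
config = {
    "metric": {"name": "accuracy", "goal": "maximize"},  # "goal" field is an assumption
    # ...
}

# In your training script: log a value under the exact same name
# (experiment is the Picsellia experiment handle for the current run)
accuracy = evaluate_model()  # hypothetical helper returning the metric value
experiment.log("accuracy", accuracy, "line")  # assumes an experiment.log(name, data, type) signature
```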
parameters
Specify the hyperparameter space to explore. You can either set up a list of constant values for each parameter, or just choose a distribution and the bounds (for optuna).
| Values | Description |
| --- | --- |
| value (int, float, str) | Single value for the hyperparameter |
| values (list[int, float, str]) | List of all values for the hyperparameter |
| distribution (str) | Choose an available distribution from the list below (available for the optuna strategy) |
| min (int, float) | Minimum value for the hyperparameter; the lower bound for the chosen distribution |
| max (int, float) | Maximum value for the hyperparameter; the upper bound for the chosen distribution |
| q (float) | Quantization step size for discrete hyperparameters |
| step (int) | Step size between values (for the int_uniform distribution) |
distributions
Here is the list of all the distributions you can use:

| Name | Information |
| --- | --- |
| constant | Constant value for the hyperparameter, equal to value |
| categorical | Categorical distribution; the hyperparameter value will be chosen from values |
| uniform | Continuous uniform distribution. You must set the bounds min and max |
| int_uniform | Discrete uniform distribution for integers. You must set the bounds min and max; you can also set step to a value higher than 1 if you want to |
| discrete_uniform | Discrete uniform distribution. You must set the bounds min and max, and the q parameter (step of discretization) |
| log_uniform | Continuous uniform distribution in the log domain. You must set the bounds min and max |
Examples
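The parameter names and bounds below are illustrative; they sketch the three styles described above (single value, list of values, and optuna distributions):

```python
# Single value and list of values (e.g. with the grid strategy)
parameters = {
    "epochs": {"value": 10},
    "batch_size": {"values": [16, 32, 64]},
}

# Distributions (optuna strategy)
parameters = {
    "learning_rate": {"distribution": "log_uniform", "min": 1e-5, "max": 1e-1},
    "dropout": {"distribution": "uniform", "min": 0.1, "max": 0.5},
    "num_layers": {"distribution": "int_uniform", "min": 1, "max": 4, "step": 1},
    "momentum": {"distribution": "discrete_uniform", "min": 0.8, "max": 0.99, "q": 0.01},
}
```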
base_model
As when you create an experiment, you can choose a model whose files, labelmap, and so on will be duplicated into each run's experiment.
To choose a model, you have to specify the username of the author and the model name this way: <username>/<model_name>
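For example (the username and model name below are illustrative):

```python
config = {
    "base_model": "picsellia/yolov5",  # illustrative <username>/<model_name>
    # ... the rest of your Scan configuration
}
```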
dataset
As when you create an experiment, you can choose a dataset to attach to your runs. To do this, the dataset must first have been attached to the project. Then you have to specify the chosen dataset this way: <dataset_name>/<dataset_version>
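For example (the dataset name and version below are illustrative):

```python
config = {
    "dataset": "my_dataset/first",  # illustrative <dataset_name>/<dataset_version>
    # ... the rest of your Scan configuration
}
```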