Deploy a model in production (TensorFlow only)
In this tutorial, we will see how to deploy and monitor your model using Picsell.ia, from your saved model files to an inference-ready production model.
We will assume that you have already trained and exported a TensorFlow model. If not, you can follow this short tutorial to learn how to do it with Picsell.ia.
When you export a trained model with TensorFlow, you end up with a folder that should look something like this:
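For a standard TensorFlow SavedModel export, the folder typically contains a saved_model.pb file and a variables folder (the exact layout can vary with your export pipeline):

```
exported_model/
├── saved_model.pb
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index
```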
There are two ways to export a model to Picsell.ia:
From an Experiment
From raw files
We will cover both cases during this tutorial.
If you have performed your training in the scope of an experiment, you can store assets that will be linked to this Experiment. We will see how to properly store your trained model so you can deploy it later.
Start by initializing your Client (replace the tokens and the experiment name with your own).
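Here is a minimal sketch of that initialization; the import path and the checkout_experiment method shown here are assumptions based on common Picsell.ia SDK usage, so check the SDK reference if your version differs:

```python
from picsellia.client import Client  # import path may vary across SDK versions

api_token = "YOUR_API_TOKEN"        # replace with your own token
experiment_name = "my_experiment"   # replace with your experiment name

client = Client(api_token=api_token)
experiment = client.checkout_experiment(name=experiment_name)  # assumed method name
```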
Then, the only command you have to run is the following:
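A minimal sketch of that call, assuming your exported model lives in a local folder (the path below is a placeholder):

```python
# Store the exported model folder under the mandatory 'model-latest' name,
# zipping it so the inference engine can consume it
experiment.store("model-latest", "./exported_model/saved_model", zip=True)
```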
The first argument is the name given to your file, used to retrieve the asset later or display it on the platform. It HAS TO be named `model-latest` to be recognized as a trained-model file by Picsell.ia (see the documentation about namespaces for more information).
The second argument is the path to the folder containing the `.pb` file and the `variables` folder.
Here we set `zip`, the third argument, to `True` (which will compress your folder into a .zip file) because that is the format we need to run inference later using our engine.
We also need to know which classes your model has been trained on. To send the labelmap, you can proceed with the following method:
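As an illustration, a labelmap is just a dict mapping class indices to class names; the log call below follows a pattern seen in Picsell.ia training scripts, but the exact method and signature may differ in your SDK version:

```python
# Class indices (as strings) mapped to class names; adapt to your own classes
labelmap = {"1": "car", "2": "pedestrian", "3": "bicycle"}

experiment.log("labelmap", labelmap, "labelmap", replace=True)  # assumed signature
```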
To convert your `experiment` into a `model` instance, we added a step that we call publishing. We did this so you don't confuse the outputs of your experiments, which might not be good, with the model that contains your final assets.
To do this, run the following command:
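A minimal sketch, using a placeholder model name:

```python
# Publish the experiment's stored assets as a model named 'my-awesome-model'
experiment.publish("my-awesome-model")
```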
The first argument is the name under which we want to create our model (it will be used to retrieve or display it on Picsell.ia).
You can also publish your model directly from the platform. If you have an experiment ready to be published (meaning it has a file named `model-latest`), then when you go to your experiment on the platform you should see something like this.
If you click on Export as model, it will publish your experiment as a model, just as the publish method above would have done.
If you are not in the scope of an experiment, you can create a `model` instance directly by using the `Network` object of the SDK.
Start by initializing your Client (use your own token) and then create a `Network`.
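A hedged sketch of those two steps; the create_network call and its parameters are assumptions for illustration, so check the SDK reference for the exact signature:

```python
from picsellia.client import Client  # import path may vary across SDK versions

client = Client(api_token="YOUR_API_TOKEN")  # replace with your own token

# 'create_network' and its parameters are assumed names for illustration
network = client.create_network(name="my-awesome-model", type="detection")
```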
Remember that you have to set the type of your model; it must be one of the following:
detection
segmentation
classification
We will add more model types in the future, but these are the only ones currently supported for inference.
And then run the following command to upload your files:
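For example, assuming your exported model lives in a local folder (placeholder path):

```python
# Store the exported model folder under the mandatory 'model-latest' name, zipped
network.store("model-latest", "./exported_model/saved_model", zip=True)
```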
The first argument is the name given to your file, used to retrieve the asset later or display it on the platform. It HAS TO be named `model-latest` to be recognized as a trained-model file by Picsell.ia (see the documentation about namespaces for more information).
The second argument is the path to the folder containing the `.pb` file and the `variables` folder.
Here we set `zip`, the third argument, to `True` (which will compress your folder into a .zip file) because that is the format we need to run inference later using our engine.
We also need to know which classes your model has been trained on. To send the labelmap, you can proceed with the following method:
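As a purely hypothetical sketch (the exact call for attaching a labelmap to a Network is an assumption here, shown only to illustrate the shape of the data):

```python
# Hypothetical call: attach the labelmap to the network;
# check the SDK reference for the real method name
labelmap = {"1": "car", "2": "pedestrian", "3": "bicycle"}
network.update(labels=labelmap)  # assumed method
```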
And that's it! Your model has been created. Now let's move on to the next steps.
Now, if we go to the All models page in Picsell.ia, we should see our brand-new model.
And you should see it on the Deployment page too.
Now click on the deploy icon on the right.
You will see a message telling you that we are deploying your model, it should not take long before you see it succeed.
OK, now we have a fully functional model in production. Congratulations! 🥳
You can now click on the code icon to see a code snippet showing how to perform inference with your model.
You can then copy/paste this snippet, replacing the token variable with your own API token and the file path with the path to the file you want to run inference on.
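The snippet shown on the platform is the authoritative one. Purely to illustrate its general shape, an HTTP call to a hosted model often looks something like this; the URL, auth scheme, and payload format below are hypothetical:

```python
import requests

api_token = "YOUR_API_TOKEN"
model_url = "https://app.picsellia.com/..."  # hypothetical: copy the real URL from the snippet

with open("image.jpg", "rb") as f:
    response = requests.post(
        model_url,
        headers={"Authorization": f"Token {api_token}"},  # assumed auth scheme
        files={"image": f},
    )

print(response.json())  # predictions returned by the hosted model
```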
If you click on the zoom icon, you will have access to details about your hosted model, such as its latency, number of API calls, and other stats.
That's it, you now have a fully functional model deployed with Picsell.ia! See you in another tutorial 😃