
Python SDK Reference

This page describes every method of the picsellia package.

Installation

First, download our Python package from PyPI; you can install it with pip:
pip install picsellia

Dependencies

The following packages are installed alongside the picsellia package; they are used for mathematical operations, array and image manipulation, and HTTP requests:
  • numpy>=1.18.5
  • Pillow>=7.2.0
  • requests>=2.24.0
  • scipy>=1.4.1

Client

To connect your code to Picsell.ia, you must initialize our client.
from picsellia.client import Client
​
api_token = '4d388e237d10b8a19a93517ffbe7ea32ee7f4787'
​
clt = Client(api_token)
You will be greeted by this nice message (Pierre-Nicolas is my name; you should see your own username here, unless you are also named Pierre-Nicolas 👀):
Hi Pierre-Nicolas, welcome back.

Datalake

You need to instantiate a Datalake object in order to interact with your pictures or datasets.

__init__

If you need to interact with your datalake, you must initialize the Datalake class. It is a subclass of the Client.
from picsellia.client import Client
​
datalake = Client.Datalake(
    api_token=None,
    organization=None
)

Arguments:

  • api_token (string) Your personal API token (Find it in your profile page 🔥)
  • organization (string, optional) the name of the organization you want to work with (None defaults to your organization)

Returns:

None

upload

The upload method allows you to upload a dataset (images and annotations, or annotations only).
datalake.upload(
    name: str=None,
    version: str=None,
    imgdir: str=None,       # e.g. "path/to/imgdir"
    ann_path: str=None,     # e.g. "path/to/annotations"
    ann_format: str=None,   # "COCO", "PASCAL-VOC" or "PICSELLIA"
    tags: List[str]=[],
    nb_jobs: int=5,
    rectangle: bool=False
)
Arguments:
  • name (string, required) Name of the dataset to be created, or fetched if you only want to upload annotations
  • version (string, optional) Version of the dataset to fetch; if None, we will fetch the latest version created with this name.
  • imgdir (string, optional) Path to the directory of the pictures to upload; leave blank if you just want to upload annotations.
  • ann_path (string, required) Path to the annotation files; it can be a directory or a single file.
  • ann_format (string, required) You can upload the COCO, PASCAL-VOC or PICSELLIA format.
  • tags (list, optional) List of tags to add to the uploaded images; if None, the tag will be the upload date.
  • nb_jobs (int, optional) Number of processes to use for the upload; set nb_jobs=-1 to use all available processes.
  • rectangle (bool, optional) Set to True to force uploading bbox annotations when both polygons and bboxes are available in your annotation files.
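To tie these arguments together, here is a small sketch of a complete upload. The `upload_dataset` wrapper and its format check are our own illustration, not part of the SDK; the dataset name and paths are placeholders.

```python
# Illustrative wrapper around datalake.upload (not an SDK function):
# it validates the annotation format, then forwards to the documented method.
SUPPORTED_FORMATS = {"COCO", "PASCAL-VOC", "PICSELLIA"}

def upload_dataset(datalake, name, imgdir, ann_path, ann_format, tags=None):
    """Check ann_format, then upload images and annotations in one call."""
    if ann_format not in SUPPORTED_FORMATS:
        raise ValueError(f"ann_format must be one of {sorted(SUPPORTED_FORMATS)}")
    datalake.upload(
        name=name,
        imgdir=imgdir,        # directory of pictures, e.g. "path/to/imgdir"
        ann_path=ann_path,    # annotation file or directory
        ann_format=ann_format,
        tags=tags or [],
        nb_jobs=-1,           # use all available processes
    )
```

With a real client this would be called as `upload_dataset(Client.Datalake(api_token=...), "myDataset", "path/to/imgdir", "path/to/annotations", "COCO")`.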

Picture

The Picture object allows you to interact with your assets.

__init__

If you need to interact with your pictures, you must initialize the Picture class. It is a subclass of the Datalake.
from picsellia.client import Client
​
picture = Client.Datalake.Picture(
    api_token=None,
    organization=None
)

Arguments:

  • api_token (string) Your personal API token (Find it in your profile page 🔥)
  • organization (string, optional) the name of the organization you want to work with (None defaults to your organization)

upload

To upload assets to your lake
picture.upload(filepath=None, tags=[], source='sdk')

Arguments:

  • filepath (string or list) Either one filepath pointing to the asset to upload, or a list of paths.
  • tags (list, optional) the list of tags to attach to the uploaded assets
  • source (string, optional) Specify the source of the upload; default is "sdk"
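For instance, to push a whole local folder of images in one call, you might wrap `picture.upload` like this. The helper and its extension filter are our own sketch, not an SDK function.

```python
import os

def upload_image_folder(picture_client, folder, tags):
    """Gather every image file in `folder` and upload them in one call."""
    paths = [
        os.path.join(folder, name)
        for name in sorted(os.listdir(folder))
        if name.lower().endswith((".jpg", ".jpeg", ".png"))
    ]
    # picture.upload accepts a list of paths, so one call suffices.
    picture_client.upload(filepath=paths, tags=tags)
    return paths
```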

list

picture.list()

Returns:

A list containing the picture objects of your datalake.
{'pictures': [{'picture_id': '25d76bee-a6d3-43d7-8620-6ff18f7a5557',
'internal_key': '15288614-bedb-4cab-97c1-23684cf9c761.jpg',
'external_url': 'GE_121.jpg',
'creation_date': '2021-02-07',
'height': 310,
'width': 322,
'tag': []},
{'picture_id': 'd1cf0d96-5c05-4fb4-aa4a-5e90f3c748da',
'internal_key': 'ed6e12e3-0db3-461f-bcd9-54d48509680b.jpg',
'external_url': 'GE_55.jpg',
'creation_date': '2021-02-07',
'height': 2908,
'width': 4800,
'tag': []},
{'picture_id': '716e45a8-09f6-4ec5-9dd1-29c313ae2cdf',
'internal_key': 'a9597e66-584d-4568-b1cc-31b951154edd.jpg',
'external_url': 'GE_309.jpg',
'creation_date': '2021-02-07',
'height': 208,
'width': 254,
'tag': []},
{'picture_id': '88d2b82d-2a38-4c30-912b-79474a617072',
'internal_key': 'c83493f7-ce61-4a0b-8166-54698d071792.jpg',
'external_url': 'Test85.jpg',
'creation_date': '2021-02-07',
'height': 663,
'width': 710,
'tag': []},
{'picture_id': 'd9d4684f-a2d1-4431-93e7-dce352aff471',
'internal_key': 'b7a8bb6b-d3f2-46d5-9599-dfb9b4d2f1cd.jpg',
'external_url': 'GE_466.jpg',
'creation_date': '2021-02-07',
'height': 462,
'width': 520,
'tag': []}]
}

fetch

Fetch images matching the given tags
picture.fetch(
    quantity=1,
    tags=["drone", "coco"]
)

Parameters:

  • quantity (float, optional) the fraction of assets to fetch (1 meaning 100%)
  • tags (list, required) a list of tags used to search in your Datalake

Returns:

The list of all the fetched assets
[{'picture_id': '8b536f4c-c95b-4f5f-afbe-a9f31242a235',
'internal_key': '51ee5ee9-5176-4e98-b173-0687ed6c7b2f.jpg',
'external_url': '9999966_00000_d_0000055.jpg',
'creation_date': '2021-02-07',
'height': 1050,
'width': 1400,
'tag': ['drone', 'coco', 'vizdrone']},
{'picture_id': '426ce7bd-7535-4fe5-80cd-c41e07f84c99',
'internal_key': '7f4f1b60-d1bb-4458-b3bb-5f3d01a8f7eb.jpg',
'external_url': '9999955_00000_d_0000312.jpg',
'creation_date': '2021-02-07',
'height': 788,
'width': 1400,
'tag': ['drone', 'coco', 'vizdrone']},
{'picture_id': '320e69fc-964a-478e-b689-05351213578e',
'internal_key': '5aa4036e-8050-4fef-9c3c-af9ba46db511.jpg',
'external_url': '9999955_00000_d_0000043.jpg',
'creation_date': '2021-02-07',
'height': 788,
'width': 1400,
'tag': ['drone', 'coco', 'vizdrone']},
{'picture_id': 'bed1ddab-7cf1-460b-99a8-c4125612caa3',
'internal_key': 'f73fedbf-d87e-4483-859a-77a3f8e38702.jpg',
'external_url': '9999982_00000_d_0000167.jpg',
'creation_date': '2021-02-07',
'height': 1050,
'width': 1400,
'tag': ['drone', 'coco', 'vizdrone']},
{'picture_id': '0bc695d1-03bb-48ea-bd89-5bef5bf02c23',
'internal_key': '98ac39c5-a157-4a5e-bafc-a8399b90f230.jpg',
'external_url': '9999974_00000_d_0000049.jpg',
'creation_date': '2021-02-07',
'height': 1078,
'width': 1916,
'tag': ['drone', 'coco', 'vizdrone']},
]

status

Once you have fetched pictures, you can call the status method to see the number of assets fetched
picture.status()

Return:

Number of Assets selected : 1472

delete

Delete the list of pictures
picture.delete(
    pictures=None
)

Arguments:

  • pictures (list, optional) The list of pictures to delete from your lake; if None, the latest fetched pictures will be deleted.

Returns:

None

add_tags

Add tags to selected pictures
picture.add_tags(
    pictures=[],
    tags=["tag_to_add"]
)

Arguments:

  • pictures (list, optional) The list of pictures to select from your lake; if None, tags will be added to the last fetched pictures.
  • tags (list, required) The list of tags to add to the selected pictures

Returns:

None

remove_tags

Remove tags from selected pictures
picture.remove_tags(
    pictures=[],
    tags=["tag_to_remove"]
)

Arguments:

  • pictures (list, optional) The list of pictures to select from your lake; if None, tags will be removed from the last fetched pictures.
  • tags (list, required) The list of tags to delete from the selected pictures

Returns:

None

Dataset

The Dataset object allows you to interact with your datasets (annotations, labels, questions and answers).

__init__

If you need to interact with your datasets, you must initialize the Dataset class. It is a subclass of the Datalake.
from picsellia.client import Client
​
dataset = Client.Datalake.Dataset(
    api_token=None,
    organization=None
)

Arguments:

  • api_token (string) Your personal API token (Find it in your profile page 🔥)
  • organization (string, optional) the name of the organization you want to work with (None defaults to your organization)

list

dataset.list()

Returns:

A list containing the dataset objects for your account.
-------------
Dataset Name: GoogleEarthShip
Dataset Version: first
Nb Assets: 793
-------------
-------------
Dataset Name: VizDrone2017
Dataset Version: first
Nb Assets: 6470
-------------
-------------
Dataset Name: FaceMaskDetection
Dataset Version: first
Nb Assets: 6024
-------------
-------------
Dataset Name: TrashDataset
Dataset Version: first
Nb Assets: 1435
-------------

fetch

Fetch a dataset by its name and version
dataset.fetch(
    name="myDataset",
    version="latest"
)

Parameters:

  • name (string, optional) the name of the dataset to fetch
  • version (string, optional) the version of the dataset to fetch, if None, the client will fetch latest

Returns:

A Dataset object relative to the fetched dataset
dataset = dataset.fetch(name="myDataset",version="latest")
print(dataset)
{
"dataset_id": "9061846f-597a-47d6-9711-7f75671841a2",
"dataset_name": "myDataset",
"version": "latest",
"size": 128,
"description": "None",
"private": true
}

create

Create a new dataset and attach pictures to it; to do so, you first need to fetch pictures.
dataset.create(
    name="myDataset",
    description='',
    private=True,
    pictures=[]
)

Parameters:

  • name (string, optional) the name of the dataset to create
  • description (string, optional) the description of the dataset to create
  • private (bool, optional) If True, your dataset will be private; if False, it will be accessible to anyone
  • pictures (list, required) The list of pictures to attach to your dataset

Return:

the id of the created dataset
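Since create expects pictures you have already fetched, the two calls are often combined. The `create_dataset_from_tags` helper below is our own hedged sketch of that flow, not an SDK method; it assumes both clients are already initialized.

```python
def create_dataset_from_tags(picture_client, dataset_client, tags, name):
    """Fetch 100% of the assets matching `tags`, then create a dataset with them."""
    pictures = picture_client.fetch(quantity=1, tags=tags)  # 1 == 100%
    if not pictures:
        raise ValueError(f"no pictures found for tags {tags}")
    # Forward the fetched pictures straight into the new dataset.
    return dataset_client.create(
        name=name, description="", private=True, pictures=pictures
    )
```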

new_version

Create a new version of a dataset and attach pictures to it; to do so, you first need to fetch pictures.
dataset.new_version(
    name="myDataset",
    version='newVersion',
    from_version='latest',
    pictures=[]
)

Parameters:

  • name (string, optional) the name of the dataset
  • version (string, optional) the version name of the dataset to create
  • from_version (string, optional) The origin version for your new version; if None, we'll create the new version from the latest one
  • pictures (list, required) The list of pictures to attach to your version

Return:

None

create_labels

Sets up the labels (tools for drawing bounding-boxes, polygons...) for your dataset.
dataset.create_labels(
    name=None,
    ann_type=None
)

Arguments:

  • name (str, required) name of the label you want to set up (e.g. car, bird, plane...)
  • ann_type (str, required) type of shape that will be used for annotations :
    • 'rectangle': bounding-boxes for object-detection
    • 'polygon': polygons for segmentation

Returns:

None
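A typical pattern is to register one label per class in your labelmap. The loop below is a small illustrative helper built on `create_labels`, not an SDK method.

```python
def setup_labels(dataset_client, class_names, ann_type):
    """Register every class name as a label with the given annotation shape."""
    if ann_type not in ("rectangle", "polygon"):
        raise ValueError("ann_type must be 'rectangle' or 'polygon'")
    for name in class_names:
        # One create_labels call per class, all with the same shape type.
        dataset_client.create_labels(name=name, ann_type=ann_type)
```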

list_pictures

Get the list of all the images in the fetched dataset
dataset.list_pictures(
    dataset_id=None
)

Arguments:

  • dataset_id (str, optional) id of the dataset (if not fetched)

Returns:

list of Picture objects

add_data

Add the fetched pictures to a dataset
dataset.add_data(
    name="myDataset",
    version='myVersion',
    pictures=[]
)
If you fetched a dataset before, you won't have to specify the name and version of the dataset

Parameters:

  • name (string, optional) the name of the dataset
  • version (string, optional) the version name of the dataset to fetch; if None, we'll take the latest
  • pictures (list, required) The list of pictures to add

Return:

None
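As with create, add_data is usually preceded by a fetch. The helper below sketches that flow under the assumption that both clients are already initialized; it is our own illustration, not an SDK method.

```python
def extend_dataset(picture_client, dataset_client, tags, name, version):
    """Fetch assets by tag and append them to an existing dataset version."""
    pictures = picture_client.fetch(quantity=1, tags=tags)  # 1 == 100%
    dataset_client.add_data(name=name, version=version, pictures=pictures)
    return len(pictures)
```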

delete

Delete a dataset
dataset.delete(
    name="myDataset",
    version='myVersion'
)

Arguments:

  • name (string, optional) the name of the dataset to delete
  • version (string, optional) the version name of the dataset to delete; if None, we'll take the latest

Returns:

None

download

Download all the images from a dataset into a folder
dataset.download(
    dataset=None,
    folder_name=None
)

Arguments:

  • dataset (str, required) the name of the dataset you want to download written <dataset_name>/<version>
  • folder_name (str, optional) the name of the folder you want to download the pictures in, defaults to dataset_name/version if None

Returns:

None
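Because the dataset argument is written <dataset_name>/<version>, a tiny helper can build that identifier for you. This wrapper is our own sketch, not part of the SDK.

```python
def download_dataset(dataset_client, name, version, folder_name=None):
    """Download every image of <name>/<version>; the folder defaults to that identifier."""
    identifier = f"{name}/{version}"
    dataset_client.download(dataset=identifier, folder_name=folder_name or identifier)
    return identifier
```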

add_annotation

Create an annotation for a picture in a dataset (or add objects to existing annotation)
dataset.add_annotation(
    picture_external_url=None,
    dataset_id=None,
    data={},
    image_qa={}
)

Arguments:

  • picture_external_url (str, required) the name of the target image
  • dataset_id (str, optional) leave None if you already fetched a dataset
  • data (dict, required) annotation data
  • image_qa (dict, optional) Q&A data for image

Formats:

  • classification
data = [
    {
        "type": "classification",
        "label": "car"
    }
]
dataset.add_annotation("awsm_pic.jpg", True, data=data)
  • detection
data = [
    {
        "type": "rectangle",
        "label": "car",
        "rectangle": {
            "top": 16,
            "left": 10,
            "width": 50,
            "height": 60
        }
    }
]
dataset.add_annotation("awsm_pic.jpg", True, data=data)
  • segmentation
data = [
    {
        "type": "polygon",
        "label": "rose",
        "polygon": {
            "geometry": [
                {"x": 12, "y": 15},
                {"x": 178, "y": 151},
                {"x": 122, "y": 196},
                {"x": 112, "y": 10}
            ]
        }
    }
]
dataset.add_annotation("awsm_pic.jpg", True, data=data)
  • Q&A
data = [
    {
        "type": "polygon",
        "label": "rose",
        "polygon": {
            "geometry": [
                {"x": 12, "y": 15},
                {"x": 178, "y": 151},
                {"x": 122, "y": 196},
                {"x": 112, "y": 10}
            ]
        },
        "qa": [
            {
                "type": "text",
                "question": "What color ?",
                "answer": "red"
            },
            {
                "type": "mc",
                "question": "What color ?",
                "answer": ["red"],
                "choices": ["red", "yellow", "blue"]
            },
            {
                "type": "select",
                "question": "Is it raining ?",
                "answer": "yes",
                "choices": ["yes", "no"]
            },
            {
                "type": "range",
                "question": "size ?",
                "answer": 68,
                "max": 100,
                "min": 0
            }
        ]
    }
]

image_qa = [
    {
        "type": "text",
        "question": "How much is the image rotated ?",
        "answer": "approx. 32 deg."
    },
    {
        "type": "mc",
        "question": "image attribute",
        "answer": ["high contrast"],
        "choices": ["high contrast", "saturated"]
    },
    {
        "type": "select",
        "question": "image color",
        "answer": "blue",
        "choices": ["red", "blue", "green"]
    },
    {
        "type": "range",
        "question": "brightness",
        "answer": 36,
        "max": 100,
        "min": 0
    }
]
dataset.add_annotation("awsm_pic.jpg", True, data=data, image_qa=image_qa)

Network

Networks are trained architectures that you can deploy for inference (if available), use to start new experiments, and share within your Organization's models.

The Network object

{'organization': {'name': 'picsell'},
'model_id': 'b76ececa-274d-48de-b39e-70cf73941aba',
'serving_id': 'b76ececa-274d-48de-b39e-70cf73941aba',
'tag': ['efficientdet', 'd2', 'COCO', 'base'],
'private': False,
'network_name': 'efficientdet-d2',
'description': 'This is a real game changer',
'model_object_name': '',
'checkpoint_object_name': '',
'origin_checkpoint_objects': {},
'type': 'detection',
'files': {'config': 'b76ececa-274d-48de-b39e-70cf73941aba/pipeline.config',
'model-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/0/saved_model.zip',
'checkpoint-data-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/ckpt-0.data-00000-of-00001',
'checkpoint-index-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/ckpt-0.index'},
'thumb_object_name': 'b76ececa-274d-48de-b39e-70cf73941aba/effdet.png',
'framework': 'tensorflow2'
}

Attributes

  • model_id (string) Unique identifier of your model
  • owner (hash, user_object) The creator of the model
  • network_name (string) The name of your model
  • description (string) A short description of what your model does
  • type (string) The type of application for your model, if you want to perform pre-annotation on Picsellia it has to be one of the following (but you can set your own type otherwise):
    • 'detection'
    • 'segmentation'
    • 'classification'
  • organization (hash, organization object) The organization under which your model is stored
  • private (boolean) Tells if your model is available for everyone in the public HUB or not
  • framework (string) The framework used for training
    • 'tensorflow1'
    • 'tensorflow2'
    • 'pytorch'
  • tag (list) List of tags to identify and sort your models
  • files (dict) Dictionary containing the list of files of your model
  • labels (dict) Dictionary of the labelmap of your model
  • base_parameters (dict) Dictionary of the base parameters allowing anyone to reproduce the training or iterate with already existing parameters
  • readme_text (str) A markdown text containing more information about your model

__init__

If you want to interact with your models, you have to initialize the Network class. It is a subclass of the Client.
from picsellia.client import Client
​
network = Client.Network(
    api_token=None,
    organization=None
)

Arguments:

  • api_token (string) Your personal API token
  • organization (string, optional) the name of the organization you want to work with (None defaults to your organization)

list

List all the models of an organization
By default, this lists the models of your own organization, but you can specify the name of another organization where you are part of the team.
network.list()

Returns:

A list containing the models of the chosen organization.
[
{'organization': {'name': 'picsell'},
'model_id': 'b76ececa-274d-48de-b39e-70cf73941aba',
'serving_id': 'b76ececa-274d-48de-b39e-70cf73941aba',
'tag': ['efficientdet', 'd2', 'COCO', 'base'],
'private': False,
'network_name': 'efficientdet-d2',
'description': 'This is a real game changer',
'model_object_name': '',
'checkpoint_object_name': '',
'origin_checkpoint_objects': {},
'type': 'detection',
'files': {'config': 'b76ececa-274d-48de-b39e-70cf73941aba/pipeline.config',
'model-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/0/saved_model.zip',
'checkpoint-data-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/ckpt-0.data-00000-of-00001',
'checkpoint-index-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/ckpt-0.index'},
'thumb_object_name': 'b76ececa-274d-48de-b39e-70cf73941aba/effdet.png',
'framework': 'tensorflow2'
},
{'organization': {'name': 'picsell'},
'model_id': 'b76ececa-274d-48de-b39e-70cf73941aba',
'serving_id': 'b76ececa-274d-48de-b39e-70cf73941aba',
'tag': None,
'private': True,
'network_name': 'vizdrone-test',
'description': 'This is a real game changer',
'model_object_name': '',
'checkpoint_object_name': '',
'origin_checkpoint_objects': {},
'type': 'detection',
'files': {'config': 'b76ececa-274d-48de-b39e-70cf73941aba/pipeline.config',
'model-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/0/saved_model.zip',
'checkpoint-data-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/ckpt-101.data-00000-of-00001',
'checkpoint-index-latest': 'b76ececa-274d-48de-b39e-70cf73941aba/ckpt-101.index'},
'thumb_object_name': '',
'framework': ''
}
]

get

This method allows you to retrieve a particular model in order to update it or store some files.
network.get(identifier=None)

Arguments:

  • identifier (string) Either the name or the id of the model you want to retrieve

Returns:

The Network object

create

This method allows you to create a new Network from the SDK
network.create(
    name=None,
    type=None
)

Attributes:

  • name (str, required) The name of your Network
  • type (str, required) the type of your Network, i.e. the task it performs, such as:
    • 'detection', for object detection
    • 'segmentation', for object segmentation
    • 'classification', for image classification

Returns:

Network object

update

This method allows you to update the properties of a Network from your Organization
You must get or create a network before calling the update method.
network.update(**kwargs)

Arguments:

  • **kwargs (required) can be any property of the Network object described above.
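Putting get and update together, a retagging flow might look like the sketch below. The helper and its keyword handling are our own illustration, not an SDK method; it assumes the Network client is already initialized.

```python
def retag_network(network_client, identifier, new_tags, description=None):
    """Fetch a model by name or id, then update its mutable properties."""
    network_client.get(identifier=identifier)
    # Build the kwargs for update() from the properties we want to change.
    kwargs = {"tag": new_tags}
    if description is not None:
        kwargs["description"] = description
    network_client.update(**kwargs)
    return kwargs
```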