Check the following when troubleshooting your pre-annotations.
Units of image annotations
The `x`, `y`, `width`, and `height` values of image annotations are provided as percentages of the overall image dimensions.
Use the following conversion formulas for `x`, `y`, `width`, and `height`:
```
pixel_x = x / 100.0 * original_width
pixel_y = y / 100.0 * original_height
pixel_width = width / 100.0 * original_width
pixel_height = height / 100.0 * original_height
```
For example:
```python
task = {
    "annotations": [{
        "result": [
            {
                "...": "...",
                "original_width": 600,
                "original_height": 403,
                "image_rotation": 0,
                "value": {
                    "x": 5.33,
                    "y": 23.57,
                    "width": 29.16,
                    "height": 31.26,
                    "rotation": 0,
                    "rectanglelabels": ["Airplane"]
                }
            }
        ]
    }]
}

# convert from LS percent units to pixels
def convert_from_ls(result):
    if 'original_width' not in result or 'original_height' not in result:
        return None

    value = result['value']
    w, h = result['original_width'], result['original_height']

    if all([key in value for key in ['x', 'y', 'width', 'height']]):
        return w * value['x'] / 100.0, \
               h * value['y'] / 100.0, \
               w * value['width'] / 100.0, \
               h * value['height'] / 100.0

# convert from pixels to LS percent units
def convert_to_ls(x, y, width, height, original_width, original_height):
    return x / original_width * 100.0, y / original_height * 100.0, \
           width / original_width * 100.0, height / original_height * 100.0

# convert from LS
output = convert_from_ls(task['annotations'][0]['result'][0])
if output is None:
    raise Exception('Wrong convert')

pixel_x, pixel_y, pixel_width, pixel_height = output
print(pixel_x, pixel_y, pixel_width, pixel_height)

# convert back to LS
x, y, width, height = convert_to_ls(pixel_x, pixel_y, pixel_width, pixel_height, 600, 403)
print(x, y, width, height)
```
Check the configuration values of the labeling configuration and tasks
The `from_name` of the pre-annotation task JSON must match the value of `name` in the `<Labels name="label" toName="text">` portion of the labeling configuration. The `to_name` must match the `toName` value.

In the text example on this page, the JSON includes `"from_name": "label"` to correspond with the `<Labels name="label"` and `"to_name": "text"` to correspond with the `toName="text"` of the labeling configuration. The default template might contain `<Labels name="ner" toName="text">`. To work with this example JSON, you need to update the values to match.
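As a quick sanity check, you can compare these names programmatically. The following is a minimal sketch, assuming a simple flat labeling configuration parsed with Python's standard `xml.etree` module; it is not part of the Label Studio API:

```python
import xml.etree.ElementTree as ET

# A hypothetical labeling configuration and one prediction result
config = '<View><Labels name="label" toName="text"/><Text name="text" value="$text"/></View>'
prediction_result = {"from_name": "label", "to_name": "text", "type": "labels"}

# Collect (name, toName) pairs from the control tags in the config
root = ET.fromstring(config)
pairs = {(tag.get("name"), tag.get("toName"))
         for tag in root.iter() if tag.get("toName") is not None}

# The prediction's from_name/to_name pair must appear among the config pairs
assert (prediction_result["from_name"], prediction_result["to_name"]) in pairs
```

If the assertion fails, the pre-annotation references a control tag name that the configuration does not define.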
In the image example on this page, the XML includes:

```xml
...
<Choices name="choice" toName="image" showInLine="true">
...
<RectangleLabels name="label" toName="image">
...
```

to correspond with the following portions of the example JSON:

```json
...
"type": "rectanglelabels",
"from_name": "label", "to_name": "image",
...
"type": "choices",
"from_name": "choice", "to_name": "image",
...
```
Check the labels in your configuration and your tasks
Make sure that you have a labeling configuration set up for the labeling interface, and that the labels in your JSON file exactly match the labels in your configuration. If you’re using a tool to transform your model output, make sure that the labels aren’t altered by the tool.
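One way to verify this is to collect every label used in your task JSON and compare it against the labels the configuration defines. This is a minimal sketch with hypothetical label values, not a Label Studio utility:

```python
# Labels declared in the labeling configuration (hypothetical values)
config_labels = {"PER", "ORG", "LOC", "MISC"}

# A pre-annotation task with one predicted region
task = {"predictions": [{"result": [
    {"value": {"start": 0, "end": 4, "text": "John", "labels": ["PER"]},
     "from_name": "label", "to_name": "text", "type": "labels"}
]}]}

# Collect any predicted labels that the configuration does not define
unknown = [label
           for prediction in task["predictions"]
           for result in prediction["result"]
           for label in result["value"].get("labels", [])
           if label not in config_labels]

print(unknown)  # an empty list means every predicted label matches the config
```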
Check the IDs and toName values
If you’re performing nested labeling, such as displaying a TextArea tag for specific Label or Choice values, the IDs for those results must match.
For example, if you want to transcribe text alongside a named entity recognition task, you might have the following labeling configuration:
```xml
<View>
  <Labels name="label" toName="text">
    <Label value="PER" background="red"/>
    <Label value="ORG" background="darkorange"/>
    <Label value="LOC" background="orange"/>
    <Label value="MISC" background="green"/>
  </Labels>
  <Text name="text" value="$text"/>
  <TextArea name="entity" toName="text" perRegion="true"/>
</View>
```
If you wanted to add predicted text and suggested transcriptions for this labeling configuration, you might use the following example JSON.
```json
{
  "data": {
    "text": "The world that we live in is a broad expanse of nothingness, said the existential philosopher, before he rode away with his cat on his motorbike. "
  },
  "predictions": [
    {
      "result": [
        {
          "value": {
            "start": 135,
            "end": 144,
            "text": "motorbike",
            "labels": ["ORG"]
          },
          "id": "def",
          "from_name": "label",
          "to_name": "text",
          "type": "labels"
        },
        {
          "value": {
            "start": 135,
            "end": 144,
            "text": ["yay"]
          },
          "id": "def",
          "from_name": "entity",
          "to_name": "text",
          "type": "textarea"
        }
      ]
    }
  ]
}
```
Because the TextArea tag applies to each labeled region, the IDs for the label results and the textarea results must match.
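A quick way to catch mismatched IDs is to compare them directly. This is a minimal sketch over a result list shaped like the example above (not part of the Label Studio API):

```python
# Two results for the same region: a label and its per-region transcription
task_results = [
    {"id": "def", "from_name": "label", "to_name": "text", "type": "labels",
     "value": {"start": 135, "end": 144, "labels": ["ORG"]}},
    {"id": "def", "from_name": "entity", "to_name": "text", "type": "textarea",
     "value": {"start": 135, "end": 144, "text": ["yay"]}},
]

label_ids = {r["id"] for r in task_results if r["type"] == "labels"}
textarea_ids = {r["id"] for r in task_results if r["type"] == "textarea"}

# Every per-region textarea result must share an ID with a label result
assert textarea_ids <= label_ids
```

If the assertion fails, a transcription references a region ID that no labeled region defines, so it cannot be displayed alongside its region.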
Read only and hidden regions
In some situations it's helpful to hide regions or make them read-only: bounding boxes, text spans, audio segments, and so on. To achieve this, add `"readonly": true` or `"hidden": true` to the region dicts (the dicts inside the `annotations.result` list).
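For example, the following sketch marks every region of a hypothetical annotation as read-only and hidden before import:

```python
# A hypothetical annotation with one rectangle region
annotation = {"result": [
    {"id": "abc", "type": "rectanglelabels",
     "original_width": 600, "original_height": 403,
     "value": {"x": 5.33, "y": 23.57, "width": 29.16, "height": 31.26,
               "rectanglelabels": ["Airplane"]}}
]}

# Add the flags to each region dict inside the result list
for region in annotation["result"]:
    region["readonly"] = True
    region["hidden"] = True
```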