Summary
To control which tasks get predictions in Label Studio, you can retrieve predictions manually from the Data Manager, call your ML backend's API directly, run batch predictions over selected tasks, configure automatic predictions in project settings, and verify that your ML backend supports batch processing. These methods help streamline the prediction process and keep task management accurate.
Ways to Retrieve Predictions
1. Manually Retrieve Predictions for Specific Tasks
- Open Data Manager: Navigate to the Data Manager in your project.
- Select Tasks: Select the tasks for which you want to retrieve predictions.
- Open Tasks: Open the selected tasks to trigger prediction retrieval.
2. Use the API to Retrieve Predictions for Specific Tasks
- Make a POST request to the `/predict` endpoint of your ML backend with a payload of the tasks you want predictions for.
{
  "tasks": [
    {
      "data": {
        "text": "some text"
      }
    }
  ]
}
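The request above can be sketched in Python as follows. This is a minimal sketch, assuming the ML backend is reachable at `http://localhost:9090` (adjust the host and port to your deployment); `build_payload` and `request_predictions` are hypothetical helper names used for illustration.

```python
import requests

# Assumed ML backend URL; change host/port to match your deployment.
ML_BACKEND_URL = "http://localhost:9090/predict"

def build_payload(texts):
    """Wrap raw texts in the task format the /predict endpoint expects."""
    return {"tasks": [{"data": {"text": t}} for t in texts]}

def request_predictions(texts, url=ML_BACKEND_URL):
    """POST the tasks to the backend and return its JSON response."""
    payload = build_payload(texts)
    response = requests.post(url, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()
```

For example, `build_payload(["some text"])` produces exactly the payload shown above.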
3. Batch Predictions
- Initiate predictions for a batch of selected tasks using the Data Manager's Batch Predictions action.
4. Check for Batch Processing Support
- Ensure your ML backend supports batch processing of tasks. Check logs for messages like:
ML backend '{self.title}' doesn't support batch processing of tasks, switched to one-by-one task retrieval
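A batch-capable backend simply accepts a list of tasks and returns one prediction per task. The sketch below shows that shape with a toy length-based "model" (an assumption, purely for illustration); a real Label Studio ML backend would implement this inside its model class rather than as a bare function, and `from_name`/`to_name` must match your labeling configuration.

```python
def predict(tasks):
    """Return one prediction dict per task, preserving task order."""
    predictions = []
    for task in tasks:
        text = task["data"].get("text", "")
        # Toy model: classify by text length (illustrative assumption only).
        label = "long" if len(text) > 20 else "short"
        predictions.append({
            "result": [{
                "from_name": "label",  # must match your labeling config
                "to_name": "text",
                "type": "choices",
                "value": {"choices": [label]},
            }],
            "score": 1.0,
        })
    return predictions
```

Because the function handles the whole list in one call, the backend never has to fall back to one-by-one task retrieval.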
Example Code for Batch Predictions
import requests

def retrieve_batch_predictions(project_id, task_ids, api_key):
    """Trigger the Data Manager's retrieve_tasks_predictions action for the given tasks."""
    url = f"http://localhost:8080/api/dm/actions?id=retrieve_tasks_predictions&project={project_id}"
    headers = {
        'Authorization': f'Token {api_key}',
        'Content-Type': 'application/json'
    }
    data = {
        'selectedItems': {
            'all': False,
            'included': task_ids
        }
    }
    response = requests.post(url, headers=headers, json=data)
    return response.json()
# Example usage
project_id = 1
task_ids = [1, 2, 3]
api_key = 'your_api_key'
result = retrieve_batch_predictions(project_id, task_ids, api_key)
print(result)
More resources
- Label Studio documentation on integrating machine learning
- Project settings