Run your models with confidence

From expert sampling to custom quality control workflows, our best practices massively improve productivity per annotator without jeopardizing the quality of datasets for your ML models.
Request Demo

The new benchmark for data labeling

Playment’s quality control practices have helped businesses across the globe execute complex annotations with the highest labeling accuracy.

200+ Global Customers
100+ Use Cases
1B+ Annotations
>97% Avg Precision
>97% Avg Recall

Here’s how our quality process works

Unbiased sampling for quality control by experts.

Random sampling of labeled data for QC

Random or clustered sampling

For datasets containing a large number of short, low-FPS sequences, randomly sampling a fixed percentage of the sequences at their original FPS ensures an unbiased representation.
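For illustration, here is a minimal sketch of how such a random sample could be drawn; the sequence IDs, sampling fraction, and seed are assumptions, not part of our pipeline.

```python
import random

def random_qc_sample(sequence_ids, fraction=0.10, seed=42):
    """Randomly pick a fraction of sequences (at their original FPS) for expert QC review."""
    rng = random.Random(seed)                        # fixed seed keeps the QC sample reproducible
    k = max(1, round(len(sequence_ids) * fraction))
    return rng.sample(sequence_ids, k)

# Hypothetical example: pull 10% of 500 short sequences aside for review
qc_batch = random_qc_sample([f"seq_{i:04d}" for i in range(500)], fraction=0.10)
```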
Systematic sampling of labeled data for QC

Systematic sampling

For datasets containing long, high-FPS sequences, scaling down the FPS by a fixed factor via systematic sampling (keeping every k-th frame) extracts samples that better represent the whole dataset.
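A minimal sketch of systematic frame sampling, assuming a fixed source and target FPS; the numbers are illustrative only.

```python
def systematic_qc_sample(frames, source_fps=30, target_fps=5):
    """Keep every k-th frame so a long, high-FPS sequence is reviewed at a lower effective FPS."""
    step = max(1, source_fps // target_fps)          # 30 FPS -> 5 FPS keeps every 6th frame
    return frames[::step]

# Hypothetical example: a 60-second clip at 30 FPS yields 300 frames for QC
sampled = systematic_qc_sample(list(range(1800)))
```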

Quantified metrics that help you measure quality.

Precision & Recall

Progressive algorithms measure the precision and recall of annotated datasets, helping ML teams objectively evaluate label quality; a short computation sketch follows the figure below.
  • True Positive
    Represents correct annotations that meet all given quality parameters.
  • False Positive
    Represents annotations that do not meet one or more quality parameters.
  • False Negative
    Represents objects that are visible in the data but were not annotated.
True Positive, False Positive, False Negative annotations marked during QC process
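As a rough sketch of the arithmetic behind these metrics (the counts below are invented for illustration):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision and recall of an annotated batch, from counts gathered during QC review."""
    predicted = true_positives + false_positives     # everything the annotator drew
    actual = true_positives + false_negatives        # everything that should have been drawn
    precision = true_positives / predicted if predicted else 0.0
    recall = true_positives / actual if actual else 0.0
    return precision, recall

# Hypothetical counts: 970 correct annotations, 20 failing a quality parameter, 10 missed objects
p, r = precision_recall(970, 20, 10)
print(f"precision={p:.1%}  recall={r:.1%}")          # precision=98.0%  recall=99.0%
```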

Confusion Metrics

Detailed metrics reveal how often an annotator confuses certain classes and attributes while annotating. These error metrics help you develop better annotation guidelines and give annotators targeted feedback to improve label accuracy.
Confusion metrics calculated for a segmentation project in GT Studio Dashboard
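A toy sketch of how class confusions can be tallied from a reviewer's corrections; the labels and pairing logic are assumptions for illustration, not GT Studio internals.

```python
from collections import Counter

def confusion_counts(reviewed_labels, annotator_labels):
    """Count how often each reviewed (correct) class was labeled as a different class."""
    pairs = Counter(zip(reviewed_labels, annotator_labels))
    return {pair: n for pair, n in pairs.items() if pair[0] != pair[1]}

# Hypothetical example: reviewer-corrected labels vs. the annotator's originals
reviewed  = ["car", "car", "truck", "bus", "truck", "car"]
annotated = ["car", "truck", "truck", "bus", "car", "car"]
print(confusion_counts(reviewed, annotated))         # {('car', 'truck'): 1, ('truck', 'car'): 1}
```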

Different annotations require varied quality metrics.

Object Tracking

Class Precision: 97.1%
Attribute Precision: 96.2%
Geometric Precision: 95.3%
Tracking Precision: 99.7%
Precision Score: 94.5%
Recall Score: 95.23%

Segmentation

Class Precision: 96.4%
Geometric Precision: 97.4%
Instance Precision: 96.6%
Precision Score: 94.6%
Recall Score: 97.5%
Quality metrics for semantic segmentation - incorrect boundary, instance, and label
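Geometric quality for segmentation is commonly checked via mask overlap (IoU). The sketch below assumes binary NumPy masks and an illustrative IoU threshold; it shows the idea rather than our exact metric.

```python
import numpy as np

def geometric_ok(annotated_mask, reviewed_mask, iou_threshold=0.9):
    """An annotation passes the geometric check when its IoU with the reviewed mask is high enough."""
    intersection = np.logical_and(annotated_mask, reviewed_mask).sum()
    union = np.logical_or(annotated_mask, reviewed_mask).sum()
    iou = intersection / union if union else 1.0     # two empty masks agree trivially
    return iou >= iou_threshold

def geometric_precision(mask_pairs, iou_threshold=0.9):
    """Share of annotated instances that pass the geometric check."""
    if not mask_pairs:
        return 0.0
    return sum(geometric_ok(a, r, iou_threshold) for a, r in mask_pairs) / len(mask_pairs)
```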

Quality maintenance tools for efficient feedback loops

We’ve developed efficient QC tools and interactive feedback mechanisms to ensure quality standards are consistently reinforced for every annotator performing labeling tasks.

Visual inspection

Each annotation can be visually inspected by multiple editors, and built-in tools like comments, doodles, and instance-marking allow them to flag incorrect annotations immediately during the review process.
Visual inspection of a semantic segmentation image showing non-critical errors

Multi-user verification

Each annotation can also be assigned for inspection multiple times and to multiple editors for air-tight quality maintenance. Our tools are designed to help editors immediately correct mistakes or provide feedback to annotators to improve label accuracy.
Multi-user verification for quality assurance

Feedback reports

Once an editor has corrected the annotation errors, a detailed auto-generated feedback report with key error metrics is sent to the annotator so that gaps can be addressed and future annotations are more accurate.
Feedback report for a polygon project showing user details, man hours, and average making and editing time
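An illustrative schema for such a report; the field names mirror the items mentioned above (user details, man hours, average making and editing time, error metrics) but are assumptions, not the actual report format.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackReport:
    """Auto-generated feedback summary sent to an annotator after review (illustrative schema)."""
    annotator_id: str
    annotations_reviewed: int
    man_hours: float                                  # total labeling time on the reviewed batch
    avg_annotation_time_s: float                      # average time to make an annotation
    avg_editing_time_s: float                         # average time an editor spent correcting one
    error_counts: dict = field(default_factory=dict)  # e.g. {"false_positive": 20, "false_negative": 10}
```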

High-quality data is the foundation of successful AI systems.

Built-in QC tools for building ground truth data

Built-in Quality Control Tools

With specialised QC tools like doodles, comments, and instance-marking, multiple users can run tests on samples to verify whether all predefined annotation requirements have been met.
Custom QC workflows

Custom quality check workflows

Customize your quality check workflows the way you need to achieve your desired label accuracy levels or use our built-in QC workflow model.
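A generic sketch of the kinds of parameters a QC workflow might expose (sampling strategy, review rounds, acceptance thresholds); the keys and values are illustrative assumptions, not GT Studio's configuration format.

```python
# Illustrative only: field names and values are assumptions, not a real configuration schema.
qc_workflow = {
    "sampling": {"strategy": "systematic", "target_fps": 5},    # or "random" with a fraction
    "review_rounds": 2,                                          # each sample inspected by two editors
    "acceptance": {"min_precision": 0.97, "min_recall": 0.97},   # batch passes only above these scores
    "on_failure": "return_to_annotator_with_feedback_report",
}
```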
Project progress and performance tracking

Project progress and performance tracking

Closely track team progress, project timelines, annotator productivity, and other key metrics via real-time analytics.
Auto-generated feedback reports for continuous improvement of outputs

Auto-generated feedback reports

You can also provide constructive feedback to your team based on performance analytics or use our auto-generated reports to improve annotation quality.

We provide the highest quality assurance benchmarks for fully managed services.

Our comprehensive QA process includes:

  • High-threshold training and qualification for annotators.
  • ML assistance for error reduction.
  • Quantitative and qualitative checks.
  • Customer QC and rework arrangements.
Want more information about the QA framework for our service model?
Download Framework