Client overview
Our client is a leading perception software company headquartered in Korea. They are focused on advancing autonomous vehicle (AV) technology and already work with large amounts of transportation data collected from global sources.
Their core challenge was not in collecting data but in ensuring that massive volumes of raw data could be annotated with precision. Accurate annotation is critical for machine learning models that power AV perception systems. Without high-quality object detection data, these models would struggle to perform in real-world driving scenarios.
What the client needed

The client came to us with three specific needs:
- Accuracy above 98%: For autonomous driving, even a minor error in labeling can pose significant safety risks. The client required an error margin of less than 2% (a simple illustration of checking this threshold appears below).
- Scalable resources: Their roadmap involved hundreds of thousands of 3D images. They needed a partner who could scale quickly to meet deadlines.
- Strong QA process: Since the data would feed directly into model training, the client expected a clear quality control workflow to keep consistency across the dataset.
Overall, they needed more than just manpower. They needed a structured annotation process with clear guidelines for quality and delivery.
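
To make the sub-2% error margin concrete, here is a minimal sketch of how a batch-level accuracy gate against a 98% target could be computed. The record format, field names, and threshold handling are illustrative assumptions, not the client's actual tooling.

```python
# Minimal sketch: batch-level accuracy gate against a 98% target.
# The record format and threshold handling are illustrative assumptions,
# not the client's actual pipeline.

ACCURACY_TARGET = 0.98  # i.e. an error margin below 2%

def batch_accuracy(review_results: list[bool]) -> float:
    """Fraction of reviewed annotations judged correct."""
    if not review_results:
        raise ValueError("empty review batch")
    return sum(review_results) / len(review_results)

def passes_accuracy_gate(review_results: list[bool]) -> bool:
    """True if the batch meets the 98% accuracy requirement."""
    return batch_accuracy(review_results) >= ACCURACY_TARGET

if __name__ == "__main__":
    # 990 correct labels out of 1,000 reviewed -> 99% accuracy, gate passes.
    sample = [True] * 990 + [False] * 10
    print(f"accuracy: {batch_accuracy(sample):.2%}, pass: {passes_accuracy_gate(sample)}")
```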
How we did it

1. Team setup
We launched the project with a team of 40 annotators, all trained in LiDAR annotation. Thanks to our internal recruitment system, we were able to assemble and onboard the team quickly, ensuring the project started on schedule.
2. Intensive training
Before full production began, we designed a training program tailored to the client’s dataset. The training focused on:
- Annotating vehicles, pedestrians, cyclists, and road objects in 3D space (a simplified example of such a label is sketched at the end of this step).
- Working with point clouds of different densities in urban and highway settings.
- Handling difficult cases such as overlapping objects and low-visibility scans.
We combined this initial training with regular review sessions. These sessions gave annotators the chance to raise questions, discuss edge cases, and refine rules with input from the client.
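
For readers unfamiliar with LiDAR annotation, the sketch below shows one plausible shape for a single 3D (cuboid) label on a point cloud scan. The field names and class list are hypothetical; the client's actual schema was defined in the project guidelines.

```python
# Minimal sketch of a single 3D cuboid label on a LiDAR point cloud.
# Field names and the class list are hypothetical, for illustration only.
from dataclasses import dataclass

VALID_CLASSES = {"vehicle", "pedestrian", "cyclist", "road_object"}

@dataclass
class Box3D:
    frame_id: str          # which LiDAR scan the label belongs to
    label_class: str       # one of VALID_CLASSES
    center: tuple[float, float, float]      # x, y, z in metres (sensor frame)
    dimensions: tuple[float, float, float]  # length, width, height in metres
    yaw: float             # heading around the vertical axis, in radians
    num_points: int        # LiDAR points inside the box (helps flag sparse, low-visibility cases)

    def __post_init__(self) -> None:
        if self.label_class not in VALID_CLASSES:
            raise ValueError(f"unknown class: {self.label_class}")
        if any(d <= 0 for d in self.dimensions):
            raise ValueError("box dimensions must be positive")

# Example: a parked car annotated 12 m ahead of the sensor.
car = Box3D("scan_000123", "vehicle", (12.0, -1.5, 0.8), (4.5, 1.9, 1.6), yaw=0.1, num_points=842)
```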
3. Quality control
Accuracy was the client’s top requirement, so we set up a multi-step QA process:
- Self-check: Annotators reviewed their own tasks and tracked their performance based on completed work and rework rates.
- Cross-review: Team members reviewed each other’s work to spot recurring errors that might be overlooked during self-checks.
- Vertical review: Experienced project managers checked both the dataset as a whole and individual outputs.
- Final random inspection: For projects requiring near-100% accuracy, a dedicated review team randomly inspected about 30% of the work (a simplified version of this sampling step is sketched below). These checks were compared against the latest client feedback to ensure consistency.
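
As an illustration of the final random-inspection step, the sketch below draws roughly 30% of completed tasks for an independent check. The task identifiers and the sampling-rate handling are assumptions made for this example, not the project's actual tooling.

```python
# Minimal sketch of the final random-inspection step: draw ~30% of completed
# tasks for an independent check. Task IDs and rate handling are illustrative.
import random

def sample_for_final_inspection(task_ids: list[str], rate: float = 0.30,
                                seed: int | None = None) -> list[str]:
    """Return a random subset of tasks (about `rate` of the batch) for final review."""
    if not 0 < rate <= 1:
        raise ValueError("rate must be in (0, 1]")
    if not task_ids:
        return []
    rng = random.Random(seed)
    k = max(1, round(len(task_ids) * rate))
    return rng.sample(task_ids, k)

if __name__ == "__main__":
    batch = [f"task_{i:05d}" for i in range(1000)]
    to_inspect = sample_for_final_inspection(batch, seed=42)
    print(f"{len(to_inspect)} of {len(batch)} tasks selected for final inspection")
```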
4. Scaling up
After delivering the first 200,000 annotated images, the client approved a scale-up. Over the following months, we expanded from 40 to 140 annotators, supported by project managers and QA leads.
This expansion allowed us to process over 1 million annotated images within a year while maintaining the same high standards of quality.

What the results show

Within one year, the project delivered significant outcomes:
- 1 million+ images annotated
- 99% accuracy rate
- Stable, scalable team
- Improved the client’s AV perception performance