Client overview
Our client is a Japan-based company that provides high-quality, large-scale data for AI development. They deliver services to global technology firms, research institutes, and developers working on AI solutions.
Annotation quality is therefore central to their business. They needed a trusted partner who could handle large, complex annotation tasks while meeting their accuracy and security requirements.
What the client needed
The client came to us with three main needs:
High-quality segmentation annotation:
The dataset involved 2D street images of Japanese roads. Each image required pixel-level annotations across dozens of object classes, ranging from vehicles and pedestrians to traffic signs and sky.
Scalable workforce:
The project required annotating 30,000+ images in just five months. That meant building and managing a team that could work at scale without compromising quality.
Data security:
Since the client’s data was proprietary, the annotation had to be done entirely within their own environment. This required our team to adapt to the client’s tools and workflow, rather than using our own.
How we did it
1. Team setup
We set up a dedicated group of 30+ annotators with a strong background in image labeling. Because segmentation annotation requires pixel-level precision, recruitment emphasized accuracy, consistency, and visual attention to detail. A project manager supervised the team, coordinated directly with the client, and kept progress on track.
2. Training
Annotators were trained using a curriculum developed jointly with the client. The training program covered:
- Annotation standards for over 40 classes.
- Practice sessions with review and feedback.
- Special scenarios such as overlapping objects or blurred boundaries.
3. Execution
The project focused on segmentation annotation of complex Japanese street scenes. Each image required precise pixel-level labeling across categories such as the following (see the brief sketch after the list for how such labels can be encoded):
- Moving objects: Vehicles, pedestrians, bicycles, motorcycles, animals.
- Static objects: Traffic signs, lights, barriers, poles, buildings.
- Road elements: Lanes, boundaries, surfaces, arrows, stop lines, crosswalks, speed bumps.
- Environment: Trees, vegetation, sidewalks, sky.
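To make the labeling concrete, here is a minimal sketch of one common way pixel-level annotations of this kind are stored: each image gets a mask in which every pixel holds a class ID. The class names and IDs below are hypothetical examples for illustration only, not the client's actual taxonomy, and the snippet assumes NumPy is available.

```python
import numpy as np

# Hypothetical excerpt of a class map; the real project covered 40+ classes
# grouped into moving objects, static objects, road elements, and environment.
CLASS_IDS = {
    "background": 0,
    "vehicle": 1,        # moving object
    "pedestrian": 2,     # moving object
    "traffic_sign": 3,   # static object
    "pole": 4,           # static object
    "lane_marking": 5,   # road element
    "crosswalk": 6,      # road element
    "vegetation": 7,     # environment
    "sky": 8,            # environment
}

def class_distribution(mask: np.ndarray) -> dict:
    """Count how many pixels of a segmentation mask fall into each class."""
    names = {class_id: name for name, class_id in CLASS_IDS.items()}
    ids, counts = np.unique(mask, return_counts=True)
    return {names.get(int(i), f"unknown_{int(i)}"): int(c) for i, c in zip(ids, counts)}

# Toy 4x4 mask standing in for a full-resolution street image.
toy_mask = np.array([
    [8, 8, 8, 8],
    [7, 3, 3, 7],
    [5, 1, 1, 5],
    [6, 6, 6, 6],
], dtype=np.uint8)
print(class_distribution(toy_mask))
```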
To keep quality stable while scaling, we built a layered review process:
- Annotators first validated their own work.
- Peers then checked each other’s annotations to catch recurring mistakes.
- Project leads reviewed samples to ensure alignment with client guidelines.
- Finally, selected batches were re-checked against the most recent client feedback.
This cycle allowed us to detect issues early, refine rules continuously, and keep accuracy consistently above 98%.
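As an illustration of how the accuracy bar can be enforced in a review loop like this, the sketch below compares an annotator's mask against a reviewer-corrected version pixel by pixel and flags images that fall under the 98% threshold. This is an assumed, simplified check written for this write-up; the actual reviews ran inside the client's own tooling.

```python
import numpy as np

ACCURACY_THRESHOLD = 0.98  # project target: accuracy consistently above 98%

def pixel_agreement(annotated: np.ndarray, reviewed: np.ndarray) -> float:
    """Share of pixels where the annotator's mask matches the reviewer-corrected mask."""
    if annotated.shape != reviewed.shape:
        raise ValueError("masks must have the same resolution")
    return float(np.mean(annotated == reviewed))

def flag_for_rework(batch: dict) -> list:
    """Return the IDs of images whose agreement falls below the threshold.

    `batch` maps an image ID to an (annotated_mask, reviewed_mask) pair.
    """
    return [
        image_id
        for image_id, (annotated, reviewed) in batch.items()
        if pixel_agreement(annotated, reviewed) < ACCURACY_THRESHOLD
    ]
```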
4. Delivery
Instead of waiting until the end, we delivered annotated datasets in rolling batches. Each batch was accompanied by a short progress note covering accuracy levels, turnaround time, and any adjustments made in response to client feedback. This approach allowed our client to incorporate the segmentation data into their AI pipeline without delay.
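Purely as an illustration of what such a progress note might contain, the sketch below models it as a small structured record. The field names and example values are assumptions made for this write-up, not the client's actual reporting format.

```python
from dataclasses import dataclass, field

@dataclass
class BatchProgressNote:
    """Summary shared alongside each rolling batch of annotated images."""
    batch_id: str
    images_delivered: int
    accuracy: float                 # share of pixels confirmed correct in review
    turnaround_days: int            # working days from batch start to delivery
    feedback_adjustments: list = field(default_factory=list)

# Hypothetical example of a note for one batch.
example_note = BatchProgressNote(
    batch_id="batch-07",
    images_delivered=2_500,
    accuracy=0.985,
    turnaround_days=12,
    feedback_adjustments=["Tightened boundary rules for overlapping vehicles"],
)
```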
The results
Over the five-month project, we delivered:
- 30,000+ annotated images, delivered on schedule.
- 98%+ accuracy, maintained through the layered review process.
- Secure operations, with all work completed inside the client’s own environment.
- Improved AI model performance, with better scene understanding and object differentiation.
The successful delivery strengthened the client’s dataset offerings and supported their goal of providing high-quality, large-scale data for AI development.