Client overview
The client is a Singapore-based manufacturer specializing in the sorting, processing, and recycling of electronic waste. Their operations focus on handling everything from microchips to power sources, with the goal of reducing waste and improving recovery of reusable materials.
They invested in an AI-based classification system. The model was designed to automatically identify, sort, and categorize electronic components from incoming waste streams. However, before it could be deployed at scale, the system needed a large volume of accurately annotated data.
What the client needs

The client approached us with a clear set of objectives:
– Classify and label common types of electronic waste such as microchips, batteries, and power sources.
– Provide bounding box annotation on 2D images, making sure each object was correctly identified and localized.
– Build a dataset large enough to train a classification model that could be used in real-world recycling facilities.
The challenge was not only the sheer volume of data, but also the need for precision. Unlike natural images, electronic waste contains small, complex parts that often overlap or appear in cluttered backgrounds. A missed object could affect the model’s ability to classify correctly.
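In practice, each labeled object in a dataset like this boils down to a class name plus pixel coordinates on the image. The snippet below sketches what a single record could look like in a COCO-style layout; the category names, IDs, and coordinates are illustrative assumptions, not the client's actual schema.

```python
# Illustrative only: one image entry and one bounding box annotation in a
# COCO-style layout. IDs, file names, and coordinates are made-up examples.
image = {"id": 1042, "file_name": "waste_stream_1042.jpg", "width": 1920, "height": 1080}

annotation = {
    "id": 58713,
    "image_id": 1042,                    # which image the box belongs to
    "category_id": 3,                    # see the category list below
    "bbox": [412.0, 188.0, 96.0, 64.0],  # [x, y, width, height] in pixels
    "iscrowd": 0,
}

categories = [
    {"id": 1, "name": "microchip"},
    {"id": 2, "name": "battery"},
    {"id": 3, "name": "power_source"},
]
```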
How we did it

We carried out the following steps:
1. Requirements
Our team met with the client to confirm expectations and define annotation rules. Since bounding box quality directly impacts training results, we focused on:
- Minimum and maximum box size (see the sketch after this list).
- Handling overlapping components (e.g., stacked microchips).
- Differentiating between classes such as “incomplete parts” vs. “fully intact components.”
- Balancing annotation speed against precision, since the project timeline was tight (one month).
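To give a flavor of how rules like these can be enforced automatically alongside manual review, the sketch below flags boxes that fall outside an allowed size range or that overlap heavily (for example, stacked microchips that need a second look). The pixel thresholds and IoU cutoff are placeholder assumptions, not the values actually agreed with the client.

```python
# Minimal sketch of an automated guideline check; thresholds are placeholders.

def box_size_ok(box, min_side=8, max_side=2000):
    """box = (x, y, width, height) in pixels."""
    _, _, w, h = box
    return min(w, h) >= min_side and max(w, h) <= max_side

def iou(a, b):
    """Intersection over union of two (x, y, width, height) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def flag_issues(boxes, overlap_threshold=0.7):
    """Return indices of boxes that violate size rules or overlap heavily."""
    flagged = {i for i, box in enumerate(boxes) if not box_size_ok(box)}
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if iou(boxes[i], boxes[j]) > overlap_threshold:
                flagged.update((i, j))
    return sorted(flagged)

boxes = [(100, 100, 40, 30), (105, 102, 38, 28), (500, 400, 3, 3)]
print(flag_issues(boxes))  # -> [0, 1, 2]: first two overlap heavily, third is too small
```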
2. Pilot
We conducted a two-week pilot with a small sub-team. During this time, the team labeled about 10,000 objects, which were reviewed by both our QA staff and the client’s engineers.
The pilot gave us early insights:
- Microchips were the hardest to annotate due to their small size.
- Bounding boxes around power sources had to be carefully adjusted, since reflections in the images sometimes confused annotators.
- Clear instructions were needed for images with multiple objects in the same frame.
By the end of the pilot, the workflow was refined, and the client signed off on the guidelines.
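As an illustration of how a dual review pass like this can be summarized, the snippet below tallies acceptance rates per class so that the hardest categories (microchips, in this pilot) stand out. The review records and numbers are made-up examples, not the pilot's actual figures.

```python
from collections import defaultdict

# Made-up review records for illustration: each labeled object is marked as
# accepted or rejected by our QA staff and by the client's engineers.
reviews = [
    {"category": "microchip",    "qa_ok": True,  "client_ok": False},
    {"category": "microchip",    "qa_ok": True,  "client_ok": True},
    {"category": "battery",      "qa_ok": True,  "client_ok": True},
    {"category": "power_source", "qa_ok": False, "client_ok": True},
    {"category": "power_source", "qa_ok": True,  "client_ok": True},
]

totals, accepted = defaultdict(int), defaultdict(int)
for r in reviews:
    totals[r["category"]] += 1
    accepted[r["category"]] += int(r["qa_ok"] and r["client_ok"])

for category, count in totals.items():
    print(f"{category}: {accepted[category] / count:.0%} accepted in review")
```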
3. Team Setup
After validation, we scaled the team to 30 members:
- 2 Project Managers – overseeing communication, reporting, and delivery.
- 3 QA Leads – performing random checks and re-annotation where needed.
- 25 Annotators – focusing on bounding box annotation.
4. Execution
Once the team was fully set up, implementation ramped up quickly. Each annotator handled 3,000–4,000 objects per day, depending on complexity.
- Difficult images with cluttered components were escalated to experienced annotators.
- Common mistakes (e.g., bounding boxes too tight or too loose) were corrected and shared with the team during end-of-day reviews.
- PMs monitored pace to keep the project on track for the 1-month deadline.
This structured approach allowed us to annotate over 1 million objects within the one-month deadline.
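A quick capacity check shows why that pace covered the target; the working-day count below is our assumption for a one-month window, not the project's exact calendar.

```python
# Back-of-the-envelope throughput check (working-day count is an assumption).
annotators = 25
objects_per_day = (3_000, 4_000)   # per annotator, depending on image complexity
working_days = 20                  # roughly one month of working days

low = annotators * objects_per_day[0] * working_days    # 1,500,000
high = annotators * objects_per_day[1] * working_days   # 2,000,000
print(f"theoretical capacity: {low:,} to {high:,} objects")
```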
5. Delivery
We delivered outputs to the client in batches so they could start training their AI model before the entire dataset was complete. Each package included the following (a sample batch manifest is sketched after this list):
- Annotated files.
- A QA report with accuracy statistics.
- Notes on edge cases and how they were handled.
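To make the handover concrete, here is a sketch of what a manifest for one delivered batch might contain. The file names, counts, and statistics are hypothetical; the real packages followed the structure agreed with the client.

```python
# Hypothetical manifest for one delivered batch; all values are illustrative.
batch_manifest = {
    "batch_id": "ewaste-batch-07",
    "annotation_file": "annotations/ewaste-batch-07.json",  # COCO-style labels
    "image_count": 42_000,
    "object_count": 131_250,
    "qa": {
        "sampled_objects": 6_500,   # random sample re-checked by QA leads
        "accuracy": 0.981,          # share of sampled labels accepted as-is
    },
    "edge_case_notes": "notes/ewaste-batch-07-edge-cases.md",
}
```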
The results

By the end of the project, we delivered:
– Over 1 million individual objects labeled (microchips, power sources, and other e-waste categories).
– An annotation accuracy rate of up to 98%.
– An enhanced AI system that identifies e-waste with high precision, reducing manual sorting effort.