Workspace & Object Taxonomy
Target objects cataloged with 3D reference models (CAD or scanned); workspace zones defined (pick area, place area, obstacle region, human collaboration zone). Grasp affordance categories established.
Perception, manipulation, and navigation annotation for robotics — from warehouse automation to surgical robots and agricultural drones.
RGB, depth, and LiDAR data aligned using extrinsic calibration. Point clouds registered to camera coordinate frames for consistent 3D annotation across modalities.
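To make the alignment step concrete, here is a minimal sketch of applying an extrinsic calibration to bring LiDAR points into a camera coordinate frame. The 4x4 transform values below are hypothetical, chosen purely for illustration; real extrinsics come from the calibration procedure.

```python
# Hypothetical extrinsic calibration: a 4x4 homogeneous transform mapping
# points from the LiDAR frame into the camera frame. The rotation here is
# a 90-degree yaw with a small translation, for illustration only.
T_CAM_FROM_LIDAR = [
    [0.0, -1.0, 0.0,  0.10],
    [1.0,  0.0, 0.0,  0.00],
    [0.0,  0.0, 1.0, -0.05],
    [0.0,  0.0, 0.0,  1.00],
]

def transform_points(T, points):
    """Apply a 4x4 homogeneous transform to a list of (x, y, z) points."""
    out = []
    for x, y, z in points:
        p = (x, y, z, 1.0)
        out.append(tuple(
            sum(T[r][c] * p[c] for c in range(4)) for r in range(3)
        ))
    return out

# A LiDAR point one metre ahead of the sensor, expressed in camera coordinates.
print(transform_points(T_CAM_FROM_LIDAR, [(1.0, 0.0, 0.0)]))
```

Once every modality is expressed in one frame like this, a single 3D label projects consistently into the RGB image, the depth map, and the point cloud.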
Per-object 6-DOF pose (translation + rotation) annotated using 3D bounding cuboids aligned to CAD models. Grasp points labeled with approach vectors and finger placement for manipulation tasks.
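A 6-DOF pose annotation reduces to a translation plus a rotation (stored here as a unit quaternion), from which the oriented cuboid's corners follow. The sketch below shows that geometry under those assumptions; the sizes and poses are toy values, not a production schema.

```python
def quat_to_rot(qx, qy, qz, qw):
    """Unit quaternion (x, y, z, w) -> 3x3 rotation matrix (row-major)."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def cuboid_corners(center, size, quat):
    """8 corners of an oriented 3D bounding cuboid for a 6-DOF pose."""
    R = quat_to_rot(*quat)
    hx, hy, hz = (s / 2 for s in size)
    corners = []
    for sx in (-hx, hx):
        for sy in (-hy, hy):
            for sz in (-hz, hz):
                corners.append(tuple(
                    center[i] + R[i][0]*sx + R[i][1]*sy + R[i][2]*sz
                    for i in range(3)
                ))
    return corners

# Identity rotation, 0.2 m cube centred at the origin.
print(cuboid_corners((0, 0, 0), (0.2, 0.2, 0.2), (0, 0, 0, 1)))
```

Aligning this cuboid to a CAD model pins down the rotation far more tightly than fitting a box to pixels alone, which is what makes the pose usable for grasp planning.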
Semantic scene segmentation (floor, obstacle, shelf, conveyor) for navigation. Waypoint sequences annotated with traversability scores and clearance measurements.
Synthetic data annotations validated against real-world counterparts; domain gap metrics computed (feature distribution shift, appearance variance) to guide sim-to-real transfer learning.
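One simple per-feature distribution-shift score is the squared 2-Wasserstein distance between Gaussian fits of a feature in the synthetic and real sets. This is a minimal sketch of that idea; the brightness values are toy data, and production metrics would operate on learned feature embeddings rather than a single scalar.

```python
import math

def gaussian_w2(xs, ys):
    """Squared 2-Wasserstein distance between Gaussian fits of two samples:
    (mu_x - mu_y)^2 + (sigma_x - sigma_y)^2.
    A simple per-feature distribution-shift score for sim-to-real comparison."""
    def fit(v):
        m = sum(v) / len(v)
        s = math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
        return m, s
    mx, sx = fit(xs)
    my, sy = fit(ys)
    return (mx - my) ** 2 + (sx - sy) ** 2

sim_brightness  = [0.50, 0.52, 0.48, 0.51]   # synthetic renders (toy values)
real_brightness = [0.62, 0.60, 0.65, 0.61]   # real captures (toy values)
print(round(gaussian_w2(sim_brightness, real_brightness), 4))
```

A score near zero suggests the synthetic feature distribution matches reality; a large score flags a domain gap worth closing before transfer.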
Human collaboration zone annotations verified for accuracy; safety-critical labels (human presence, collision risk) receive 100% review. Delivered in ROS-compatible formats (tf, PoseStamped, PointCloud2).
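For a sense of what "ROS-compatible" means in practice, the sketch below packs a pose annotation into a dict mirroring the field layout of the real `geometry_msgs/PoseStamped` message (header with stamp and `frame_id`; pose with position and quaternion orientation), so it can be serialized and inspected without a ROS runtime. The frame name and values are hypothetical.

```python
def pose_stamped_dict(frame_id, stamp_sec, xyz, quat_xyzw):
    """Pack a 6-DOF annotation into a dict mirroring the field layout of
    ROS geometry_msgs/PoseStamped. Field names follow the message
    definition; timestamps here are plain seconds for simplicity."""
    x, y, z = xyz
    qx, qy, qz, qw = quat_xyzw
    return {
        "header": {"stamp": stamp_sec, "frame_id": frame_id},
        "pose": {
            "position": {"x": x, "y": y, "z": z},
            "orientation": {"x": qx, "y": qy, "z": qz, "w": qw},
        },
    }

msg = pose_stamped_dict("camera_link", 1700000000.0,
                        (0.42, -0.10, 0.75), (0.0, 0.0, 0.0, 1.0))
print(msg["header"]["frame_id"])
```

Because the structure matches the message definition, a downstream consumer can load these records straight into real `PoseStamped` messages for replay in a ROS pipeline.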
Generic annotation vendors can label data. Domain experts label it correctly. Here's why the difference matters in your industry.
Annotating a 6-DOF pose means understanding how a 3D object sits in space — not just drawing a box. Our annotators work with CAD model overlays and multi-view validation to achieve sub-degree rotational accuracy critical for robotic grasping.
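Sub-degree accuracy claims can be checked with the standard geodesic angle between two rotation matrices. This is a minimal sketch of that check, assuming an annotated rotation is compared against a CAD-aligned reference; the 0.5-degree offset below is illustrative.

```python
import math

def rotation_error_deg(Ra, Rb):
    """Geodesic angle in degrees between two 3x3 rotation matrices:
    theta = arccos((trace(Ra^T Rb) - 1) / 2)."""
    # trace(Ra^T Rb) equals the sum of element-wise products.
    t = sum(Ra[i][j] * Rb[i][j] for i in range(3) for j in range(3))
    c = max(-1.0, min(1.0, (t - 1.0) / 2.0))  # clamp against float noise
    return math.degrees(math.acos(c))

def rot_z(deg):
    """Rotation about the z axis by the given angle in degrees."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# An annotation off by half a degree about z passes a sub-degree tolerance.
err = rotation_error_deg(rot_z(30.0), rot_z(30.5))
print(err < 1.0)
```

Running this check per object against the CAD-aligned reference is one way multi-view validation catches rotational drift before it reaches a grasp planner.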
A crumpled bag, a folded cloth, or a flexible package doesn't fit in a rigid bounding box. Our mesh annotation capability handles deformable objects with surface point annotations and contact region labeling for manipulation planning.
When robots work alongside humans, a missed person detection can cause physical harm. All human collaboration zone annotations and human presence labels receive 100% expert review — no statistical sampling on safety-critical classes.
See how our domain-specific capabilities compare to generic annotation services.
| Capability | UTL Data Engine | Typical Vendor |
|---|---|---|
| 6-DOF pose annotation with CAD alignment | Sub-degree precision | 3-DOF bounding box |
| Grasp point + approach vector annotation | Manipulation-ready | Not available |
| RGB-D + LiDAR fusion annotation | Extrinsically calibrated | Not available |
| Sim-to-real domain gap validation | Distribution metrics | Not available |
| ROS-compatible output formats | tf, PoseStamped, PointCloud2 | Custom only |
| Deformable object handling | Mesh annotation | Rigid only |
"UTL's 3D annotation quality for our pick-and-place system was exceptional. Their team handled complex multi-object scenes with deformable packaging — a task most vendors struggle with."
Let's discuss your specific data challenges and build a tailored annotation pipeline.