RoboX Egocentric Collection

The RoboX Egocentric Collection is a crowdsourced dataset of egocentric (first-person) video of human interactions, built for robotics imitation learning. Clips are filmed on smartphones and span four campaigns: grasping, daily activities, scene captures, and navigation.

What's Included Here

This repository contains 7,342 annotated clips across four RoboX campaigns. Each campaign is published as a self-contained folder with its own video clips, metadata, annotations, and reports.

Need more episodes, custom collections, or campaign-specific subsets? Request access via robox.to or contact the RoboX team directly.

Repository Structure

The collection is organized by campaign. Each campaign folder is a complete, self-contained dataset.

  • README.md
  • LICENSE
  • ego_grasp/
  • ego_daily/
  • ego_scene/
  • ego_nav/

Each campaign folder contains:

  • README.md: Campaign-specific dataset card
  • clips/: Video files (MP4, H.265)
  • recordings/: Raw recording sessions
  • metadata/: Per-clip metadata (device, duration, quality, contributor)
  • annotations/: Hand keypoints, object tracks, action segments, sensor data
  • notebooks/: Example notebooks for loading and visualizing the data
  • showcase/: Sample previews
  • manifest.json: Campaign manifest
  • stats.json: Per-campaign statistics
  • taxonomy.json: Campaign-specific labels and action phases
  • SANITIZATION_REPORT.json: On-device privacy processing report
  • VALIDATION_REPORT.json: Quality assurance report
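
The per-campaign JSON files can be read with the standard library alone. A minimal loading sketch — the `load_campaign` helper and the exact metadata keys (`device`, `duration_s`, `quality`) are illustrative assumptions; the authoritative schema is each campaign's own manifest.json and README.md:

```python
import json
import tempfile
from pathlib import Path

def load_campaign(campaign_dir):
    """Load the manifest and per-clip metadata for one campaign folder."""
    campaign_dir = Path(campaign_dir)
    manifest = json.loads((campaign_dir / "manifest.json").read_text())
    # One metadata JSON per clip, keyed by clip id (file stem).
    metadata = {
        p.stem: json.loads(p.read_text())
        for p in (campaign_dir / "metadata").glob("*.json")
    }
    return manifest, metadata

# Demo against a synthetic campaign folder (field names are assumptions):
root = Path(tempfile.mkdtemp()) / "ego_grasp"
(root / "metadata").mkdir(parents=True)
(root / "manifest.json").write_text(json.dumps({"campaign": "EgoGrasp", "clips": 1}))
(root / "metadata" / "clip_0001.json").write_text(
    json.dumps({"device": "smartphone", "duration_s": 4.2, "quality": "ok"}))
manifest, metadata = load_campaign(root)
print(manifest["campaign"], len(metadata))
```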

Campaigns

| Campaign  | Folder     | Clips | Description                                                              |
|-----------|------------|-------|--------------------------------------------------------------------------|
| EgoGrasp  | ego_grasp/ | 3,711 | Single grasp actions on everyday objects, 1,036 unique object categories |
| EgoDaily  | ego_daily/ | 2,103 | Routine household and workplace activities                               |
| EgoScene  | ego_scene/ | 935   | Scene-level environmental captures                                       |
| EgoNav    | ego_nav/   | 593   | First-person navigation through indoor and outdoor spaces                |
| **Total** |            | **7,342** |                                                                      |

Collection Method

Videos are collected through the RoboX mobile app by distributed contributors following structured task prompts. Each campaign has its own prompt set and quality criteria. Quality filtering and review are applied before clips enter the annotation pipeline.

The app captures video with rich per-frame metadata including camera pose (6-DoF), IMU data (200 Hz), hand keypoints (21 joints), body pose, object detection, scene planes, optical flow, audio levels, navigation data, and quality metrics. On-device processing applies face detection and blurring before the video leaves the device.
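
Because the IMU (200 Hz) runs much faster than typical smartphone video (around 30 fps), per-frame use of the sensor streams requires aligning timestamps. A minimal nearest-neighbor alignment sketch — the timestamp layout and frame rate here are synthetic; consult each campaign's annotation files for the actual format:

```python
import bisect

def nearest_imu_sample(imu_timestamps, frame_t):
    """Return the index of the IMU sample closest in time to a video frame.

    At ~200 Hz IMU vs ~30 fps video, each frame maps to roughly 6-7 IMU
    samples; this picks the single nearest one by timestamp.
    """
    i = bisect.bisect_left(imu_timestamps, frame_t)
    if i == 0:
        return 0
    if i == len(imu_timestamps):
        return len(imu_timestamps) - 1
    before, after = imu_timestamps[i - 1], imu_timestamps[i]
    return i if after - frame_t < frame_t - before else i - 1

# Synthetic timestamps: 2 s of 200 Hz IMU data and 30 fps video frames.
imu_ts = [k / 200.0 for k in range(400)]
frame_ts = [k / 30.0 for k in range(60)]
idx = nearest_imu_sample(imu_ts, frame_ts[30])  # frame at t = 1.0 s
print(idx)  # 200
```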

Annotation Pipeline

Each clip is processed through a layered annotation pipeline:

  1. Hand keypoints: 2D joint positions for both hands across all frames
  2. Object detection and tracking: Bounding boxes with per-frame object identity tracking
  3. Action segmentation: Temporal labels for campaign-specific phases (reach, grasp, lift, hold, place, release for EgoGrasp; task segments for EgoDaily; scan and focus segments for EgoScene; walk, turn, traverse for EgoNav)
  4. Spatial context: Scene-level labels describing surface type, environment, and camera viewpoint
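
The action-segment layer above lends itself to simple temporal analysis. A sketch that sums time spent per phase in one clip, assuming a hypothetical segment record of the form {"phase", "start_s", "end_s"} — the real schema is defined in each campaign's taxonomy.json:

```python
def phase_durations(segments):
    """Sum per-phase durations (seconds) over a clip's action segments."""
    totals = {}
    for seg in segments:
        totals[seg["phase"]] = totals.get(seg["phase"], 0.0) + (
            seg["end_s"] - seg["start_s"])
    return totals

# Example EgoGrasp-style segments (values are illustrative):
segments = [
    {"phase": "reach", "start_s": 0.0, "end_s": 0.8},
    {"phase": "grasp", "start_s": 0.8, "end_s": 1.3},
    {"phase": "lift",  "start_s": 1.3, "end_s": 2.0},
]
print(phase_durations(segments))
```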

Annotation coverage varies by campaign. EgoGrasp and EgoDaily include the full annotation stack. EgoScene emphasizes spatial context and object detection. EgoNav emphasizes camera pose, IMU, and scene segmentation.

For full schema details, see the taxonomy.json and README.md inside each campaign folder.

Use Cases

The RoboX Egocentric Collection is designed for researchers working on imitation learning, manipulation, navigation, and scene understanding. The egocentric viewpoint and real-world diversity make it well suited for sim-to-real transfer and learning from unstructured environments.

Specific applications include:

  • Robotic manipulation and grasping policy training via imitation learning
  • Long-horizon activity recognition and temporal action segmentation
  • Visual navigation and SLAM for mobile robots
  • Scene understanding and 3D spatial reasoning
  • Hand-object interaction modeling
  • Object recognition in egocentric settings
  • Benchmarking across multi-task egocentric settings

Request Additional Episodes

Need more episodes, larger collections, or custom data tailored to your use case? RoboX runs ongoing collection campaigns and can deliver:

  • Larger volumes of any of the four campaigns
  • New custom campaigns built around your specific tasks, environments, or object categories
  • Higher-resolution video, raw sensor streams, or extended annotation layers

Visit robox.to to submit a request or contact the RoboX team directly.

License

CC-BY-NC-4.0: Free for research and non-commercial use.

Citation

If you use the RoboX Egocentric Collection in your research, please cite:

@dataset{robox_ego_collection_2026,
  title={RoboX-Egocentric-Collection-v0.2},
  author={RoboX Team},
  year={2026},
  campaigns={EgoGrasp, EgoDaily, EgoScene, EgoNav}
}