
# Dynamic Intelligence - Egocentric Human Motion Annotation Dataset

An RGB-D hand-manipulation dataset captured with the iPhone 13 TrueDepth sensor for humanoid robot training. It includes 6-DoF hand pose trajectories, synchronized RGB video, and semantic motion annotations.


## 📊 Dataset Overview

| Metric | Value |
|--------|-------|
| Episodes | 97 |
| Total Frames | ~28,000 |
| FPS | 30 |
| Tasks | 10 manipulation tasks |
| Total Duration | ~15.5 minutes |
| Avg Episode Length | ~9.6 seconds |

### Task Distribution

| Task ID | Description | Episodes |
|---------|-------------|----------|
| Task 1 | Fold the white t-shirt on the bed | 8 |
| Task 2 | Fold the jeans on the bed | 10 |
| Task 3 | Fold two underwear and stack them | 10 |
| Task 4 | Put the pillow on the right place | 10 |
| Task 5 | Pick up plate and glass, put on stove | 10 |
| Task 6 | Go out the door and close it | 9 |
| Task 7 | Pick up sandals, put next to scale | 10 |
| Task 8 | Put cloth in basket, close drawer | 10 |
| Task 9 | Screw the cap on your bottle | 10 |
| Task 10 | Pick up two objects, put on bed | 10 |
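
As a quick sanity check, here is a minimal sketch that reproduces these per-task counts from the raw parquet files. It assumes the repository has been downloaded locally (e.g. with `huggingface_hub.snapshot_download`) and that the files sit under `data/chunk-000/` as in the repository structure shown below:

```python
# Count episodes per task by reading each episode's language_instruction.
# Assumes the dataset is available locally under data/chunk-000/.
from collections import Counter
from pathlib import Path

import pandas as pd

counts = Counter()
for parquet_path in sorted(Path("data/chunk-000").glob("episode_*.parquet")):
    df = pd.read_parquet(parquet_path, columns=["language_instruction"])
    counts[df["language_instruction"].iloc[0]] += 1   # one instruction per episode

for task, n in counts.most_common():
    print(f"{n:3d}  {task}")
```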

πŸ“ Repository Structure

humanoid-robots-training-dataset/
β”‚
β”œβ”€β”€ data/
β”‚   └── chunk-000/                    # Parquet files (97 episodes)
β”‚       β”œβ”€β”€ episode_000000.parquet
β”‚       β”œβ”€β”€ episode_000001.parquet
β”‚       └── ...
β”‚
β”œβ”€β”€ videos/
β”‚   └── chunk-000/rgb/                # MP4 videos (synchronized)
β”‚       β”œβ”€β”€ episode_000000.mp4
β”‚       └── ...
β”‚
β”œβ”€β”€ meta/                             # Metadata & Annotations
β”‚   β”œβ”€β”€ info.json                     # Dataset configuration (LeRobot format)
β”‚   β”œβ”€β”€ stats.json                    # Feature min/max/mean/std statistics
β”‚   β”œβ”€β”€ events.json                   # Disturbance & recovery annotations
---

## 🎯 Data Schema

### Parquet Columns (per frame)

| Column | Type | Description |
|--------|------|-------------|
| `episode_index` | int64 | Episode number (0-96) |
| `frame_index` | int64 | Frame within episode |
| `timestamp` | float64 | Time in seconds |
| `language_instruction` | string | Task description |
| `observation.state` | float[252] | 21 hand joints × 2 hands × 6 DoF |
| `action` | float[252] | Same as state (for imitation learning) |
| `observation.images.rgb` | struct | Video path + timestamp |
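
A minimal sketch of inspecting one episode's frames with pandas (assuming the parquet files have been downloaded locally and `pyarrow` is installed as the parquet engine):

```python
import numpy as np
import pandas as pd

# Load one episode and look at the per-frame columns described above
df = pd.read_parquet("data/chunk-000/episode_000000.parquet")
print(df.dtypes)                 # column names and types
print(len(df), "frames")

# observation.state is stored per frame as a 252-dim vector
state = np.stack(df["observation.state"].to_numpy())
print("state shape:", state.shape)               # (num_frames, 252)
print("instruction:", df["language_instruction"].iloc[0])
```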

### 6-DoF Hand Pose Format

Each joint has 6 values: `[x_cm, y_cm, z_cm, yaw_deg, pitch_deg, roll_deg]`

**Coordinate System:**
- Origin: Camera (iPhone TrueDepth)
- X: Right (positive)
- Y: Down (positive)  
- Z: Forward (positive, into scene)
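
Below is a sketch of unpacking the flat 252-dim state vector into per-joint poses. The (hand, joint, DoF) ordering, the left-hand-first convention, and "joint 0 = wrist" are assumptions, not documented facts; confirm the exact feature order against `meta/info.json` / `meta/stats.json` before relying on specific indices:

```python
import numpy as np
import pandas as pd

df = pd.read_parquet("data/chunk-000/episode_000000.parquet")
state = np.asarray(df["observation.state"].iloc[0])   # (252,) for the first frame

# Assumed layout: 2 hands x 21 joints x 6 DoF, left hand first, joint 0 = wrist.
# Last axis: (x_cm, y_cm, z_cm, yaw_deg, pitch_deg, roll_deg)
poses = state.reshape(2, 21, 6)

x_cm, y_cm, z_cm, yaw, pitch, roll = poses[0, 0]
print(f"assumed left wrist: {x_cm:.1f} cm right, {y_cm:.1f} cm down, {z_cm:.1f} cm forward")
```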

---

## 🏷️ Motion Semantics Annotations

**File:** `meta/annotations_motion_v1_frames.json`

Coarse temporal segmentation with motion intent, phase, and error labels.

### Annotation Schema

```json
{
  "episode_id": "Task1_Vid2",
  "segments": [
    {
      "start_frame": 54,
      "end_frame_exclusive": 140,
      "motion_type": "grasp",           // What action is being performed
      "temporal_phase": "start",        // start | contact | manipulate | end
      "actor": "both_hands",            // left_hand | right_hand | both_hands
      "target": {
        "type": "cloth_region",         // cloth_region | object | surface
        "value": "bottom_edge"          // Specific target identifier
      },
      "state": {
        "stage": "unfolded",            // Task-specific state
        "flatness": "wrinkled",         // For folding tasks only
        "symmetry": "asymmetric"        // For folding tasks only
      },
      "error": "none"                   // misalignment | slip | drop | none
    }
  ]
}
```

### Motion Types

`grasp` | `pull` | `align` | `fold` | `smooth` | `insert` | `rotate` | `open` | `close` | `press` | `hold` | `release` | `place`
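
For training, this vocabulary can be mapped to integer class ids, e.g. for a frame-level motion classifier. A small sketch follows; the index order is arbitrary and only needs to be kept consistent:

```python
# Motion-type vocabulary taken verbatim from the list above;
# the integer assignment is arbitrary but must be used consistently.
MOTION_TYPES = [
    "grasp", "pull", "align", "fold", "smooth", "insert", "rotate",
    "open", "close", "press", "hold", "release", "place",
]
MOTION_TO_ID = {name: i for i, name in enumerate(MOTION_TYPES)}
ID_TO_MOTION = {i: name for name, i in MOTION_TO_ID.items()}

print(MOTION_TO_ID["fold"])   # -> 3 with the ordering above
```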

### Why Motion Annotations?

- **Temporal Structure:** Know when manipulation phases begin/end
- **Intent Understanding:** What the human intends to do, not just kinematics
- **Error Detection:** Labeled failure modes (slip, drop, misalignment)
- **Training Signal:** Richer supervision for imitation learning

---

## 📋 Events Metadata

**File:** `meta/events.json`

Disturbances and recovery actions for select episodes.

### Disturbance Types

| Type | Description |
|------|-------------|
| `OCCLUSION` | Hand temporarily blocked from camera |
| `TARGET_MOVED` | Object shifted unexpectedly |
| `SLIP` | Object slipped during grasp |
| `COLLISION` | Unintended contact |
| `DEPTH_DROPOUT` | Depth sensor lost valid readings |

### Recovery Actions

| Action | Description |
|--------|-------------|
| `REGRASP` | Release and re-acquire object |
| `REACH_ADJUST` | Modify approach trajectory |
| `ABORT` | Stop current action |
| `REPLAN` | Compute new action sequence |
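
A hedged sketch of browsing `meta/events.json` is shown below. The file's exact JSON layout is not documented on this card, so the keys used here (`episode_id`, `events`, `type`, `recovery`) are placeholders for illustration; inspect the file and adapt them accordingly:

```python
import json

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
    filename="meta/events.json",
    repo_type="dataset",
)
with open(path) as f:
    events = json.load(f)

# Example: list episodes containing a SLIP disturbance and the recovery used.
# NOTE: the keys below are assumed, not a documented schema.
records = events if isinstance(events, list) else events.get("episodes", [])
for record in records:
    for event in record.get("events", []):
        if event.get("type") == "SLIP":
            print(record.get("episode_id"), "->", event.get("recovery"))
```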

---

## 📈 Depth Quality Metrics

| Metric | Description | Dataset Average |
|--------|-------------|-----------------|
| `valid_depth_pct` | % of frames with valid depth at the hand | 95.5% ✅ |

---

## 🚀 Usage

### With LeRobot

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")

# Access episode
episode = dataset[0]
state = episode["observation.state"]      # [252] hand pose (both hands)
rgb = episode["observation.images.rgb"]   # Video frame
task = episode["language_instruction"]    # Task description
```

### Loading Motion Annotations

```python
import json
from huggingface_hub import hf_hub_download

# Download annotations
path = hf_hub_download(
    repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
    filename="meta/annotations_motion_v1_frames.json",
    repo_type="dataset"
)

with open(path) as f:
    annotations = json.load(f)

# Get segments for Task1
task1_episodes = annotations["tasks"]["Task1"]["episodes"]
for ep in task1_episodes:
    print(f"{ep['episode_id']}: {len(ep['segments'])} segments")
```

### Combining Pose + Annotations

```python
# Get frame-level motion labels from the coarse segments
def get_motion_label(frame_idx, segments):
    for seg in segments:
        if seg["start_frame"] <= frame_idx < seg["end_frame_exclusive"]:
            return seg["motion_type"], seg["temporal_phase"]
    return None, None

# Example: label each frame of one episode.
# `episode` should hold every frame of a single episode (e.g. its parquet file),
# and `episode_annotations` is the matching entry from the annotations loaded above.
for frame_idx in range(int(episode["frame_index"].max()) + 1):
    motion, phase = get_motion_label(frame_idx, episode_annotations["segments"])
    if motion:
        print(f"Frame {frame_idx}: {motion} ({phase})")
```

---

## 📖 Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{dynamic_intelligence_2024,
  author = {Dynamic Intelligence},
  title = {Egocentric Human Motion Annotation Dataset},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
}
```

---

## 📧 Contact

- **Email:** shayan@dynamicintelligence.company
- **Organization:** Dynamic Intelligence


πŸ–ΌοΈ Hand Landmark Reference

Hand Landmarks

Each hand has 21 tracked joints. The observation.state contains 6-DoF (x, y, z, yaw, pitch, roll) for each joint.
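
The per-joint naming is not listed on this card; the sketch below assumes the common 21-landmark hand convention (wrist plus four joints per finger), which is consistent with viewer field names such as `left_thumb_cmc_yaw_deg`. Treat the ordering as an assumption and verify it against `meta/info.json`:

```python
# Assumed 21-joint order (wrist + 4 joints per finger); verify against meta/info.json.
HAND_JOINTS = [
    "wrist",
    "thumb_cmc", "thumb_mcp", "thumb_ip", "thumb_tip",
    "index_mcp", "index_pip", "index_dip", "index_tip",
    "middle_mcp", "middle_pip", "middle_dip", "middle_tip",
    "ring_mcp", "ring_pip", "ring_dip", "ring_tip",
    "pinky_mcp", "pinky_pip", "pinky_dip", "pinky_tip",
]
assert len(HAND_JOINTS) == 21
```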


πŸ‘οΈ Visualizer Tips

When using the DI Hand Pose Sample Dataset Viewer:

  • Enable plots: Click the white checkbox next to joint names (e.g., left_thumb_cmc_yaw_deg) to show that data in the graph
  • Why not all enabled by default?: To prevent browser lag, only a few plots are active initially
  • Full data access: All joint data is available in the parquet files under Files and versions