TACO: Benchmarking Generalizable Bimanual Tool-ACtion-Object Understanding

Dataset Versions

[1] Pre-released Version

Dataset links:

Dataset Contents:

  • 244 high-quality motion sequences spanning 137 <tool, action, object> triplets
  • 206 high-resolution object models (10K~100K faces per object mesh)
  • Hand-object pose and mesh annotations
  • Egocentric RGB-D videos
  • 8 allocentric RGB videos

[2] Version 1

Dataset link (Dropbox): https://www.dropbox.com/scl/fo/8w7xir110nbcnq8uo1845/AOaHUxGEcR0sWvfmZRQQk9g?rlkey=xnhajvn71ua5i23w75la1nidx&st=9t8ofde7&dl=0

Dataset link (for a backup, Dropbox): https://www.dropbox.com/scl/fo/6wux06w26exuqt004eg1a/AM4Ia7pK_b0DURAVyxpHLuY?rlkey=e76q06hyj9yqbahhipmf5ij1o&st=c30zhh8s&dl=0

Dataset Contents:

  • 2317 motion sequences spanning 151 <tool, action, object> triplets
  • 206 high-resolution object models (10K~100K faces per object mesh)
  • Hand-object pose and mesh annotations
  • Egocentric RGB-D videos
  • 12 allocentric RGB videos
  • Camera parameters
  • Automatic hand-object 2D segmentations
  • Automatic marker-removed images

Resized Version (512x376)

A pre-resized version of this dataset is available at mzhobro/taco_dataset_resized, with all allocentric videos and segmentation masks downscaled to 512x376. This provides ~25x faster data loading and 4.4x less disk space — recommended for training.

|                        | Original (this repo) | Resized      | Speedup       |
|------------------------|----------------------|--------------|---------------|
| Single sample load     | 3.91 s               | 158 ms       | 25x           |
| Throughput (4 workers) | 0.52 samp/s          | 12.2 samp/s  | 23x           |
| Disk size              | 2.2 TB               | 495 GB       | 4.4x smaller  |
| Allocentric videos     | 809 GB               | 4.3 GB       | 188x smaller  |

Full profiling details: tools/taco_analysis/profile_comparison.md

Archive Contents

| Archive | Contents |
|---------|----------|
| Marker_Removed_Allocentric_RGB_Videos.zip | 12 camera MP4s per sequence (marker-removed) |
| Allocentric_RGB_Videos.zip | Original allocentric videos (before marker removal) |
| 2D_Segmentation.zip | Per-camera segmentation masks (750x1024 npy) |
| Hand_Poses.zip | MANO params (pkl) + pre-computed 3D joints (npy) |
| Hand_Poses_3D.zip | 3D joints only: one hand_joints.npy per sequence, (T, 2, 21, 3) float32, ~160 MB |
| Object_Poses.zip | Object 6DoF transforms (npy) |
| Egocentric_RGB_Videos.zip | Egocentric RGB videos |
| Egocentric_Depth_Videos.zip.* | Egocentric depth videos (split archive) |
| object_models_released.zip | 206 high-res object meshes |
| mano_v1_2.zip | MANO hand model files |

Tip: If you only need 3D hand joints (not raw MANO parameters), download Hand_Poses_3D.zip (160 MB) instead of Hand_Poses.zip (26 GB).
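
Once extracted, the per-sequence hand_joints.npy arrays can be consumed with plain NumPy. The sketch below uses a synthetic stand-in array of the documented shape (T, 2, 21, 3); the left/right ordering of the hand axis is an assumption, and joint 0 being the wrist follows the standard MANO convention:

```python
import numpy as np

# Stand-in for `joints = np.load(".../hand_joints.npy")`; real files share
# this layout: (frames, hands, 21 MANO joints, xyz) as float32.
T = 120  # frame count varies per sequence
joints = np.zeros((T, 2, 21, 3), dtype=np.float32)

# Which index is the left vs right hand is an assumption here.
hand_a, hand_b = joints[:, 0], joints[:, 1]

# Joint 0 is the wrist in the standard MANO joint convention.
wrist_traj = hand_a[:, 0]
print(hand_a.shape, wrist_traj.shape)  # (120, 21, 3) (120, 3)
```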

Downloading the Dataset

The dataset is hosted on HuggingFace. The Allocentric_RGB_Videos/ folder (~484 GB) contains the original recordings before marker removal and is not required for the analysis scripts — those use Marker_Removed_Allocentric_RGB_Videos/ instead. You can exclude either folder to save bandwidth.

Option A: CLI

```bash
# Full download (excluding original allocentric videos)
huggingface-cli download mzhobro/taco_dataset \
    --repo-type dataset \
    --exclude "Allocentric_RGB_Videos/*" \
    --local-dir taco_dataset

# Full download (excluding marker-removed videos)
huggingface-cli download mzhobro/taco_dataset \
    --repo-type dataset \
    --exclude "Marker_Removed_Allocentric_RGB_Videos/*" \
    --local-dir taco_dataset
```

Option B: Python API

```python
from huggingface_hub import snapshot_download

# Full download (excluding original allocentric videos)
snapshot_download(
    "mzhobro/taco_dataset",
    repo_type="dataset",
    ignore_patterns=["Allocentric_RGB_Videos/*"],
    local_dir="taco_dataset",
)

# Full download (excluding marker-removed videos)
snapshot_download(
    "mzhobro/taco_dataset",
    repo_type="dataset",
    ignore_patterns=["Marker_Removed_Allocentric_RGB_Videos/*"],
    local_dir="taco_dataset",
)
```
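
snapshot_download also accepts allow_patterns, so a single archive can be fetched without pulling the whole repo. A sketch, wrapped in a function so nothing downloads until it is called; the archive filename is taken from the table above:

```python
from huggingface_hub import snapshot_download

def fetch_hand_joints_only(local_dir="taco_dataset"):
    # Download just the lightweight 3D-joint archive (~160 MB) instead of
    # the full Hand_Poses.zip (26 GB).
    return snapshot_download(
        "mzhobro/taco_dataset",
        repo_type="dataset",
        allow_patterns=["Hand_Poses_3D.zip"],
        local_dir=local_dir,
    )
```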

After downloading, reassemble split archives and extract:

```bash
cd taco_dataset
# Reassemble split files (e.g. Egocentric_Depth_Videos.zip.*.part)
./reassemble.sh
# Extract all zip archives
for z in *.zip; do unzip -qn "$z"; done
```
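
Before extracting, it can be worth verifying the reassembled archives; Python's standard zipfile module reports the first corrupt member, if any. A sketch, assuming it is run from the taco_dataset directory:

```python
import glob
import zipfile

# Check each archive's CRCs; testzip() returns None when all members are intact.
for path in sorted(glob.glob("*.zip")):
    with zipfile.ZipFile(path) as zf:
        bad = zf.testzip()
        print(path, "OK" if bad is None else f"first corrupt member: {bad}")
```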

Reproducing Analysis Plots

The taco_analysis/ directory contains scripts to generate dataset statistics and visualizations. MANO hand models are provided in mano_v1_2/.

Setup

```bash
cd taco_dataset
python -m venv .venv
source .venv/bin/activate
pip install numpy pandas matplotlib seaborn opencv-python imageio Pillow trimesh scipy chumpy
```

Running the scripts

All scripts auto-detect paths relative to the repository root. Plots are saved under taco_analysis/plots/.

```bash
# Dataset summary statistics (11 plots)
python taco_analysis/analyze_taco.py

# 3D camera + object mesh visualization
python taco_analysis/visualize_taco_3d_scene.py

# Top-down / side camera layout
python taco_analysis/visualize_taco_cameras_topdown.py

# Epipolar line grids
python taco_analysis/visualize_taco_epipolar.py

# Object mesh + hand skeleton overlay on allocentric views
python taco_analysis/render_mesh_overlay.py

# Loading performance profile
python tools/taco_analysis/profile_dataset.py --root . --output profile.md
```

Each script accepts --help for full options (e.g. --n-sequences, --seed, --csv, --root).
