
Dataset Card for Dataset Curation of 3DXTalker

Dataset Summary

This dataset is a large-scale, curated collection of talking head videos built for tasks such as high-fidelity 3D talking avatar generation, lip synchronization, and pose dynamics modeling.

The dataset aggregates and standardizes data from six prominent sources (GRID, RAVDESS, MEAD, VoxCeleb2, HDTF, CelebV-HQ), processed through a rigorous data curation pipeline to ensure high quality in face alignment, resolution, and audio-visual synchronization. It covers diverse environments (lab vs. in-the-wild) and a wide range of subjects.

Supported Tasks and Leaderboards

  • 3D Talking Head Generation: Synthesizing realistic talking videos from driving speech.
  • Audio-Driven Lip Synchronization: Aligning lip movements precisely with input speech.
  • Emotion Analysis & Synthesis: Leveraging the emotional diversity in datasets like RAVDESS and MEAD.
  • Audio-Driven Head Pose Synthesis: Modeling natural head movements and orientation directly from the driving speech.

Dataset Structure

```
trainset/
├── V0-GRID/                        # 6,570 sequences from the GRID corpus
│   ├── V0-s1-00001/
│   │   ├── audio.wav               # (N,) audio data
│   │   ├── cam.npy                 # (T, 3) camera parameters
│   │   ├── detailcode.npy          # (T, 128) facial details
│   │   ├── envelope.npy            # (N,) audio envelope
│   │   ├── expcode.npy             # (T, 50) expression codes
│   │   ├── lightcode.npy           # (T, 9, 3) lighting
│   │   ├── metadata.pkl            # sequence metadata
│   │   ├── posecode.npy            # (T, 6) head pose
│   │   ├── refimg.npy              # (C, H, W) reference image
│   │   ├── shapecode.npy           # (T, 100) shape codes
│   │   └── texcode.npy             # (T, 50) texture codes
│   ├── V0-s1-00002/
│   │   └── ... (same 11 files)
│   ├── V0-s1-00003/
│   └── ... (6,570 total sequences)
├── V1-RAVDESS/                     # 583 sequences from RAVDESS
│   ├── V1-Song-Actor_01-00001/
│   │   └── ... (same 11 files)
│   ├── V1-Song-Actor_01-00002/
│   ├── V1-Speech-Actor_01-00001/
│   ├── V1-Speech-Actor_02-00001/
│   └── ... (583 total sequences)
├── V2-MEAD/                        # 1,939 sequences from MEAD
│   ├── V2-M003-angry-00001/
│   │   └── ... (same 11 files)
│   ├── V2-M003-angry-00002/
│   ├── V2-M003-happy-00001/
│   ├── V2-W009-sad-00001/
│   └── ... (1,939 total sequences)
├── V3-VoxCeleb2/                   # 1,296 sequences from VoxCeleb2
│   ├── {sequence_id}/
│   │   └── ... (same 11 files)
│   └── ... (1,296 total sequences)
├── V4-HDTF/                        # 350 sequences from HDTF
│   ├── {sequence_id}/
│   │   └── ... (same 11 files)
│   └── ... (350 total sequences)
└── V5-CelebV-HQ/                   # 768 sequences from CelebV-HQ
    ├── {sequence_id}/
    │   └── ... (same 11 files)
    └── ... (768 total sequences)
```
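Given this layout, per-subset sequence counts can be tallied with a short directory walk. A minimal sketch, assuming the dataset has been downloaded to a local `trainset/` directory; the `count_sequences` helper is illustrative, not part of the dataset:

```python
from pathlib import Path

def count_sequences(root):
    """Map each subset directory (V0-GRID, V1-RAVDESS, ...) under a
    trainset root to its number of sequence subdirectories."""
    root = Path(root)
    return {
        subset.name: sum(1 for d in subset.iterdir() if d.is_dir())
        for subset in sorted(root.iterdir())
        if subset.is_dir()
    }
```

Applied to a full download, `count_sequences("trainset")` should report the per-subset totals shown in the tree above (6,570 for V0-GRID, 583 for V1-RAVDESS, and so on).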

Data Format Details

File Overview

| File | Type | Shape | Description |
|------|------|-------|-------------|
| `audio.wav` | Audio | (N_samples,) | Original audio waveform |
| `cam.npy` | Parameters | (N_frames, 3) | Camera parameters (position/scale) |
| `detailcode.npy` | Parameters | (N_frames, 128) | Facial detail codes (wrinkles, fine features) |
| `envelope.npy` | Parameters | (N_audio_samples,) | Audio envelope/amplitude over time |
| `expcode.npy` | Parameters | (N_frames, 50) | FLAME expression parameters (50-dim) |
| `lightcode.npy` | Parameters | (N_frames, 9, 3) | Spherical-harmonics lighting (9 coefficients × RGB) |
| `metadata.pkl` | Metadata | N/A | Sequence metadata (integer or dict) |
| `posecode.npy` | Parameters | (N_frames, 6) | 3 head pose + 3 jaw pose |
| `refimg.npy` | Image | (3, 224, 224) | Reference image (RGB, 224×224 pixels) |
| `shapecode.npy` | Parameters | (N_frames, 100) | FLAME shape parameters (100-dim) |
| `texcode.npy` | Parameters | (N_frames, 50) | Texture codes (50-dim) |
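The per-sequence files above can be read with NumPy and the standard library. A minimal loading sketch, assuming a local sequence directory; `load_sequence` and `NPY_FILES` are illustrative names, not an official API shipped with the dataset:

```python
import pickle
from pathlib import Path

import numpy as np

# Array files expected in each sequence directory (see table above).
NPY_FILES = [
    "cam", "detailcode", "envelope", "expcode", "lightcode",
    "posecode", "refimg", "shapecode", "texcode",
]

def load_sequence(seq_dir):
    """Load every available .npy array plus metadata.pkl from one
    sequence directory into a dict keyed by file stem."""
    seq_dir = Path(seq_dir)
    data = {name: np.load(seq_dir / f"{name}.npy")
            for name in NPY_FILES if (seq_dir / f"{name}.npy").exists()}
    meta_path = seq_dir / "metadata.pkl"
    if meta_path.exists():
        with open(meta_path, "rb") as f:
            data["metadata"] = pickle.load(f)
    # All per-frame arrays should share the same leading N_frames dimension.
    frame_keys = [k for k in ("cam", "detailcode", "expcode", "lightcode",
                              "posecode", "shapecode", "texcode") if k in data]
    lengths = {data[k].shape[0] for k in frame_keys}
    assert len(lengths) <= 1, f"inconsistent frame counts: {lengths}"
    return data
```

The trailing assertion doubles as a cheap integrity check when iterating over many sequences.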

Coordinate Systems and Conventions

  • FLAME model: 3D Morphable Face Model with 5023 vertices
  • Expression space: 50-dimensional linear basis
  • Shape space: 100-dimensional PCA space
  • Pose representation: 3 head pose + 3 jaw pose
  • Lighting: 2nd-order spherical harmonics (9 coefficients per color channel)

Temporal Synchronization

  • Video frames: 25 FPS (frames per second)
  • Audio samples: 16,000 samples per second
  • All video parameters (expcode, shapecode, detailcode, posecode, cam, lightcode, texcode) share the same N_frames dimension
  • Audio and video are temporally aligned (frame 0 corresponds to start of audio)
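At 25 FPS and 16 kHz, each video frame therefore spans 16,000 / 25 = 640 audio samples, which makes it straightforward to slice out the audio window belonging to any frame. A small sketch (helper name is illustrative):

```python
SAMPLE_RATE = 16_000                     # audio samples per second
FPS = 25                                 # video frames per second
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS   # 640 samples per video frame

def frame_to_sample_range(frame_idx):
    """Return the [start, end) audio-sample range covered by a video frame,
    assuming frame 0 starts at audio sample 0 (as stated above)."""
    start = frame_idx * SAMPLES_PER_FRAME
    return start, start + SAMPLES_PER_FRAME
```

For example, `frame_to_sample_range(10)` gives the half-open range `(6400, 7040)`, i.e. `audio[6400:7040]`.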

Data Statistics

The dataset comprises 11,706 video samples in total, spanning approximately 67.4 hours of talking-head footage. The data is categorized by environment (lab vs. in-the-wild) and includes varying resolutions and diverse subjects.

Detailed Statistics (from Curation Pipeline)

| Dataset | ID | Environment | Year | Raw Resolution | Size (samples) | Subjects | Total Duration (s) | Hours (h) | Avg. Duration (s/sample) |
|---------|----|-------------|------|----------------|----------------|----------|--------------------|-----------|---------------------------|
| GRID | V0 | Lab | 2006 | 720 × 576 | 6,600 | 34 | 99,257.81 | 27.57 | 15.04 |
| RAVDESS | V1 | Lab | 2018 | 1280 × 1024 | 613 | 24 | 10,071.88 | 2.80 | 16.43 |
| MEAD | V2 | Lab | 2020 | 1920 × 1080 | 1,969 | 60 | 42,868.77 | 11.91 | 21.77 |
| VoxCeleb2 | V3 | Wild | 2018 | 360p–720p | 1,326 | 1k+ | 21,528.20 | 5.98 | 16.24 |
| HDTF | V4 | Wild | 2021 | 720p–1080p | 400 | 300+ | 55,452.08 | 15.40 | 138.63 |
| CelebV-HQ | V5 | Wild | 2022 | 512 × 512 | 798 | 700+ | 13,486.20 | 3.75 | 16.90 |

Data Splits

The dataset follows a strict training and testing split protocol to ensure fair evaluation. The testing set is composed of a balanced selection from each sub-dataset.

| Dataset | ID | Total Size | Training Set | Test Set |
|---------|----|------------|--------------|----------|
| GRID | V0 | 6,600 | 6,570 | 30 |
| RAVDESS | V1 | 613 | 583 | 30 |
| MEAD | V2 | 1,969 | 1,939 | 30 |
| VoxCeleb2 | V3 | 1,326 | 1,296 | 30 |
| HDTF | V4 | 400 | 350 | 50 |
| CelebV-HQ | V5 | 798 | 768 | 30 |
| **Total** | | 11,706 | 11,506 | 200 |
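The split counts are internally consistent, which is easy to verify programmatically; the numbers below are copied directly from the split table:

```python
# Per-subset (total, train, test) counts from the split table.
SPLITS = {
    "GRID":      (6_600, 6_570, 30),
    "RAVDESS":   (613, 583, 30),
    "MEAD":      (1_969, 1_939, 30),
    "VoxCeleb2": (1_326, 1_296, 30),
    "HDTF":      (400, 350, 50),
    "CelebV-HQ": (798, 768, 30),
}

# Each subset's train/test partition covers it exactly, and the
# column sums match the summary row (11,706 / 11,506 / 200).
assert all(total == train + test for total, train, test in SPLITS.values())
totals = [sum(col) for col in zip(*SPLITS.values())]
assert totals == [11_706, 11_506, 200]
```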

Dataset Creation

Curation Rationale

Raw videos from the wild (e.g., VoxCeleb2, CelebV-HQ) often contain background noise, diverse languages, or varying resolutions. This dataset is the result of the following data curation pipeline, designed to ensure high-quality audio-visual consistency:

  1. Duration Filtering: To facilitate temporal modeling, short clips from lab datasets are concatenated to form 10–20s sequences, while wild samples shorter than 10s are filtered out.
  2. Signal-to-Noise Ratio (SNR) Filtering: Clips with strong background noise, music, or environmental interference are removed based on SNR thresholds to ensure clean audio features.
  3. Language Filtering: Linguistic consistency is enforced by using Whisper to discard non-English samples or those with low detection confidence.
  4. Audio-Visual Sync Filtering: SyncNet is used to eliminate clips with poor lip synchronization, abrupt scene cuts, or off-screen speakers (e.g., voice-overs).
  5. Resolution Normalization: All videos are resized and center-cropped to a unified 512Γ—512 resolution and re-encoded at 25 FPS with standardized RGB to harmonize data from diverse sources.
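Step 5 (center-crop to a square, then resize to 512×512) can be sketched as follows. This is an illustrative NumPy version using nearest-neighbor sampling; the actual pipeline's resampling method and tooling are not specified here, and a production pipeline would typically use ffmpeg or OpenCV with better interpolation:

```python
import numpy as np

def normalize_frame(frame, size=512):
    """Center-crop an (H, W, C) frame to a square, then resize it to
    size x size with nearest-neighbor sampling."""
    h, w = frame.shape[:2]
    side = min(h, w)                          # largest centered square
    top, left = (h - side) // 2, (w - side) // 2
    square = frame[top:top + side, left:left + side]
    # Nearest-neighbor index map from output pixels to source pixels.
    idx = (np.arange(size) * side // size).astype(np.intp)
    return square[idx][:, idx]
```

Re-encoding to 25 FPS and standardized RGB happens at the video level and is omitted from this per-frame sketch.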

Source Video Data

Citation

If you use this dataset, please cite the original source datasets:

  • GRID: Cooke, M., et al. (2006). An audio-visual corpus for speech perception and automatic speech recognition.
  • RAVDESS: Livingstone, S. R., & Russo, F. A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS).
  • MEAD: Wang, K., et al. (2020). MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation.
  • VoxCeleb2: Chung, J. S., et al. (2018). VoxCeleb2: Deep Speaker Recognition.
  • HDTF: Zhang, Z., et al. (2021). Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset.
  • CelebV-HQ: Zhu, H., et al. (2022). CelebV-HQ: A Large-Scale Video Facial Attributes Dataset.

And the EMOCA model used for parameter extraction:

  • EMOCA: Danecek, R., et al. (2022). EMOCA: Emotion Driven Monocular Face Capture and Animation.

License

Please refer to the original dataset licenses:

  • GRID: Research use only
  • RAVDESS: CC BY-NC-SA 4.0
  • MEAD, VoxCeleb2, HDTF, CelebV-HQ: Check respective dataset licenses

Notes

  • Not all sequence numbers are contiguous (some sequences may be missing due to quality filtering or processing failures)
  • File counts per sequence are consistent (11 files per sequence)
  • This is a processed/derived dataset: original videos are not included, only extracted parameters