Add paper link, GitHub link, and update metadata for VCBench

#2
Opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +43 -118
README.md CHANGED
@@ -1,157 +1,82 @@
  ---
- license: mit
- task_categories:
- - video-classification
- - question-answering
  language:
  - en
  tags:
  - video-understanding
  - temporal-reasoning
  - counting
  - benchmark
- size_categories:
- - 1K<n<10K
  ---

- # VCBench: Clipped Videos Dataset
-
- ## Dataset Description
-
- This dataset contains **4,574 clipped video segments** from the VCBench (Video Counting Benchmark), designed for evaluating spatial-temporal state maintenance capabilities in video understanding models.
-
- ### Dataset Summary
-
- - **Total Videos**: 4,574 clips
- - **Total Size**: ~80 GB
- - **Video Format**: MP4 (H.264)
- - **Categories**: 8 subcategories across object counting and event counting tasks
-
- ### Categories
-
- **Object Counting (2,297 clips)**:
- - `O1-Snap`: Current-state snapshot (252 clips)
- - `O1-Delta`: Current-state delta (98 clips)
- - `O2-Unique`: Global unique counting (1,869 clips)
- - `O2-Gain`: Windowed gain counting (78 clips)
-
- **Event Counting (2,277 clips)**:
- - `E1-Action`: Instantaneous action (1,281 clips)
- - `E1-Transit`: State transition (205 clips)
- - `E2-Periodic`: Periodic action (280 clips)
- - `E2-Episode`: Episodic segment (511 clips)
-
- ## File Naming Convention

- ### Multi-query clips
- Format: `{category}_{question_id}_{query_index}.mp4`

- Example: `e1action_0000_00.mp4`, `e1action_0000_01.mp4`

- ### Single-query clips
- Format: `{category}_{question_id}.mp4`

- Example: `o1delta_0007.mp4`, `o2gain_0000.mp4`

- ## Video Properties

- - **Encoding**: H.264 (using `-c copy` for lossless clipping)
- - **Frame Rates**: Preserved from source (3fps, 24fps, 25fps, 30fps, 60fps)
- - **Duration Accuracy**: ±0.1s from annotation timestamps
- - **Quality**: Original quality maintained (no re-encoding)

- ## Source Datasets

- Videos are clipped from multiple source datasets:
- - YouTube walking tours and sports videos
- - RoomTour3D (indoor navigation)
- - Ego4D (first-person view)
- - ScanNet, ScanNetPP, ARKitScenes (3D indoor scenes)
- - TOMATO, CODa, OmniWorld (temporal reasoning)
- - Simulated physics videos

  ## Usage

- ### Loading with Python
-
- ```python
- from huggingface_hub import hf_hub_download
- import cv2
-
- # Download a specific video
- video_path = hf_hub_download(
-     repo_id="YOUR_USERNAME/VCBench",
-     filename="e1action_0000_00.mp4",
-     repo_type="dataset"
- )
-
- # Load with OpenCV
- cap = cv2.VideoCapture(video_path)
- ```
-
- ### Batch Download

  ```bash
- # Install huggingface-cli
- pip install huggingface_hub
-
- # Download entire dataset
- huggingface-cli download YOUR_USERNAME/VCBench --repo-type dataset --local-dir ./vcbench_videos
  ```

- ## Annotations
-
- For complete annotations including questions, query points, and ground truth answers, please refer to the original VCBench repository:
- - Object counting annotations: `object_count_data/*.json`
- - Event counting annotations: `event_counting_data/*.json`
-
- Each annotation file contains:
- - `id`: Question identifier
- - `source_dataset`: Original video source
- - `video_path`: Original video filename
- - `question`: Counting question
- - `query_time` or `query_points`: Timestamp(s) for queries
- - `count`: Ground truth answer(s)

- ## Quality Validation

- All videos have been validated for:
- - Duration accuracy (100% within ±0.1s)
- - Frame rate preservation (original fps maintained)
- - No frame drops or speed changes
- - Lossless clipping (no re-encoding artifacts)

  ## Citation

- If you use this dataset, please cite the VCBench paper:
-
  ```bibtex
- @article{vcbench2026,
-   title={VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance},
-   author={[Authors]},
-   journal={[Journal/Conference]},
    year={2026}
  }
  ```

  ## License

- MIT License - See LICENSE file for details.
-
- ## Dataset Statistics
-
- | Category | Clips | Avg Duration | Total Size |
- |----------|-------|--------------|------------|
- | O1-Snap | 252 | ~2min | ~4.3 GB |
- | O1-Delta | 98 | ~1min | ~1.7 GB |
- | O2-Unique | 1,869 | ~3min | ~32 GB |
- | O2-Gain | 78 | ~1min | ~1.3 GB |
- | E1-Action | 1,281 | ~4min | ~28 GB |
- | E1-Transit | 205 | ~2min | ~3.5 GB |
- | E2-Periodic | 280 | ~3min | ~8.7 GB |
- | E2-Episode | 511 | ~2min | ~4.8 GB |
- | **Total** | **4,574** | - | **~80 GB** |
-
- ## Contact
-
- For questions or issues, please open an issue in the dataset repository.
  ---
  language:
  - en
+ license: cc-by-4.0
+ size_categories:
+ - 1K<n<10K
+ task_categories:
+ - video-text-to-text
  tags:
  - video-understanding
  - temporal-reasoning
  - counting
  - benchmark
  ---

+ # VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance in Long Videos

+ [**Paper**](https://huggingface.co/papers/2603.12703) | [**Code**](https://github.com/buaaplay/VCBench) | [**Dataset**](https://huggingface.co/datasets/buaaplay/VCBench)

+ VCBench is a streaming counting benchmark that repositions counting as a minimal probe for diagnosing the **spatial-temporal state maintenance** capability of video-language models. By querying models at multiple timepoints during video playback, VCBench observes how predictions evolve rather than checking isolated answers.

+ ## Task Taxonomy

+ VCBench decomposes state maintenance into 8 fine-grained subcategories across two dimensions:

+ ### Object Counting (tracking entities)
+ | Subcategory | Description |
+ |-------------|-------------|
+ | **O1-Snap** | How many objects are visible *at this moment*? |
+ | **O1-Delta** | How many objects appeared in the *past N seconds*? |
+ | **O2-Unique** | How many *different* individuals have appeared so far? |
+ | **O2-Gain** | How many *new* individuals appeared in the *past N seconds*? |

+ ### Event Counting (tracking actions)
+ | Subcategory | Description |
+ |-------------|-------------|
+ | **E1-Action** | How many times has an atomic action occurred so far? |
+ | **E1-Transit** | How many scene transitions have occurred so far? |
+ | **E2-Episode** | How many activity segments have occurred so far? |
+ | **E2-Periodic** | How many complete cycles of a periodic action have occurred so far? |

+ ## Dataset Summary

+ - **Total Videos**: 406 source videos (yielding 4,574 clipped segments)
+ - **Total Size**: ~80 GB
+ - **Annotations**: 1,000 counting questions with 4,576 streaming query points and 10,071 frame-by-frame annotations
+ - **Sources**: YouTube, ARKitScenes, ScanNet, ScanNet++, Ego4D, RoomTour3D, CODa, OmniWorld, and physics simulations
 
 
 

  ## Usage

+ ### Download via CLI

+ You can download the dataset using `huggingface-cli`:

  ```bash
+ huggingface-cli download buaaplay/VCBench --repo-type dataset --local-dir data/videos
  ```

+ The `chunkedVideos/` directory contains 4,576 video clips (one per query point), each truncated to its query timestamp.
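
Per the previous version of this card, clip filenames follow `{category}_{question_id}_{query_index}.mp4` for multi-query clips and `{category}_{question_id}.mp4` for single-query clips. A minimal parsing sketch (the helper name and the exact field widths in the regex are assumptions inferred from the documented examples):

```python
import re

# Assumed pattern: 4-digit question id, optional 2-digit query index,
# e.g. "e1action_0000_00.mp4" (multi-query) or "o1delta_0007.mp4" (single-query).
CLIP_RE = re.compile(
    r"^(?P<category>[a-z0-9]+)_(?P<question_id>\d{4})(?:_(?P<query_index>\d{2}))?\.mp4$"
)

def parse_clip_name(filename: str) -> dict:
    """Split a clip filename into category, question id, and optional query index."""
    m = CLIP_RE.match(filename)
    if m is None:
        raise ValueError(f"unrecognized clip name: {filename}")
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_clip_name("e1action_0000_00.mp4"))
# -> {'category': 'e1action', 'question_id': '0000', 'query_index': '00'}
print(parse_clip_name("o1delta_0007.mp4"))
# -> {'category': 'o1delta', 'question_id': '0007'}
```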
 
 
 
 

+ ### Evaluation

+ To compute metrics (GPA, MoC, UDA) on results using the official evaluation scripts:

+ ```bash
+ # Compute metrics on provided results
+ python eval/compute_metrics.py results/vcbench_gemini3flash_unified.jsonl data/vcbench_eval.jsonl
+ ```
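
For intuition, a toy sketch of how such a JSONL-vs-JSONL comparison works. This is **not** the official GPA/MoC/UDA implementation (those live in `eval/compute_metrics.py`), and the field names `id`, `count`, and `prediction` are assumptions for illustration:

```python
import json

def exact_match_rate(pred_lines, gold_lines):
    """Fraction of query points where the predicted count equals the ground truth.
    Illustrative only; field names are assumed, not taken from the official scripts."""
    gold = {r["id"]: r["count"] for r in map(json.loads, gold_lines)}
    preds = [json.loads(line) for line in pred_lines]
    hits = sum(1 for p in preds if gold.get(p["id"]) == p["prediction"])
    return hits / len(preds)

# Tiny in-memory stand-ins for the two .jsonl files.
gold = ['{"id": "e1action_0000_00", "count": 3}',
        '{"id": "e1action_0000_01", "count": 5}']
pred = ['{"id": "e1action_0000_00", "prediction": 3}',
        '{"id": "e1action_0000_01", "prediction": 4}']
print(exact_match_rate(pred, gold))  # -> 0.5
```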
 

  ## Citation

  ```bibtex
+ @article{vcbench2026,
+   title={VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance in Long Videos},
+   author={Liu, Pengyiang and Shi, Zhongyue and Hao, Hongye and Fu, Qi and Bi, Xueting and Zhang, Siwei and Hu, Xiaoyang and Wang, Zitian and Huang, Linjiang and Liu, Si},
    year={2026}
  }
  ```

  ## License

+ This dataset and code are released under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).