Dataset Card for Articulate3D
Articulate3D is a dataset providing part segmentation and articulation annotations for 3D indoor scene scans. It supplies structured labels describing object parts, hierarchical relationships, and articulation mechanisms (motion parameters). The annotations are provided for 280 high-quality ScanNet++ V1 scans, so the existing ScanNet++ object segmentations can be used alongside them.
Articulate3D was created to address significant gaps in existing 3D indoor scene datasets, which often lack the articulation, connectivity, and fine-grained part-level detail required for holistic scene understanding, interaction modeling, and embodied AI. Prior datasets typically include only object-level semantics or provide limited part or articulation information, restricting their usefulness for simulation, robotics, or interaction-oriented tasks. Articulate3D provides richly annotated, real-world, high-resolution 3D scenes with complete articulation metadata - including motion types, axes, origins, ranges, interactable parts, and fixed attachments - enabling realistic physical simulation and advanced reasoning over scene hierarchies, object structure, and part mobility. It is also designed for compatibility with the USD (Universal Scene Description) format, which supports scalable 3D content creation, physics-aware simulation, and seamless integration into robotics and simulation frameworks.
Before use, please obtain the ScanNet++ scenes separately. Our segmentations follow the ScanNet++ .ply meshes, but we do NOT provide those files within the dataset.
Dataset Details
Dataset Description
Articulate3D provides high-quality part segmentation and articulation annotations for scenes from the ScanNet++ dataset. It includes hierarchical part labels, per-face and per-point segmentation, and detailed articulation metadata describing how movable parts relate to parent objects. The dataset is released as JSON files corresponding to ScanNet++ scene IDs and is intended to support research in 3D scene understanding, part reasoning, articulation modeling, and simulation.
- Curated by: INSAIT (Articulate3D authors; see citation)
- Funded by: Ministry of Education and Science of Bulgaria (support for INSAIT, part of the Bulgarian National Roadmap for Research Infrastructure)
- Shared by: INSAIT
- License: CC-BY 4.0
Dataset Sources
- Repository: https://github.com/insait-institute/Articulate3D
- Demo / Challenge Website: https://insait-institute.github.io/articulate3d.github.io/challenge.html
Uses
Direct Use
- 3D part segmentation research
- Articulated object understanding
- Scene-level structure and hierarchy modeling
- Robotics, simulation, or digital twin applications requiring articulated components
- Benchmarks for OpenSUN3D Workshop Challenge, Track 3
Out-of-Scope Use
- Use without acquiring the original ScanNet++ scans
- Applications requiring raw sensor data (not included)
Dataset Structure
The dataset contains JSON annotation files following the naming scheme: {scannetpp_scan_id}_{parts|artic}.json
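As a quick orientation, here is a minimal sketch that groups the annotation files by scan ID using the naming scheme above. The directory name ("annotations") is purely illustrative and not part of the release; adjust it to wherever the JSON files are stored.

```python
# Minimal sketch: index {scannetpp_scan_id}_{parts|artic}.json files by scan ID.
# The "annotations" directory name is illustrative, not part of the release.
from pathlib import Path

def index_annotations(root="annotations"):
    """Return {scan_id: {"parts": Path, "artic": Path}} for all annotation files."""
    scenes = {}
    for path in Path(root).glob("*_*.json"):
        scan_id, kind = path.stem.rsplit("_", 1)  # e.g. ("<scan_id>", "parts")
        if kind in ("parts", "artic"):
            scenes.setdefault(scan_id, {})[kind] = path
    return scenes

if __name__ == "__main__":
    for scan_id, files in sorted(index_annotations().items()):
        print(scan_id, sorted(files))
```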
parts.json — Part segmentation annotations
- triIndices: face-level segmentation. The face indices follow the indexing of the mesh .ply files in the corresponding ScanNet++ scan.
- vertIndices: derived per-vertex segmentation using a voting mechanism. The vertex indices follow the indexing of the point cloud .ply files in the corresponding ScanNet++ scan.
- Hierarchy encoded via label indices, e.g.:
  3.1.cabinet
  3.1.2_1.door
  3.1.2_1.3_1.handle
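A minimal loading sketch follows. It assumes each entry in a parts file is a record exposing the fields described above (partId, label, triIndices); the exact top-level layout of the released JSON may differ, and the ScanNet++ mesh must be obtained separately as noted earlier.

```python
# Sketch only. Assumes the parts file decodes to an iterable of part records with
# "partId", "label", and "triIndices" keys; verify against the released files.
import json
import numpy as np
import trimesh  # any .ply reader works; trimesh is used here for brevity

def part_faces(parts_json, scannetpp_mesh_ply):
    """Map each part label to the (N, 3) face array it occupies on the mesh."""
    with open(parts_json) as f:
        parts = json.load(f)
    # process=False keeps the original vertex/face ordering of the ScanNet++ mesh,
    # which the triIndices/vertIndices annotations rely on.
    mesh = trimesh.load(scannetpp_mesh_ply, process=False)
    faces = np.asarray(mesh.faces)
    return {
        part["label"]: faces[np.asarray(part["triIndices"], dtype=np.int64)]
        for part in parts
    }
```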
artic.json — Articulation annotations
- pid: ID of the articulated part (refers to the part segmentation's partId)
- base: base part for the articulation; it can be inferred as the parent in the hierarchy
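The sketch below pairs each articulation record with its base and shows one plausible way to derive the hierarchy parent from an index-path label such as 3.1.2_1.door. The motion-parameter keys (type, axis, origin, range) are not spelled out in this card, so they are omitted here; the record layout is an assumption.

```python
# Sketch only. Assumes an articulation file decodes to an iterable of records
# exposing "pid" and "base"; motion-parameter keys are omitted because their
# exact names are not specified in this card.
import json

def hierarchy_parent(label):
    """Index-path parent of a label, e.g. '3.1.2_1.door' -> '3.1'
    (one plausible reading of the index scheme shown above)."""
    index_path = label.rsplit(".", 1)[0]      # drop the semantic class suffix
    pieces = index_path.split(".")
    return ".".join(pieces[:-1]) if len(pieces) > 1 else None

def articulated_pairs(artic_json):
    """Yield (pid, base) for each articulation record."""
    with open(artic_json) as f:
        records = json.load(f)
    for rec in records:
        # When "base" is absent it can be inferred as the hierarchy parent of the
        # part that pid refers to (look up its label in the parts file).
        yield rec["pid"], rec.get("base")
```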
Dataset Creation
Curation Rationale
Enable holistic scene understanding, with a focus on understanding the functionality of indoor scenes.
Source Data
Data Collection and Processing
Articulate3D is built on top of the 280 publicly available training scenes from ScanNet++. Articulate3D does not modify the raw scans; instead, it adds several new layers of annotation:
- Part segmentation
- Connectivity graphs
- Articulation roles (movable, interactable, fixed)
- Motion parameters (motion type, axis, origin, range)
- Mass annotations for simulation
Who are the source data producers?
The data is expert-annotated: five expert annotators conducted the primary annotations, with a sixth expert performing review and refinement.
Who are the annotators?
Annotations were produced by five expert annotators, with a sixth expert reviewer performing quality checks, corrections, and validation. Tools extended from MultiScan’s annotation suite—with added support for connectivity and articulation—were used to ensure consistency and scalability.
Citation
BibTeX:
@InProceedings{Halacheva_2025_ICCV,
author = {Halacheva, Anna-Maria and Miao, Yang and Zaech, Jan-Nico and Wang, Xi and Van Gool, Luc and Paudel, Danda Pani},
title = {Articulate3D: Holistic Understanding of 3D Scenes as Universal Scene Description},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2025},
pages = {5633-5644}
}
Dataset Card Authors
Anna-Maria Halacheva, INSAIT
anna-maria.halacheva@insait.ai
Dataset Card Contact
Anna-Maria Halacheva, INSAIT
