---
license: cc-by-nc-4.0
tags:
- robotics
- drone-navigation
- vision-language-navigation
- open-vocabulary-detection
- embodied-ai
- habitat-sim
- isaac-sim
- benchmark
task_categories:
- object-detection
- depth-estimation
- robotics
size_categories:
- 100K<n<1M
pretty_name: "Yonder: A 4.65M-Frame Drone Navigation Dataset"
language:
- en
---

# Yonder: A 4.65M-Frame Drone-Perspective Dataset for Indoor Navigation

> **The cross-simulator generalization gap.**
> Yonder is the largest publicly available drone-perspective dataset for indoor
> navigation, plus a closed-loop benchmark designed to expose a failure mode invisible
> to standard offline metrics: perception trained on one simulator does not transfer
> cleanly to a different simulator, even when both target the same task.

This dataset accompanies the NeurIPS 2026 Datasets & Benchmarks submission:
*"Yonder: A 4.65M-Frame Drone Navigation Dataset and the Cross-Simulator Generalization Gap."*

## Headline numbers (paper subset)

- **4,650,324** drone-perspective frames
- **387,527** waypoint NPZ files, one per waypoint, each holding 12 yaw orientations (387,527 × 12 = 4,650,324 frames)
- **167** indoor 3D environments (all from HSSD, all with semantic annotations)
- **52 sensor arrays** per NPZ (stereo RGB, depth, IR, LiDAR-360, semantic segmentation, pose, IMU)
- **~3.3 TB** total

## What's in a waypoint

Every waypoint NPZ contains a single drone pose with **12 yaw orientations**. For each yaw:

| Sensor | Resolution / Format |
|---|---|
| Left RGB | 640×480, uint8 |
| Right RGB | 640×480, uint8 |
| Forward depth | 640×480, float16 (meters) |
| Landing camera | 640×480, uint8 (downward) |
| Up IR / Down IR | 640×480, uint8 |
| LiDAR-360 | 1024 × 16 channel, float32 (meters) |
| Position / Orientation / IMU | float32 (Habitat-Sim world frame) |
| Semantic segmentation | 640×480 instance + class IDs (all 167 scenes) |
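
Each NPZ is a flat dictionary of named arrays. As a minimal sketch (assuming a waypoint file downloaded per Quick start below, and the yaw-prefixed key naming shown there), you can group the arrays by orientation:

```python
import numpy as np
from collections import defaultdict

# Minimal sketch: list every array in a waypoint NPZ and group the
# yaw-prefixed sensors by orientation. Assumes the key naming shown in
# Quick start below (e.g. "yaw000_left_rgb"); data.files is authoritative.
with np.load("hssd-102343992_wp0000.npz") as data:
    by_yaw = defaultdict(list)
    for key in data.files:
        if key.startswith("yaw"):
            yaw, sensor = key.split("_", 1)
            by_yaw[yaw].append(sensor)
        else:  # pose, IMU, LiDAR-360, and other non-yaw arrays
            print(f"{key}: {data[key].shape} {data[key].dtype}")
    for yaw in sorted(by_yaw):
        print(yaw, sorted(by_yaw[yaw]))
```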

## Source environments

Yonder is rendered from a single open-source 3D scene dataset:

| Source | License | Scenes | Waypoints | Has Semantics |
|---|---|---:|---:|---|
| [HSSD](https://huggingface.co/datasets/hssd/hssd-hab) (Habitat Synthetic Scenes Dataset) | CC-BY-NC-4.0 | 167 | 387,527 | 167 of 167 |

> Earlier collection passes also covered ReplicaCAD (84 scenes, CC-BY-4.0, no semantics),
> Replica (Meta research-only terms), and HM3D (Matterport academic EULA). Replica and
> HM3D were excluded because their upstream licenses do not permit open redistribution
> of derivative renders. ReplicaCAD was excluded because it lacks semantic annotations
> and was not used in any reported experiment, keeping the release a single-source
> artifact in which every scene appears in the paper's experiments. **Yonder ships its
> own rendered observations only; no upstream meshes are redistributed.**

## Preview gallery

A curated frame from each of the **227 scenes** (167 HSSD-derived
indoor + 60 Isaac Sim-native). Each frame is drawn from the scene's actual NPZ drone
trajectory, scored by an edge-density plus spatial-variance heuristic over 120 candidate
frames per scene (10 waypoints × 12 yaws); a sketch of the heuristic follows the gallery.
**Full gallery:** [`previews/INDEX.md`](previews/INDEX.md).

| | | | |
|---|---|---|---|
| ![hssd-102817053](previews/augmented/hssd-102817053.jpg) | ![hssd-105515151_173104068](previews/augmented/hssd-105515151_173104068.jpg) | ![hssd-107734119_175999932](previews/augmented/hssd-107734119_175999932.jpg) | ![hssd-107734146_175999971](previews/augmented/hssd-107734146_175999971.jpg) |
| `hssd-102817053` | `hssd-105515151_173104068` | `hssd-107734119_175999932` | `hssd-107734146_175999971` |
| ![hssd-108736872_177263607](previews/augmented/hssd-108736872_177263607.jpg) | ![hssd-102816729](previews/augmented/hssd-102816729.jpg) | ![hssd-104348133_171513054](previews/augmented/hssd-104348133_171513054.jpg) | ![hssd-108294870_176710551](previews/augmented/hssd-108294870_176710551.jpg) |
| `hssd-108736872_177263607` | `hssd-102816729` | `hssd-104348133_171513054` | `hssd-108294870_176710551` |
| ![hssd-108736722_177263382](previews/augmented/hssd-108736722_177263382.jpg) | ![hssd-108294465_176709960](previews/augmented/hssd-108294465_176709960.jpg) | ![hssd-106366323_174226647](previews/augmented/hssd-106366323_174226647.jpg) | ![hssd-108294624_176710203](previews/augmented/hssd-108294624_176710203.jpg) |
| `hssd-108736722_177263382` | `hssd-108294465_176709960` | `hssd-106366323_174226647` | `hssd-108294624_176710203` |
| ![warehouse_v00](previews/isaac-sim-native/warehouse_v00.jpg) | ![hospital_v00](previews/isaac-sim-native/hospital_v00.jpg) | ![rivermark_v00](previews/isaac-sim-native/rivermark_v00.jpg) | ![office_v00](previews/isaac-sim-native/office_v00.jpg) |
| `warehouse_v00` | `hospital_v00` | `rivermark_v00` | `office_v00` |
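
For reference, a minimal sketch of what such a scoring heuristic can look like, with assumed operators and equal weighting; the released previews may use different edge and variance definitions:

```python
import numpy as np

def frame_score(rgb: np.ndarray) -> float:
    """Score a candidate preview frame. Illustrative approximation of the
    edge-density + spatial-variance heuristic described above; the exact
    operators and weights used for the released previews may differ."""
    gray = rgb.astype(np.float32).mean(axis=2)
    # Edge density: mean gradient magnitude over the frame.
    gy, gx = np.gradient(gray)
    edge_density = float(np.hypot(gx, gy).mean())
    # Spatial variance: variance of 8x8-pixel block means, rewarding frames
    # whose content varies across the image rather than being flat.
    h, w = gray.shape
    blocks = gray[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8)
    spatial_variance = float(blocks.mean(axis=(1, 3)).var())
    return edge_density + spatial_variance
```

The best-scoring of a scene's 120 candidates becomes its gallery frame.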

## License

- **Dataset (this repo):** [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)
  inheriting HSSD's NonCommercial restriction. HSSD attribution preserved per source license.
- **Code, model checkpoints, benchmark:** Apache-2.0 (see linked code repo).

## Quick start

```python
from huggingface_hub import snapshot_download
import numpy as np

# Download a single scene
path = snapshot_download(
    repo_id="astralhf/yonder",
    repo_type="dataset",
    allow_patterns="indoor/drone-data/augmented/hssd-102343992/*.npz",
)

# Load a waypoint
data = np.load(f"{path}/indoor/drone-data/augmented/hssd-102343992/hssd-102343992_wp0000.npz")
left_rgb_yaw0 = data["yaw000_left_rgb"]    # shape (480, 640, 3)
forward_depth_yaw0 = data["yaw000_forward_depth"]  # shape (480, 640) float16
lidar = data["lidar360"]                  # shape (1024, 16) float32
```
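
Continuing from the snippet above (`path` comes from the `snapshot_download` call), a sketch for iterating every waypoint in the downloaded scene directory:

```python
from pathlib import Path
import numpy as np

# Iterate all waypoint NPZs in the scene downloaded above.
scene_dir = Path(path) / "indoor/drone-data/augmented/hssd-102343992"
for npz_path in sorted(scene_dir.glob("*.npz")):
    with np.load(npz_path) as data:
        print(npz_path.name, len(data.files), "arrays")
```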

## Reviewer sample

For NeurIPS reviewers and anyone who wants a quick smoke test before downloading
multiple terabytes, see the companion subset:
**[`astralhf/yonder-sample`](https://huggingface.co/datasets/astralhf/yonder-sample)**
(~500 MB, one HSSD scene, all 12 yaws per waypoint).
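
The same `snapshot_download` call works against the sample repo:

```python
from huggingface_hub import snapshot_download

# ~500 MB instead of ~3.3 TB; same NPZ schema as the full dataset.
sample_path = snapshot_download(repo_id="astralhf/yonder-sample", repo_type="dataset")
```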

## Repository layout

```
indoor/drone-data/augmented/         ← paper's primary subset (4.65M frames)
├── hssd-102343992/
│   ├── manifest.json
│   ├── hssd-102343992_wp0000.npz
│   └── ...
└── hssd-*/                          (167 scene dirs)

annotations/                         ← COCO-format detection labels
└── hssd-*/
    ├── annotations.json             (per-scene bbox annotations)
    └── object_inventory.json        (per-scene object catalog)

indoor/isaac-sim-native/             ← cross-simulator evaluation subset
├── scenes/                          (60 Isaac-rendered indoor scenes)
└── annotations/

outdoor/                             ← sibling resources (not part of the NeurIPS paper)
├── boreal/  coastal/  desert/  forest/  lunar/
├── infinigen/                       (procedural Infinigen scenes)
├── carla-cities/                    (8 CARLA towns, full UE4 city geometry)
└── carla-roads/                     (8 CARLA towns, drivable surface only)
```

Each scene directory under `indoor/drone-data/augmented/` contains:
- `manifest.json` — scene-level metadata (scene_id, sampling parameters, total_waypoints,
  total_frames, unique_object_ids).
- `<scene_id>_wp####.npz` — one NPZ file per waypoint, each holding 12 yaws across all sensor
  modalities.
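
As a minimal sketch of reading this metadata (paths follow the layout above; the manifest fields are those listed here, and `annotations.json` is assumed to use the standard COCO top-level keys):

```python
import json
from pathlib import Path

scene = "hssd-102343992"
root = Path(".")  # repo root after download

# Scene-level metadata.
manifest = json.loads((root / f"indoor/drone-data/augmented/{scene}/manifest.json").read_text())
print(manifest["scene_id"], manifest["total_waypoints"], manifest["total_frames"])

# COCO-format detection labels ("images"/"annotations"/"categories" keys assumed).
coco = json.loads((root / f"annotations/{scene}/annotations.json").read_text())
print(len(coco["images"]), "images |", len(coco["annotations"]), "boxes |",
      len(coco["categories"]), "classes")
```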

## Cross-simulator evaluation subset (`indoor/isaac-sim-native/`)

Yonder's central thesis — that perception trained on one simulator does not transfer
cleanly to another — requires a different-simulator evaluation set. We provide
**60 Isaac-rendered indoor scenes** (warehouse, hospital, office variants) under
`indoor/isaac-sim-native/scenes/`, with companion annotations under
`indoor/isaac-sim-native/annotations/`. These scenes are used for closed-loop
navigation evaluation in the paper.

## Sibling resources (not part of the NeurIPS paper)

The repository also hosts outdoor scene assets for separate research not described
in the Yonder paper. These are USD scene geometry (different schema from the indoor
waypoint NPZs):

- `outdoor/{boreal,coastal,desert,forest,lunar}/` — biome-specific scenes with
  per-biome `previews/`, `prototypes/`, and `scenes/` directories.
- `outdoor/infinigen/` — procedural scenes generated with [Infinigen](https://infinigen.org/),
  16 scenes spanning canyon, coast, desert, forest, mountain biomes.
- `outdoor/carla-cities/` — full UE4 city geometry from CARLA's 8 towns,
  ~14k mesh instances total. Open in Isaac Sim with
  `omni.usd.get_context().open_stage("outdoor/carla-cities/Town03/scene.usd")`.
- `outdoor/carla-roads/` — drivable-surface USD only, derived from CARLA OpenDRIVE
  (MIT-licensed). 8 towns, plus 380k+ semantic axis-aligned bboxes per town.

These outdoor resources are governed by their respective upstream licenses (CARLA: MIT;
Infinigen: BSD-3-Clause). They are **not** part of the NeurIPS paper's claims and
should be evaluated against their own licenses if used.

## Splits

Yonder is released without predefined train/val/test splits. The paper's experiments
hold out 10% of waypoints, uniformly sampled across scenes; the exact split file is
released alongside the code.
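
The sketch below only illustrates the shape of such a split (a uniform 10% hold-out under a fixed seed); it will not reproduce the paper's exact held-out set, so use the released split file for comparisons:

```python
import numpy as np

def holdout_split(waypoint_ids, frac=0.10, seed=0):
    """Illustrative uniform hold-out; NOT the paper's split. Use the
    split file released with the code to reproduce reported numbers."""
    rng = np.random.default_rng(seed)
    ids = np.array(sorted(waypoint_ids))
    held_out = rng.random(len(ids)) < frac
    return ids[~held_out].tolist(), ids[held_out].tolist()

train_ids, eval_ids = holdout_split([f"wp{i:04d}" for i in range(1000)])
```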

## Intended use

✅ **Recommended:**
- Training drone-perspective perception models (open-vocabulary detection,
  monocular depth, semantic segmentation).
- Studying cross-simulator generalization. *Pair offline metrics with closed-loop
  evaluation in a different simulator.* The whole point of Yonder is that doing
  only the former is misleading.
- Benchmarking long-horizon vision-language navigation, when paired with a
  closed-loop evaluator.

⚠ **Use with caution:**
- End-to-end navigation policy training. Yonder is a perception-training resource;
  we do not provide expert trajectories for behavior cloning.
- Any metric reported on Yonder's offline evaluation split alone, without a
  closed-loop counterpart, may not reflect deployment performance.

🚫 **Not for:**
- Commercial use (the entire dataset inherits HSSD's CC-BY-NC restriction).
- Surveillance, biometric identification, or any application of open-vocabulary
  detection to identify specific real persons. The simulated scenes contain no
  real persons; transfer to person-identification tasks is out of scope and
  expressly disallowed.

## Responsible AI considerations

- **No real persons.** Yonder is rendered entirely from synthetic 3D scenes (HSSD);
  no humans are present in any frame. No PII, no biometric data, no faces.
- **Synthetic-only domain.** Performance on Yonder does not transfer to real
  imagery without explicit sim-to-real treatment. Anyone deploying perception
  trained on Yonder in the real world must perform their own real-domain validation.
- **Geographic / cultural bias.** HSSD scenes are biased toward Western residential
  interiors. Models trained on Yonder may underperform on interior styles outside
  this distribution.
- **Cross-simulator evaluation is mandatory.** The dataset's primary contribution
  is making it easy to discover that fine-tuning gains can be illusory.
  Models reported on Yonder should be validated in a different simulator (or
  the real world) before claims of improvement are made.

See the accompanying Croissant metadata (`yonder.croissant.json`) for machine-readable
RAI fields.

## Citation

```bibtex
@inproceedings{anonymous2026yonder,
  title  = {Yonder: A 4.65M-Frame Drone Navigation Dataset and the Cross-Simulator Generalization Gap},
  author = {Anonymous Author(s)},
  booktitle = {Advances in Neural Information Processing Systems (Datasets and Benchmarks Track)},
  year   = {2026},
  note   = {Anonymized for double-blind review.}
}
```

## Authors and contact

Authors and affiliation are anonymized for NeurIPS double-blind review. After
review, the camera-ready version will list authors and contact information here.

## Changelog

- **2026-05-01** — Initial release: 167 HSSD scenes / 387,527 waypoints / 4.65M frames /
  semantic annotations for all 167. HM3D, Replica, and ReplicaCAD subsets removed prior
  to release (HM3D and Replica for license incompatibility; ReplicaCAD for lacking
  semantic annotations and not being used in any reported experiment). Three Habitat
  test scenes (apartment_1, skokloster-castle, van-gogh-room) also excluded for
  upstream-license incompatibility.