# Language Data

Curated linguistic datasets covering 23,740 Glottolog languoids (languages, dialects, and families). Pulls from Glottolog, WALS, PHOIBLE, and Joshua Project, plus original integration work combining them into a single queryable format.
## Datasets

### Core Files
| File | Records | Size | Source | License |
|---|---|---|---|---|
| `glottolog_coordinates.json` | 21,329 languoids | 3.8 MB | Glottolog | CC-BY-4.0 |
| `glottolog_languoid.csv` | 23,740 languoids | 1.8 MB | Glottolog | CC-BY-4.0 |
| `phoible.csv` | 105,484 phonemes | 24 MB | PHOIBLE | CC-BY-SA-3.0 |
| `wals_languages.csv` | 3,573 languages | 460 KB | WALS | CC-BY-4.0 |
| `wals_values.csv` | 76,475 feature values | 4.5 MB | WALS | CC-BY-4.0 |
| `wals_parameters.csv` | 192 typological features | 8 KB | WALS | CC-BY-4.0 |
| `world_languages_integrated.json` | 7,130 languages | 8.1 MB | Original integration | MIT |
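Since `phoible.csv` stores one row per phoneme, per-language inventory sizes fall out of a simple row count. This is a hypothetical sketch on inline sample rows; the column names used below (`InventoryID`, `LanguageName`, `Phoneme`) are assumptions about the file layout, not verified against this copy of the data.

```python
import csv
import io
from collections import Counter

# Inline sample standing in for phoible.csv; the columns
# (InventoryID, LanguageName, Phoneme) are assumed, not verified.
sample = """InventoryID,LanguageName,Phoneme
1,Korean,p
1,Korean,t
1,Korean,k
2,Hawaiian,k
2,Hawaiian,l
"""

# One row per phoneme, so inventory size = row count per inventory.
sizes = Counter(
    (row["InventoryID"], row["LanguageName"])
    for row in csv.DictReader(io.StringIO(sample))
)

for (inv_id, name), size in sorted(sizes.items()):
    print(f"{name} (inventory {inv_id}): {size} phonemes")
```

To run this against the real file, replace the `io.StringIO(sample)` wrapper with an open file handle and check the actual header row first.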
### Subdirectories

- `language-families/` -- Language family trees, Proto-Indo-European reconstructions, and speaker counts. Restructured from Glottolog with enrichment from multiple sources; 10 JSON files covering family hierarchies up to 18 levels deep.
- `historical-corpora/` -- CLMET historical English corpus metadata.
- `lemmatization/` -- Historical lemma mappings (738 entries) for normalizing older English texts.
- `reference-corpora/` -- Brown Corpus part-of-speech statistics.
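A typical use of the lemma mappings is token-level normalization of older English text. The sketch below assumes the mappings resolve to a plain `{variant: lemma}` dict; the actual on-disk format in `lemmatization/` may differ, and the three example entries are illustrative, not taken from the data.

```python
# Hypothetical sketch of applying historical lemma mappings.
# The {variant: lemma} dict shape and these entries are assumptions,
# not the actual contents of the lemmatization/ files.
lemma_map = {
    "spake": "speak",
    "hath": "have",
    "goeth": "go",
}

def normalize(tokens, mapping):
    """Replace historical variants with their modern lemmas,
    leaving unmapped tokens unchanged."""
    return [mapping.get(tok.lower(), tok) for tok in tokens]

print(normalize(["He", "spake", "and", "goeth"], lemma_map))
```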
## The Integrated Dataset

`world_languages_integrated.json` is the main original contribution. It merges four sources by ISO 639-3 code:
- Glottolog -- coordinates, family classification, macroarea (7,040 languages)
- Joshua Project -- speaker populations, demographics (7,130 languages)
- Language history -- family tree positions, historical periods (198 languages)
- US indigenous -- endangerment status, revitalization efforts (13 languages)
```python
import json

with open("world_languages_integrated.json") as f:
    languages = json.load(f)

# Each record looks like:
# {
#   "iso_639_3": "eng",
#   "name": "English",
#   "glottolog": { "latitude": 53.0, "longitude": -1.0, "family_name": "Indo-European", ... },
#   "joshua_project": { "population": 379007140, "countries": [...], ... },
#   "speaker_count": { "count": 379007140, "source": "joshua_project_aggregated", ... },
#   "data_sources": ["joshua_project", "glottolog"]
# }

# Find endangered languages with coordinates
endangered = [
    lang for lang in languages
    if lang.get("glottolog", {}).get("latitude")
    and lang.get("speaker_count", {}).get("count", float("inf")) < 1000
]
```
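The merge itself boils down to keying each source's records on ISO 639-3 and folding them into one record per code, tracking which sources contributed. A minimal sketch with toy data (the real integration handles many more fields; the two sample records are illustrative):

```python
# Toy per-source tables keyed by ISO 639-3 code; values are made up
# for illustration, not taken from the actual files.
glottolog = {"eng": {"latitude": 53.0, "longitude": -1.0}}
joshua_project = {"eng": {"population": 379007140}, "haw": {"population": 24000}}

merged = {}
for source_name, records in [("glottolog", glottolog),
                             ("joshua_project", joshua_project)]:
    for iso, fields in records.items():
        # One output record per ISO code, created on first sight.
        entry = merged.setdefault(iso, {"iso_639_3": iso, "data_sources": []})
        entry[source_name] = fields
        entry["data_sources"].append(source_name)

# "eng" ends up with both sources attached, "haw" with only one.
```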
This dataset is also available separately on Hugging Face as `lukeslp/world-languages`.
## What's Not Here

ISO 639-3 code tables -- SIL International doesn't allow redistribution. Get them directly from iso639-3.sil.org.
## License
Multiple licenses apply. See LICENSE.md for details.
- Our work (integration, curation, language-families): MIT
- Glottolog, WALS: CC-BY-4.0
- PHOIBLE: CC-BY-SA-3.0 (share-alike)
## Sources
- Glottolog -- Max Planck Institute for Evolutionary Anthropology
- WALS Online -- World Atlas of Language Structures
- PHOIBLE -- Phonological inventory database
- Joshua Project -- People group demographics
## Author
Luke Steuber
- Website: lukesteuber.com
- Bluesky: @lukesteuber.com