arxiv:2602.03772

UniGeM: Unifying Data Mixing and Selection via Geometric Exploration and Mining

Published on Feb 3, 2026
Abstract

AI-generated summary: The UniGeM framework unifies data mixing and selection for LLM training by treating curation as manifold approximation, achieving improved data efficiency and performance through hierarchical macro-exploration and micro-mining processes.
The scaling of Large Language Models (LLMs) is increasingly limited by data quality. Most methods handle data mixing and sample selection separately, which can break the structure in code corpora. We introduce UniGeM, a framework that unifies mixing and selection by treating data curation as a manifold approximation problem, without training proxy models or relying on external reference datasets. UniGeM operates hierarchically: Macro-Exploration learns mixing weights via stability-based clustering, and Micro-Mining filters high-quality instances by their geometric distribution to ensure logical consistency. Validated by training 8B and 16B MoE models on 100B tokens, UniGeM achieves 2.0× data efficiency over a random baseline and outperforms SOTA methods on reasoning-heavy evaluations and multilingual generalization.
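The abstract only sketches the two stages, so the following is a minimal, hypothetical Python sketch of how a hierarchical curate-by-geometry pipeline of this general shape could look: document embeddings are clustered with a seed-stability heuristic to derive mixing weights (a stand-in for Macro-Exploration), then samples inside each cluster are ranked by local k-NN density and the sparsest tail is dropped (a stand-in for Micro-Mining). The function names, the stability criterion, and the density filter are all illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a two-stage geometric data-curation pipeline.
# NOT the paper's method: the stability heuristic and density filter
# are illustrative assumptions inspired by the abstract.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.neighbors import NearestNeighbors


def macro_exploration(emb, k_candidates=(4, 8, 16), seeds=(0, 1)):
    """Pick a cluster count whose labels are most stable across random
    seeds, then derive mixing weights from cluster mass (an assumption)."""
    best_k, best_stability = None, -1.0
    for k in k_candidates:
        labels = [
            KMeans(n_clusters=k, n_init=10, random_state=s).fit_predict(emb)
            for s in seeds
        ]
        stability = adjusted_rand_score(labels[0], labels[1])
        if stability > best_stability:
            best_k, best_stability = k, stability
    final = KMeans(n_clusters=best_k, n_init=10,
                   random_state=seeds[0]).fit_predict(emb)
    counts = np.bincount(final, minlength=best_k)
    weights = counts / counts.sum()  # one mixing weight per cluster
    return final, weights


def micro_mining(emb, labels, keep_frac=0.8, n_neighbors=10):
    """Within each cluster, keep the densest points (smallest mean k-NN
    distance) as a crude proxy for a 'geometric distribution' filter."""
    keep = np.zeros(len(emb), dtype=bool)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        if len(idx) <= n_neighbors:  # too small to rank; keep everything
            keep[idx] = True
            continue
        nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(emb[idx])
        dist, _ = nn.kneighbors(emb[idx])
        score = dist[:, 1:].mean(axis=1)        # drop the self-distance column
        cutoff = np.quantile(score, keep_frac)  # keep the densest keep_frac
        keep[idx[score <= cutoff]] = True
    return keep


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(2000, 64)).astype(np.float32)  # stand-in embeddings
    labels, weights = macro_exploration(emb)
    mask = micro_mining(emb, labels)
    print(f"clusters: {len(weights)}, mixing weights: {np.round(weights, 3)}")
    print(f"kept {mask.sum()}/{len(mask)} samples after micro-mining")
```

In a real pipeline the embeddings would come from an encoder over the pretraining corpus, and the per-cluster weights would set sampling rates during training; how UniGeM actually computes stability and scores geometric quality is specified in the paper, not here.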
