MonoWeb Models

Pretrained language models released alongside the paper:

The Role of Mixed-Language Documents for Multilingual Large Language Model Pretraining

Associated dataset: UCLNLP/monoweb-dataset

Model Details

All models are decoder-only transformers with 1.35B parameters, trained from scratch using the Llama-2 tokenizer (32K vocabulary). Architecture: 24 layers, hidden dimension 2048, 16 attention heads, context length 2048. Training was performed with Megatron-LM for ~143B tokens (34K steps).
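As a sanity check on the stated size, the 1.35B figure can be reproduced from the architecture numbers above. The FFN intermediate size and whether the output projection is untied are not stated on this card; the values below (intermediate size 5504, untied head) are assumptions chosen to illustrate how the count decomposes for a Llama-style block.

```python
# Rough parameter count for the stated architecture:
# 24 layers, hidden dim 2048, 32K vocab, Llama-style blocks.
# NOTE: the FFN intermediate size (5504) and the untied LM head are
# ASSUMPTIONS; they are not stated on this card.

vocab, hidden, layers, ffn = 32_000, 2_048, 24, 5_504

embed = vocab * hidden        # token embedding matrix
lm_head = vocab * hidden      # output projection (assumed untied)
attn = 4 * hidden * hidden    # Q, K, V, O projections per layer
mlp = 3 * hidden * ffn        # gate, up, down projections per layer
norms = 2 * hidden            # two RMSNorm weight vectors per layer

total = embed + lm_head + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total / 1e9:.2f}B parameters")  # -> 1.35B
```

Under these assumptions the total comes out to about 1.345B, consistent with the reported 1.35B.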

Model Variants

Models are organized by language pair and training data configuration:

| Folder | Language Pair | Training Data |
|---|---|---|
| ckpt_exp_en_de_baseline | English–German | FineWeb (full corpus, including bilingual docs) |
| ckpt_exp_en_de_monoweb | English–German | MonoWeb (bilingual docs removed) |
| ckpt_exp_en_de_onlyparallel | English–German | MonoWeb + parallel docs reintroduced |
| ckpt_exp_en_de_onlycodeswitch | English–German | MonoWeb + code-switching docs reintroduced |
| ckpt_exp_en_es_baseline | English–Spanish | FineWeb (full corpus, including bilingual docs) |
| ckpt_exp_en_es_monoweb | English–Spanish | MonoWeb (bilingual docs removed) |
| ckpt_exp_en_es_onlyparallel | English–Spanish | MonoWeb + parallel docs reintroduced |
| ckpt_exp_en_es_onlycodeswitch | English–Spanish | MonoWeb + code-switching docs reintroduced |
| ckpt_exp_en_fr_baseline | English–French | FineWeb (full corpus, including bilingual docs) |
| ckpt_exp_en_fr_monoweb | English–French | MonoWeb (bilingual docs removed) |
| ckpt_exp_en_fr_onlyparallel | English–French | MonoWeb + parallel docs reintroduced |
| ckpt_exp_en_fr_onlycodeswitch | English–French | MonoWeb + code-switching docs reintroduced |

Each folder contains checkpoints saved every 2,000 steps from iter_2000 to iter_36000 (18 checkpoints per model).
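With 12 variants and 18 checkpoints each, fetching everything is rarely what you want. A minimal sketch of selecting a single checkpoint via a glob pattern follows; the repository id and the exact iteration-directory naming (e.g. zero-padding) are assumptions inferred from the folder names above, so verify them on the Hub before use.

```python
# Build an allow-pattern for one checkpoint of one model variant.
# REPO_ID and the directory naming below are ASSUMPTIONS inferred from
# this card's folder names; check the actual repo layout before use.
REPO_ID = "UCLNLP/monoweb"

def checkpoint_pattern(pair: str, config: str, step: int) -> str:
    """Glob pattern for one saved checkpoint, e.g. ('en_de', 'monoweb', 36000)."""
    return f"ckpt_exp_{pair}_{config}/iter_{step}/*"

pattern = checkpoint_pattern("en_de", "monoweb", 36000)
print(pattern)  # -> ckpt_exp_en_de_monoweb/iter_36000/*

# Then fetch only that checkpoint instead of the full repository:
# from huggingface_hub import snapshot_download  # pip install huggingface_hub
# snapshot_download(repo_id=REPO_ID, allow_patterns=[pattern])
```

`snapshot_download` with `allow_patterns` restricts the download to matching files, which avoids pulling all 18 checkpoints per model.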
