nanochat-d20
Training Pipeline
1. Base training: Pretraining on the FineWeb-EDU dataset using the nanochat framework
2. Mid-training: General instruction tuning on SmolTalk, MMLU, GSM8K, and spelling tasks
3. SFT (Supervised Fine-Tuning): Chat-specific training on ARC, GSM8K, and SmolTalk
4. RL (Reinforcement Learning): Optional GRPO-style training on GSM8K (if included)
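The GRPO-style RL step scores a group of sampled answers to the same prompt relative to each other rather than against a learned value function. A minimal sketch of the group-relative advantage computation (the binary reward and group size here are illustrative assumptions, not taken from this repository):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each reward against its own group.

    `rewards` holds one scalar reward per completion sampled from the
    same prompt; each completion gets (reward - group mean) / group std.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# e.g. four sampled answers to one GSM8K problem, reward 1.0 if correct:
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
# correct answers get positive advantage, wrong ones negative
```

Completions that beat their group's average are reinforced and the rest are penalized, which is what lets GRPO skip the separate critic model used by PPO.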
Repository Structure
├── tokenizer/
│   ├── tokenizer.pkl                # Tokenizer
│   └── token_bytes.pt               # Token byte mappings
├── mid_checkpoints/d34/             # Mid-training checkpoint
│   ├── model_*.pt
│   └── meta_*.json
├── chatsft_checkpoints/d20/         # SFT checkpoint
│   ├── model_*.pt
│   └── meta_*.json
├── chatsft_checkpoints_int8/d20/    # SFT checkpoint (int8-quantized)
│   ├── model_*.pt
│   └── meta_*.json
├── chatrl_checkpoints/d20/          # RL checkpoint (if available)
│   ├── model_*.pt
│   └── meta_*.json
├── report/                          # Evaluation reports
│   └── report.md
└── logs/                            # Training logs
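The chatsft_checkpoints_int8 directory stores quantized weights, which shrink the checkpoint roughly 4x versus fp32. As a sketch of the idea, symmetric per-tensor int8 quantization (an assumed scheme for illustration; the repository does not document its exact method) maps each weight to an integer in [-127, 127] plus one shared scale:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: floats -> ints in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor; any scale round-trips correctly
    return [max(-127, min(127, round(w / scale))) for w in weights], scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 0.9]
q, s = quantize_int8(w)          # small ints plus one float scale
w_hat = dequantize_int8(q, s)    # approximates w within scale / 2
```

Per-tensor symmetric quantization keeps only one extra float per tensor, at the cost of resolution when a tensor mixes very large and very small weights.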
License
MIT License (same as nanochat)
Acknowledgments
- Andrej Karpathy for the nanochat framework
@misc{nanochat,
  author    = {Andrej Karpathy},
  title     = {nanochat: The best ChatGPT that $100 can buy},
  year      = {2025},
  publisher = {GitHub},
  url       = {https://github.com/karpathy/nanochat}
}
- The nanochat community