Pre-trained models from MiniPLM: Knowledge Distillation for Pre-Training Language Models
AI & ML interests: Training efficient language models (MiniLLM, MiniPLM)
Models (10 of 50 shown)

MiniLLM/MiniLLM-gpt2-340M • Text Generation
MiniLLM/SFT-gpt2-120M • Text Generation • 0.1B
MiniLLM/SFT-gpt2-760M • Text Generation • 0.8B
MiniLLM/MiniPLM-Qwen-500M • Text Generation • 0.5B
MiniLLM/MiniPLM-llama3.1-212M • Text Generation • 0.2B
MiniLLM/MiniPLM-Mamba-130M • Text Generation • 0.1B
MiniLLM/MiniPLM-Qwen-1.2B • Text Generation • 1B
MiniLLM/Ref-Pretrain-Qwen-104M • Text Generation • 0.1B
MiniLLM/Pretrain-Qwen-1.2B • Text Generation • 1B
MiniLLM/Pretrain-Qwen-500M • Text Generation • 0.5B
Datasets (10)
MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
MiniLLM/pile-tokenized
MiniLLM/roberta-corpus-processed
MiniLLM/openwebtext-processed
MiniLLM/dolly-processed • 110k rows
MiniLLM/sinst • 8.35k rows
MiniLLM/uinst • 64.8k rows
MiniLLM/self-inst • 242 rows
MiniLLM/Vicuna • 80 rows
MiniLLM/dolly • 500 rows
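The datasets follow the same pattern. A minimal sketch, assuming each repo exposes data files readable through the standard datasets library; the dataset ID is one of the entries above, and the available splits and columns depend on the repo:

```python
# Minimal sketch: load one of the MiniLLM datasets from the Hub and inspect it.
# The dataset ID comes from the list above; splits/columns depend on the repo.
from datasets import load_dataset

ds = load_dataset("MiniLLM/dolly-processed")
print(ds)  # DatasetDict: shows splits, column names, and row counts

first_split = next(iter(ds))
print(ds[first_split][0])  # print the first example of the first split
```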