Article: GGML and llama.cpp join HF to ensure the long-term progress of Local AI
Post: We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels, with no accuracy loss. 🤗 Train gpt-oss locally on 12.8 GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
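The authoritative recipes are the free notebooks linked above; as a rough orientation, a low-VRAM fine-tune of gpt-oss with Unsloth typically follows the load-in-4-bit, attach-LoRA, then train pattern. The sketch below assumes a recent Unsloth and trl install; the model id ("unsloth/gpt-oss-20b"), LoRA hyperparameters, and toy dataset are illustrative assumptions, not the exact configuration from the notebooks.

```python
# Minimal sketch (not the official Unsloth notebook): QLoRA-style fine-tune of
# gpt-oss on a single consumer GPU. Model id and hyperparameters are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import Dataset

# Load the model in 4-bit to keep VRAM usage low (assumed model id; check the
# Unsloth Hub page for the ids the notebooks actually use).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_gradient_checkpointing="unsloth",
)

# Tiny in-memory dataset purely for illustration.
dataset = Dataset.from_dict(
    {"text": ["### Instruction:\nSay hi.\n\n### Response:\nHi there!"] * 32}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```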
Article: Training Design for Text-to-Image Models: Lessons from Ablations
Article: Unlocking Agentic RL Training for GPT-OSS: A Practical Retrospective
Space: The Smol Training Playbook 📚 - The secrets to building world-class LLMs