SmolVLM is now available on PocketPal: you can run it offline on your smartphone to interpret the world around you. And check out this real-time camera demo by @ngxson, powered by llama.cpp:
https://github.com/ngxson/smolvlm-realtime-webcam
https://x.com/pocketpal_ai
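The webcam demo works by sending camera frames to a locally running llama.cpp server and asking the model to describe each one. A minimal sketch of that request flow, assuming llama-server's OpenAI-compatible chat endpoint on its default port; the prompt, model name, and helper names here are illustrative, not taken from the demo's source:

```python
import base64
import json
from urllib import request


def build_vision_payload(image_bytes: bytes, prompt: str, model: str = "SmolVLM") -> dict:
    # OpenAI-style chat payload carrying the frame as a base64 data URI,
    # the format accepted by multimodal-capable chat completion endpoints.
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }


def describe_frame(image_bytes: bytes,
                   endpoint: str = "http://localhost:8080/v1/chat/completions") -> str:
    # POST one frame to the (assumed) local llama-server and return the model's text.
    payload = build_vision_payload(image_bytes, "What do you see in this frame?")
    req = request.Request(endpoint, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The browser demo does the same thing in JavaScript on a timer, which is why a small vision model like SmolVLM can feel real-time on consumer hardware.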
The Smol Training Playbook: the secrets to building world-class LLMs
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B · Text Generation · Updated Feb 24, 2025 · 1.32M · 1.45k
A few days ago, Thinking Machines Lab released "LoRA Without Regret", showing that LoRA can match full fine-tuning performance when configured right. Naturally, we decided to reproduce the results with TRL and release a guide!
https://huggingface.co/docs/trl/main/en/lora_without_regret
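The core idea LoRA relies on: instead of updating the full weight matrix W, train a low-rank pair (A, B) and merge them back as W' = W + (alpha / r) * (B @ A). A minimal numeric sketch in plain Python; the shapes and values are illustrative and not from the TRL guide:

```python
# Plain-Python matrix multiply, enough for a tiny worked example.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]


def lora_merge(W, A, B, alpha, r):
    # LoRA merge: W' = W + (alpha / r) * (B @ A), where only A and B
    # (rank r) were trained and the base weight W stayed frozen.
    scale = alpha / r
    delta = matmul(B, A)  # (d_out x r) @ (r x d_in) -> d_out x d_in
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]


# Frozen 2x2 base weight, rank-1 adapter pair.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]          # r x d_in  = 1 x 2
B = [[0.5], [0.25]]       # d_out x r = 2 x 1
W_merged = lora_merge(W, A, B, alpha=2, r=1)
```

For a 2x2 layer this saves nothing, but for a d x d weight the adapter pair costs 2*d*r parameters instead of d*d, which is why rank and the alpha/r scaling are the knobs the "configured right" result hinges on.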
Toward Efficient Agents: Memory, Tool Learning, and Planning · Paper · 2601.14192 · Published Jan 20 · 54
DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF · Text Generation · 30B · Updated 25 days ago · 90.2k · 230
Uncensored, Heretic GGUF quants of GLM 4.7 (30B-A3B), built with the correct llama.cpp version and all updates; NEO-CODE Imatrix with 16-bit output tensors. Also includes specialized quants (balanced for this model); all quants use the NEO-CODE Imatrix with 16-bit output tensors. DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF
Regular, non-heretic quants (also 16-bit output tensors, NEO-CODE Imatrix, and specialized): DavidAU/GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF
Article: Atlaset Dataset for Moroccan Darija: From Data Collection, Analysis, to Model Trainings · Mar 6, 2025 · 27