- tencent/Youtu-VL-4B-Instruct-GGUF
  Image-Text-to-Text • 5B • Updated • 458 • 61
- Qwen/Qwen3-VL-8B-Instruct-GGUF
  Image-Text-to-Text • 8B • Updated • 36.1k • 87
- unsloth/Qwen3-VL-8B-Instruct-GGUF
  Image-Text-to-Text • 8B • Updated • 49.9k • 43
- unsloth/Qwen3-VL-8B-Thinking-1M-GGUF
  Image-Text-to-Text • 8B • Updated • 2.82k • 16
nkaushik
AI & ML interests
None yet
Recent Activity
reacted to danielhanchen's post with 🔥 about 20 hours ago
We collaborated with NVIDIA to teach you how we made LLM training ~25% faster!
Learn how 3 optimizations help your home GPU train models faster:
1. Packed-sequence metadata caching
2. Double-buffered checkpoint reloads
3. Faster MoE routing
Guide: https://unsloth.ai/blog/nvidia-collab
GitHub: https://github.com/unslothai/unsloth
updated a collection about 1 month ago
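The first optimization in the post, packed-sequence metadata caching, can be illustrated with a minimal sketch: when variable-length sequences are packed into one batch, attention kernels need cumulative-offset metadata, and if the same length pattern recurs across steps, that host-side computation can be memoized instead of redone. The function name and return shape below are hypothetical, not Unsloth's actual API.

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def packed_metadata(seq_lens: tuple) -> tuple:
    """Compute (and cache) the cumulative offsets that packed-sequence
    attention needs, keyed on the tuple of sequence lengths. Repeated
    length patterns across training steps hit the cache instead of
    recomputing on every step."""
    cu_seqlens = [0]
    for n in seq_lens:
        cu_seqlens.append(cu_seqlens[-1] + n)
    return tuple(cu_seqlens), max(seq_lens)

# Usage: three packed sequences of lengths 3, 5, and 2.
offsets, max_len = packed_metadata((3, 5, 2))  # offsets (0, 3, 8, 10), max_len 5
```

The cache key must be hashable, which is why lengths are passed as a tuple rather than a list.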
Backlog to try
liked a model about 1 month ago
UsefulSensors/moonshine-tiny
Organizations
None yet