Does this mean that we will get GGUF quants of models as they release, or at least out-of-the-box GGUF support for new models in the future?
rombodawg
commented on the article:
GGML and llama.cpp join HF to ensure the long-term progress of Local AI