https://huggingface.co/ConicCat/GLM-4.7-Architect-355B-A32B
One of the only actual fine-tunes of GLM 4.7. If doing the full quantization is too burdensome, just the imatrix.gguf would go a LONG way. Thank you!
Already seems to be quanted.
Is there an orphaned repo I've missed?
It could also be that, since I just got access to queuing, I still can't see the reason when a quant fails; I get the same error whether it's quanted or failed, sorry.
(Or it's already queued.)
Hi @schonsense
It was queued a while back, but we are a bit short on space at the moment with big models, so it's currently locked in the queue. We will try to quant it as soon as possible, and hopefully we'll manage to push it into the queue tomorrow =)
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#GLM-4.7-Architect-355B-A32B-GGUF for quants to appear.
No worries, you guys do your thing.