GGUF
English
How to use from Docker Model Runner

  docker model run hf.co/actionpace/Synthia-13B:Q5_1
Quick Links

Some of my own quants:

  • Synthia-13B_Q5_1.gguf

Source: migtissera

Source Model: Synthia-13B

Format: GGUF
Model size: 13B params
Architecture: llama
Quantization: 5-bit (Q5_1)
