Qwen3.5-4B-Python-Coder-GGUF

Available Quantizations

The following quantization formats are available in this repository:

  • Q3_K_M: Smallest file, most heavily quantized. Suitable for very low-RAM environments, but with a significant loss in coding accuracy.
  • Q4_K_M: Recommended baseline. Excellent balance of file size, memory usage, and coding performance.
  • Q5_K_M: Higher accuracy than Q4_K_M at a slightly larger file size.
  • Q6_K: Very close to the unquantized model's performance, if you have the RAM for it.
  • Q8_0: Near-zero quality loss compared to the original 16-bit model, but the largest file size and highest memory requirement.
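
To download just one quantization instead of cloning the whole repository, you can fetch a single file by name with huggingface-cli (from the huggingface_hub package). This is a minimal sketch assuming the filenames follow the Qwen3.5-4B-Python-Coder-<QUANT>.gguf pattern used in the run example below; swap the Q4_K_M suffix for the quantization you want:

pip install -U huggingface_hub
huggingface-cli download Abhiray/Qwen3.5-4B-Python-Coder-GGUF Qwen3.5-4B-Python-Coder-Q4_K_M.gguf --local-dir .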

How to Run

You can run these models locally using llama.cpp or compatible interfaces like LM Studio, Ollama, or text-generation-webui.

Example using llama.cpp in the terminal (older llama.cpp builds ship the binary as ./main; recent releases rename it to llama-cli):

./main -m Qwen3.5-4B-Python-Coder-Q4_K_M.gguf -n 512 --color -i -cml -p "<|im_start|>user\nWrite a Python script to scrape a website.<|im_end|>\n<|im_start|>assistant\n"

Here -n 512 caps generation at 512 tokens, -i keeps the session interactive, and -cml applies the ChatML prompt format that Qwen models expect.
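
If you prefer Ollama, a minimal sketch is to point a Modelfile at the downloaded GGUF and register it locally (qwen-python-coder is just a name of our choosing; Ollama usually picks the chat template up from the GGUF metadata). Put this single line in a file named Modelfile:

FROM ./Qwen3.5-4B-Python-Coder-Q4_K_M.gguf

Then create and run the model:

ollama create qwen-python-coder -f Modelfile
ollama run qwen-python-coder "Write a Python script to scrape a website."

For an OpenAI-compatible HTTP API, recent llama.cpp releases also bundle llama-server, which listens on port 8080 by default:

./llama-server -m Qwen3.5-4B-Python-Coder-Q4_K_M.gguf -c 4096

curl http://127.0.0.1:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Write a Python function that reverses a string."}]}'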

Model Tree

This repository contains GGUF quantizations of Qwen3.5-4B-Python-Coder, a Python-focused finetune of Qwen/Qwen3.5-4B.