Q-SS-0.5B-Reasoning-Math-GGUF

The same structured math reasoning model — quantized and ready for instant local CPU inference.

Q-SS-0.5B-Reasoning-Math-GGUF is the quantized GGUF version of Q-SS-0.5B-Reasoning-Math, a fine-tune of Qwen/Qwen2.5-0.5B-Instruct trained with GRPO reinforcement learning on mathematical reasoning tasks. At just ~300MB with Q4_K_M quantization, it runs instantly on any CPU with no GPU required.

🔧 Want the full precision model for GPU or fine-tuning? See Q-SS-0.5B-Reasoning-Math.


✨ Highlights

  • ⚡ Instant CPU inference — ~300MB Q4_K_M, runs on any machine
  • 🧠 Thinks out loud — explicit step-by-step reasoning inside <thought> tags
  • 🎯 Clean structured output — final answer always isolated in <answer> tags
  • 🖥️ No GPU required — perfect for local, offline, and edge deployments
  • 🔓 Apache 2.0 — free for personal and commercial use

📋 Model Details

| Property | Details |
|---|---|
| Model Name | Q-SS-0.5B-Reasoning-Math-GGUF |
| Base Model | Qwen/Qwen2.5-0.5B-Instruct |
| Parameters | 500M |
| Quantization | Q4_K_M |
| File Size | ~300MB |
| Training Method | SFT Warm-up + GRPO Reinforcement Learning |
| Trained On | GSM8K + OpenR1-Math-220k |
| License | Apache 2.0 |
| Developer | Saad Salman |

💬 Output Format

Every response follows this strict structure:

<thought>
[Step-by-step reasoning and calculations]
</thought>
<answer>
[Final numerical answer only]
</answer>
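Because the tags are fixed, the final answer can be extracted mechanically. A minimal sketch in Python (the `extract_answer` helper is illustrative, not part of any shipped API):

```python
import re

def extract_answer(text: str) -> str:
    """Return the content of the <answer> tag, or the raw text as a fallback."""
    match = re.search(r"<answer>\s*(.*?)\s*</answer>", text, flags=re.DOTALL)
    return match.group(1) if match else text.strip()

raw = "<thought>\n3 × 2 = 6 cans per day; 6 × 7 = 42.\n</thought>\n<answer>\n42\n</answer>"
print(extract_answer(raw))  # → 42
```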

🚀 Quick Start

llama.cpp

# Download the model
huggingface-cli download saadxsalman/Q-SS-0.5B-Reasoning-Math-GGUF \
    --local-dir ./Q-SS-0.5B-Reasoning-Math-GGUF

# Run inference
./llama-cli \
    -m Q-SS-0.5B-Reasoning-Math-GGUF/model-q4_k_m.gguf \
    --temp 0.1 \
    -n 384 \
    -p "You are a mathematical reasoning engine. Solve the problem step-by-step inside <thought> tags, then give ONLY the final answer inside <answer> tags.\n\nProblem: Janet has 3 cats. Each cat eats 2 cans per day. How many cans for 7 days?"

Ollama

ollama run hf.co/saadxsalman/Q-SS-0.5B-Reasoning-Math-GGUF
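Once the model is pulled, Ollama also serves a local REST API (by default at http://localhost:11434). A minimal sketch using only the standard library; the `build_request` and `solve` helpers are illustrative, and assume an Ollama server is running with this model available:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "hf.co/saadxsalman/Q-SS-0.5B-Reasoning-Math-GGUF"

def build_request(problem: str) -> bytes:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    body = {
        "model": MODEL,
        "prompt": problem,
        "stream": False,  # return the full response as one JSON object
        "options": {"temperature": 0.1, "num_predict": 384},
    }
    return json.dumps(body).encode("utf-8")

def solve(problem: str) -> str:
    """POST the problem to the local Ollama server and return its response text."""
    req = request.Request(OLLAMA_URL, data=build_request(problem),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```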

Python with llama-cpp-python

from llama_cpp import Llama

llm = Llama(
    model_path = "./Q-SS-0.5B-Reasoning-Math-GGUF/model-q4_k_m.gguf",
    n_ctx      = 2048,
    n_threads  = 4,
)

SYSTEM_PROMPT = """You are a mathematical reasoning engine.
Solve the problem step-by-step inside <thought> tags, then give ONLY the
final numerical or LaTeX result inside <answer> tags.

<thought>
[Your internal reasoning and calculations here]
</thought>
<answer>
[Final answer only]
</answer>"""

def solve(problem):
    response = llm.create_chat_completion(
        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",   "content": problem},
        ],
        max_tokens  = 384,
        temperature = 0.1,
    )
    answer = response["choices"][0]["message"]["content"]
    if "<answer>" in answer:
        return answer.split("<answer>")[-1].split("</answer>")[0].strip()
    return answer

print(solve("Janet has 3 cats. Each cat eats 2 cans of food per day. How many cans does she need for 7 days?"))
# Output: 42

📝 Example Outputs

Problem: Janet has 3 cats. Each cat eats 2 cans of food per day. How many cans does she need for 7 days?

<thought>
Each cat eats 2 cans per day.
Janet has 3 cats, so they eat 3 × 2 = 6 cans per day together.
For 7 days: 6 × 7 = 42 cans total.
</thought>
<answer>
42
</answer>

Problem: Tom has $50. He buys a book for $12 and a pen for $3. How much money does he have left?

<thought>
Tom starts with $50.
He spends $12 on a book and $3 on a pen.
Total spent: 12 + 3 = $15.
Money remaining: 50 - 15 = $35.
</thought>
<answer>
35
</answer>

✅ What It's Good At

| Problem Type | Support |
|---|---|
| Basic arithmetic | ✅ Reliable |
| Multi-step word problems | ✅ Reliable |
| Problems with units and currency | ✅ Reliable |
| Basic algebra | ⚠️ Partial |
| Competition math (AMC/AIME) | ❌ Beyond capacity |
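A quick way to verify these support levels on your own problems is a small accuracy loop. The sketch below is hypothetical: `solve` stands in for a real model call (like the Quick Start function) and is stubbed here with canned answers for illustration:

```python
def accuracy(solve, cases):
    """Fraction of (problem, expected) pairs the solver answers correctly."""
    correct = sum(1 for problem, expected in cases
                  if solve(problem).strip() == expected)
    return correct / len(cases)

# Stub solver with canned answers, standing in for the real model call.
CANNED = {
    "3 cats x 2 cans x 7 days?": "42",
    "$50 - $12 - $3?": "35",
}
stub_solve = lambda p: CANNED.get(p, "?")

cases = [("3 cats x 2 cans x 7 days?", "42"), ("$50 - $12 - $3?", "35")]
print(accuracy(stub_solve, cases))  # → 1.0
```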

🖥️ Performance on CPU

| Hardware | Estimated Speed |
|---|---|
| Modern laptop (8-core) | ~5–10 tokens/sec |
| Desktop (16-core) | ~15–20 tokens/sec |
| Apple Silicon (M1/M2/M3) | ~20–30 tokens/sec |
| Raspberry Pi 4 | ~1–2 tokens/sec |

Speeds are approximate and depend on system load and memory bandwidth.
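To check throughput on your own machine, you can time generation directly. A minimal sketch; the `tokens_per_second` helper is illustrative, and with llama-cpp-python the token count would come from `response["usage"]["completion_tokens"]`:

```python
import time

def tokens_per_second(generate, *, runs: int = 1) -> float:
    """Time a generation callable that returns the number of tokens it produced."""
    total_tokens, total_time = 0, 0.0
    for _ in range(runs):
        start = time.perf_counter()
        n_tokens = generate()
        total_time += time.perf_counter() - start
        total_tokens += n_tokens
    return total_tokens / total_time

# Dummy generator standing in for a real model call; with llama-cpp-python,
# return response["usage"]["completion_tokens"] from a completion instead.
def dummy_generate() -> int:
    time.sleep(0.05)  # pretend generation takes 50 ms
    return 10         # and yields 10 tokens

rate = tokens_per_second(dummy_generate, runs=2)
```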


📦 Related Models

| Repo | Format | Size | Best For |
|---|---|---|---|
| Q-SS-0.5B-Reasoning-Math | FP16 | ~988MB | GPU inference & further fine-tuning |
| Q-SS-0.5B-Reasoning-Math-GGUF | Q4_K_M | ~300MB | Local CPU inference |

⚠️ Limitations

  • Optimized for English language math problems only
  • Complex abstract reasoning, geometry, and calculus are beyond reliable capacity at 0.5B scale
  • Q4_K_M quantization introduces minor precision loss vs full FP16 — negligible for most use cases
  • Always verify critical calculations — the model may occasionally produce confident but incorrect answers

📄 Citation

@misc{qss-reasoning-math-gguf-2025,
  author       = {Saad Salman},
  title        = {Q-SS-0.5B-Reasoning-Math-GGUF},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/saadxsalman/Q-SS-0.5B-Reasoning-Math-GGUF}},
}