# Zaya 3B Persian

Zaya is a family of lightweight models intended for the Persian language. This repo hosts the 3B version, based on Llama 3.2 3B, which has been continually pretrained and instruction-tuned for Persian. The model is designed for high-quality Persian language understanding and generation.
## Model Details
- Base Model: Llama 3.2 3B Instruct
- Language: Multilingual with a Persian focus
- Parameters: ~3.2 Billion
- Context Length: 128K tokens
## Training Procedure

1. Continual Pretraining: The base Llama 3.2 3B Instruct checkpoint is continually pretrained on a curated Persian corpus to boost grammar, idiomatic usage, and contextual grounding specific to the language.
2. Instruction Fine-tuning: Using QLoRA, the continually pretrained checkpoint is instruction-tuned on a high-quality Persian instruction dataset, enabling reliable conversational behavior and task following (a configuration sketch follows this list).
3. SLERP Merge: The instruction-tuned adapter is merged back with the original Llama 3.2 3B Instruct weights via SLERP (Spherical Linear Interpolation), balancing the strong general reasoning of the base model with the Persian-specialized behaviors learned during fine-tuning (a minimal interpolation sketch follows this list).
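
The exact training hyperparameters are not published here, but as a rough guide, a QLoRA setup with transformers and peft looks something like the sketch below. The rank, alpha, target modules, and base-model repo id are illustrative assumptions, not the values used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters to the attention projections.
# r, lora_alpha, and target_modules are assumptions for illustration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

Likewise, for intuition about the merge step, here is a minimal SLERP sketch that treats each pair of weight tensors as flat vectors and falls back to linear interpolation when they are nearly parallel. The interpolation factor `t` is an assumption, not the value used for this model.

```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two same-shape weight tensors."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two tensors, clamped for numerical stability.
    cos_omega = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0 + eps, 1.0 - eps))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel tensors: SLERP degenerates to linear interpolation.
        return (1 - t) * w0 + t * w1
    out = (torch.sin((1 - t) * omega) / sin_omega) * v0 + (torch.sin(t * omega) / sin_omega) * v1
    return out.reshape(w0.shape).to(w0.dtype)

# e.g. merged = {name: slerp(base_sd[name], tuned_sd[name], t=0.5) for name in base_sd}
```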
## Intended Use
- Persian language understanding and generation
- Instruction following, chatbots, and classroom assistants in Persian
- Knowledge-grounded question answering and information retrieval applications
Note: While performant for its size, this 3B-class model is not meant to replace larger systems for advanced reasoning or long-horizon planning. It shines where quick, high-quality Persian responses are needed under tight compute budgets.
## Usage

### Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "arxyzan/zaya-3b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {"role": "system", "content": "You are a helpful assistant intended for the Persian language."},
    # "How are large language models trained?"
    {"role": "user", "content": "مدل‌های زبانی بزرگ چگونه آموزش داده می‌شوند؟"},
]

# return_dict=True is needed so the result can be unpacked into generate() below.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
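
For interactive use, the same setup can stream tokens as they are generated; below is a small sketch reusing `model`, `tokenizer`, and `inputs` from above. The sampling parameters are illustrative, not recommended defaults.

```python
from transformers import TextStreamer

# Print tokens to stdout as they are produced, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,     # illustrative sampling settings
    temperature=0.7,
    top_p=0.9,
    streamer=streamer,
)
```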
### Llama.cpp

To run the instruct checkpoint with llama.cpp, build the project (or install a prebuilt release) and run:

```bash
# Prompt: "Give a summary of Ferdowsi's Shahnameh."
./llama-cli -hf arxyzan/zaya-3b-it -p "یک خلاصه از شاهنامه فردوسی بده."
```
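
llama.cpp also ships an OpenAI-compatible HTTP server; assuming the same build, something along these lines should serve the model locally:

```bash
# Serve the model with an OpenAI-compatible API on the default port (8080).
./llama-server -hf arxyzan/zaya-3b-it
```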
### Ollama

Quantized builds (Q4_0 and Q8_0) are shipped directly in this repository for quick use with Ollama:

```bash
# Q4_0
ollama run hf.co/arxyzan/zaya-3b-it:Q4_0

# Q8_0
ollama run hf.co/arxyzan/zaya-3b-it:Q8_0
```
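
Once pulled, the model can also be queried through Ollama's local HTTP API (it listens on port 11434 by default); a minimal sketch:

```bash
# Ask the running Ollama instance a question via its chat endpoint.
# Prompt: "Hello! Introduce yourself."
curl http://localhost:11434/api/chat -d '{
  "model": "hf.co/arxyzan/zaya-3b-it:Q4_0",
  "messages": [{"role": "user", "content": "سلام! خودت را معرفی کن."}],
  "stream": false
}'
```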
## Evaluation
Coming soon! Benchmarks across Persian instruction-following suites and multilingual safety evals will be published here once finalized.
## Limitations & Bias
- Bias: Training data drawn from Persian web sources may carry societal or regional biases, which the model can reproduce.
- Hallucination: Despite continual pretraining, the model can output confident but incorrect statements. Always verify critical answers.
- Safety: Without an external guardrail, the model may emit harmful or sensitive content when prompted. Downstream deployments should include moderation layers.
## Citation
If you use this model, please cite this repository.
## Reach Out
Questions, feedback, or deployment stories are welcome at arxyzan@gmail.com or on Telegram via @arxyzan.