Instructions to use arxyzan/zaya-4b-it with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use arxyzan/zaya-4b-it with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="arxyzan/zaya-4b-it")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("arxyzan/zaya-4b-it")
model = AutoModelForImageTextToText.from_pretrained("arxyzan/zaya-4b-it")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- llama-cpp-python
How to use arxyzan/zaya-4b-it with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="arxyzan/zaya-4b-it",
    filename="zaya-4b-it-Q4_0.gguf",
)
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}
                }
            ]
        }
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use arxyzan/zaya-4b-it with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf arxyzan/zaya-4b-it:Q4_0

# Run inference directly in the terminal:
llama-cli -hf arxyzan/zaya-4b-it:Q4_0
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf arxyzan/zaya-4b-it:Q4_0

# Run inference directly in the terminal:
llama-cli -hf arxyzan/zaya-4b-it:Q4_0
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf arxyzan/zaya-4b-it:Q4_0

# Run inference directly in the terminal:
./llama-cli -hf arxyzan/zaya-4b-it:Q4_0
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf arxyzan/zaya-4b-it:Q4_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf arxyzan/zaya-4b-it:Q4_0
Use Docker
docker model run hf.co/arxyzan/zaya-4b-it:Q4_0
- LM Studio
- Jan
- vLLM
How to use arxyzan/zaya-4b-it with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "arxyzan/zaya-4b-it"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "arxyzan/zaya-4b-it",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
Use Docker
docker model run hf.co/arxyzan/zaya-4b-it:Q4_0
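The curl call above can also be issued from Python with only the standard library. This is a minimal sketch that builds the same chat-completions payload; it assumes a vLLM server started as shown above is listening on localhost:8000 (the sending step is left commented out so it can be copied as-is).

```python
import json

# The same chat-completions request as the curl example, built in Python.
payload = {
    "model": "arxyzan/zaya-4b-it",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}
body = json.dumps(payload).encode("utf-8")

# To send the request (the server from above must be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, the same payload works unchanged against the SGLang server below (only the port differs).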
- SGLang
How to use arxyzan/zaya-4b-it with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "arxyzan/zaya-4b-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "arxyzan/zaya-4b-it",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "arxyzan/zaya-4b-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "arxyzan/zaya-4b-it",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
- Ollama
How to use arxyzan/zaya-4b-it with Ollama:
ollama run hf.co/arxyzan/zaya-4b-it:Q4_0
- Unsloth Studio
How to use arxyzan/zaya-4b-it with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for arxyzan/zaya-4b-it to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for arxyzan/zaya-4b-it to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for arxyzan/zaya-4b-it to start chatting
- Docker Model Runner
How to use arxyzan/zaya-4b-it with Docker Model Runner:
docker model run hf.co/arxyzan/zaya-4b-it:Q4_0
- Lemonade
How to use arxyzan/zaya-4b-it with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull arxyzan/zaya-4b-it:Q4_0
Run and chat with the model
lemonade run user.zaya-4b-it-Q4_0
List all available models
lemonade list
Zaya 4B Persian
Zaya is a family of lightweight models for the Persian language. This repo hosts the 4B version, based on Gemma-3 4B, which has been continually pretrained and instruction-tuned for Persian. The model is designed for high-quality Persian language understanding and generation.
Model Details
- Base Model: Gemma-3 4B
- Language: Multilingual, with a focus on Persian
- Parameters: 4.3 Billion
- Context Length: 128K
Training Procedure
Continual Pretraining:
The base Gemma-3 4B model was continually pretrained on a large-scale Persian corpus to improve its understanding of Persian language, grammar, and context.
Instruction Fine-tuning:
The model was then instruction-tuned on a curated Persian instruction dataset using QLoRA, enabling it to follow user prompts and generate helpful, context-aware responses in Persian.
SLERP Merge:
Finally, the instruction-tuned model was merged with the original Gemma-3 4B Instruct model using the SLERP (Spherical Linear Interpolation) method. This approach combines the strengths of both models, balancing general capabilities with Persian-specific instruction-following.
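The SLERP step above can be sketched as follows. This is an illustrative, pure-Python version of spherical linear interpolation applied to a pair of flattened weight vectors; a real merge operates per-tensor on full checkpoints (typically via a merging toolkit), and the interpolation factor `t` here is an assumption, not the value used for this model.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between weight vectors v0 and v1 at factor t in [0, 1]."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    # Angle between the two vectors (clamped for numerical safety).
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1 + eps)))
    theta = math.acos(cos_theta)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    # Weights follow the arc on the hypersphere instead of the chord.
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t = 0 recovers the first model's weights, t = 1 the second's;
# intermediate t blends them along the sphere.
merged = slerp(0.5, [1.0, 0.0], [0.0, 1.0])
```

Unlike plain linear averaging, SLERP preserves the magnitude structure of the weights, which is why it is a popular choice for merging a fine-tuned model back with its instruct base.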
Intended Use
- Persian language generation and understanding
- Instruction following in Persian
- Chatbots, assistants, and educational tools for Persian speakers
Note: This model is relatively small compared to other large language models, making it suitable for applications where computational resources are limited while still providing high-quality performance in Persian. The main intended use cases are information retrieval, question answering, and conversational AI; the model lacks the extensive capabilities of larger models, such as reasoning, complex task execution, or advanced problem-solving.
Usage
Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "arxyzan/zaya-4b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
messages = [
    {"role": "system", "content": "You are a helpful assistant intended for the Persian language."},
    # The user prompt asks (in Persian): "How are large language models built?"
    {"role": "user", "content": "مدل های زبانی بزرگ چطوری ساخته میشن؟"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Llama.cpp
To use this model with llama.cpp, cd to your cloned llama.cpp directory and run the following commands:
# The prompt asks (in Persian): "How can I build a rocket?"
./llama-cli -hf arxyzan/zaya-4b-it -p "چطوری میتونم یه موشک بسازم؟"
Ollama
There are two quants of this model available right in this repo: Q8_0 and Q4_0. You can use them with Ollama as follows:
# Q4_0
ollama run hf.co/arxyzan/zaya-4b-it:Q4_0
# Q8_0
ollama run hf.co/arxyzan/zaya-4b-it:Q8_0
Evaluation
Coming soon!
Limitations & Bias
- Bias: The model may exhibit biases present in the training data, which is predominantly sourced from the Persian internet and other text corpora. This can lead to biased or inappropriate responses in certain contexts.
- Hallucination: The model may generate plausible-sounding but factually incorrect or nonsensical answers. It is important to verify critical information independently.
- Safety: The model may generate harmful or sensitive content, especially if prompted inappropriately. Users should implement safety measures to mitigate this risk.
Citation
If you use this model, please cite this repository.
Reach Out
For questions or feedback, you can reach out to me via mail at arxyzan@gmail.com or through Telegram at @arxyzan.