Instructions to use inclusionAI/ZwZ-4B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use inclusionAI/ZwZ-4B with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="inclusionAI/ZwZ-4B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("inclusionAI/ZwZ-4B")
model = AutoModelForImageTextToText.from_pretrained("inclusionAI/ZwZ-4B")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use inclusionAI/ZwZ-4B with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "inclusionAI/ZwZ-4B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "inclusionAI/ZwZ-4B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/inclusionAI/ZwZ-4B
```
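The same request can be issued from Python with only the standard library. The sketch below mirrors the curl call above; it assumes a vLLM server is already listening on localhost:8000 (the `ask` helper is illustrative, not part of any library):

```python
# Build the OpenAI-compatible chat payload and POST it to the vLLM server.
import json
import urllib.request

payload = {
    "model": "inclusionAI/ZwZ-4B",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
}

def ask(base_url: str = "http://localhost:8000") -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Only the payload is built eagerly; the HTTP call is wrapped in a function so the snippet can be inspected without a running server.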
- SGLang
How to use inclusionAI/ZwZ-4B with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "inclusionAI/ZwZ-4B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "inclusionAI/ZwZ-4B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "inclusionAI/ZwZ-4B" \
    --host 0.0.0.0 \
    --port 30000
# The dockerized server is then called with the same curl request shown above.
```

- Docker Model Runner
How to use inclusionAI/ZwZ-4B with Docker Model Runner:

```shell
docker model run hf.co/inclusionAI/ZwZ-4B
```
ZwZ-4B
📃 Paper | 🏠 Project | 🤗 Collection
Model Summary
ZwZ-4B is a fine-grained multimodal perception model built upon Qwen3-VL-4B. It is trained using Region-to-Image Distillation (R2I) combined with reinforcement learning, enabling superior fine-grained visual understanding in a single forward pass — no inference-time zooming or tool calling required. ZwZ-4B achieves state-of-the-art performance on fine-grained perception benchmarks among open-source models of comparable size.
Column groups: General Perception (ZoomBench through MME-RW-cn, with GP-Avg their average), Specific Perception (CountQA, ColorB.), OOD Generalization (MMStar, BabyVision), and overall Avg.

| Model | ZoomBench | HR-4K | HR-8K | VStar | CV-B. | MME-RW-en | MME-RW-cn | GP-Avg | CountQA | ColorB. | MMStar | BabyVision | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Closed-Source Models** | | | | | | | | | | | | | |
| GPT-5.1 | 47.22 | 67.00 | 65.25 | 70.16 | 84.22 | 64.04 | 55.57 | 64.78 | 31.41 | 83.43 | 71.60 | 13.92 | 59.44 |
| Gemini-3-Flash | 59.29 | 87.88 | 85.00 | 86.39 | 89.57 | 74.86 | 72.62 | 79.37 | 66.88 | 85.47 | 83.60 | 34.51 | 75.10 |
| **Open-Source Models** | | | | | | | | | | | | | |
| Qwen3-VL-2B | 41.30 | 71.75 | 70.12 | 72.77 | 78.94 | 59.52 | 60.77 | 65.02 | 22.19 | 76.86 | 60.4 | 12.11 | 56.98 |
| Qwen3-VL-4B | 40.24 | 78.25 | 72.88 | 80.10 | 84.95 | 63.47 | 63.63 | 69.07 | 28.14 | 81.63 | 69.73 | 13.66 | 61.52 |
| Qwen2.5-VL-7B | 42.49 | 71.62 | 67.88 | 78.53 | 75.34 | 60.80 | 58.30 | 64.99 | 18.91 | 76.36 | 61.93 | 12.89 | 56.82 |
| Qwen3-VL-8B | 37.87 | 78.88 | 74.63 | 86.39 | 85.44 | 65.96 | 66.67 | 70.83 | 28.99 | 82.77 | 70.93 | 12.89 | 62.86 |
| MiMo-VL-7B-RL | 45.09 | 74.38 | 72.88 | 81.15 | 84.31 | 63.40 | 59.78 | 68.71 | 28.27 | 82.80 | 73.53 | 16.24 | 61.98 |
| MiniCPM-V-4.5 (9B) | 42.60 | 69.88 | 63.62 | 70.16 | 80.25 | 58.16 | 56.23 | 62.99 | 23.43 | 79.75 | 67.87 | 14.95 | 56.99 |
| GLM-4.5V (108B) | 49.23 | 81.63 | 74.88 | 83.25 | 87.59 | 66.04 | 60.71 | 71.90 | 35.93 | 84.59 | 75.87 | 15.72 | 65.04 |
| Qwen3-VL-235B-A22B | 49.11 | 84.50 | 81.62 | 87.96 | 86.72 | 67.07 | 65.29 | 74.61 | 40.58 | 85.62 | 76.33 | 18.30 | 67.55 |
| Kimi-K2.5 (1T) | 56.33 | 81.87 | 75.38 | 85.86 | 89.18 | 71.51 | 68.40 | 75.50 | 52.81 | 86.61 | 81.80 | 33.25 | 71.18 |
| **Our Models** | | | | | | | | | | | | | |
| ZwZ-2B (Ours) | 53.49 | 77.00 | 75.38 | 82.72 | 83.36 | 65.61 | 65.39 | 71.85 | 21.60 | 79.37 | 63.40 | 17.78 | 62.28 |
| ZwZ-4B (Ours) | 55.74 | 81.75 | 79.50 | 92.67 | 87.90 | 68.52 | 68.09 | 76.31 | 30.82 | 83.08 | 71.13 | 16.24 | 66.86 |
| ZwZ-7B (Ours) | 55.62 | 75.38 | 73.25 | 88.48 | 79.83 | 66.21 | 66.96 | 72.25 | 20.72 | 80.82 | 63.40 | 15.98 | 62.42 |
| ZwZ-8B (Ours) | 58.11 | 84.38 | 82.00 | 91.10 | 87.40 | 69.87 | 70.59 | 77.64 | 32.40 | 83.59 | 73.13 | 16.75 | 68.12 |
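The aggregate columns can be reproduced from the per-benchmark scores: GP-Avg appears to be the mean of the seven General Perception benchmarks, and Avg the mean of all eleven individual benchmarks. A quick check against the GPT-5.1 row:

```python
# Reproduce the aggregate columns from the GPT-5.1 row of the table.
general = [47.22, 67.00, 65.25, 70.16, 84.22, 64.04, 55.57]  # ZoomBench..MME-RW-cn
specific_and_ood = [31.41, 83.43, 71.60, 13.92]  # CountQA, ColorB., MMStar, BabyVision

gp_avg = round(sum(general) / len(general), 2)
overall = round(sum(general + specific_and_ood) / 11, 2)
print(gp_avg, overall)  # 64.78 59.44, matching the reported row
```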
Key Features
- ⚡ Single-Pass Efficiency: Achieves fine-grained perception in one forward pass, eliminating inference-time tool-calling overhead
- 🎯 Superior Accuracy: State-of-the-art on perception benchmarks among open-source models
- 📈 Broad Improvements: Enhances not only perception benchmarks but also out-of-distribution generalization on visual reasoning, GUI agent, and AIGC detection
How It Works
Traditional "Thinking-with-Images" methods zoom into regions of interest during inference, incurring high latency from repeated tool calls and visual re-encoding. ZwZ transforms zooming from an inference-time tool into a training-time primitive:
- Zoom in to micro-cropped regions and let strong teacher models (Qwen3-VL-235B, GLM-4.5V) generate high-quality VQA data
- Distill this region-grounded supervision back to the full image with explicit bounding-box overlays
- Reinforce via RL training to enable single-glance fine-grained perception without tool use
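The three steps above can be sketched in code. Everything below is illustrative, not the authors' implementation: it assumes Pillow is installed, `query_teacher` stands in for a call to a teacher model such as Qwen3-VL-235B, and the field names and prompt wording are hypothetical:

```python
# Illustrative sketch of Region-to-Image (R2I) data generation (assumes Pillow).
from PIL import Image, ImageDraw

def r2i_sample(full_image, box, query_teacher):
    # 1) Zoom in: crop the micro-region and let a strong teacher write a
    #    QA pair about it, where fine detail is easy to see.
    crop = full_image.crop(box)
    question, answer = query_teacher(crop)

    # 2) Distill back to the full image: overlay the bounding box explicitly,
    #    grounding the question on the original image without any zooming.
    annotated = full_image.copy()
    ImageDraw.Draw(annotated).rectangle(box, outline="red", width=3)
    grounded = f"Look at the region in the red box. {question}"

    # 3) The (image, question, answer) triple then feeds RL training toward
    #    single-glance fine-grained perception.
    return {"image": annotated, "question": grounded, "answer": answer}

# Toy usage with a dummy teacher in place of a real VLM:
img = Image.new("RGB", (640, 480), "white")
sample = r2i_sample(img, (100, 100, 200, 200),
                    lambda crop: ("What color fills this region?", "white"))
```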
Quickstart
Installation

```shell
pip install transformers accelerate torch
```
Inference
```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor

# Default: load the model on the available device(s)
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "inclusionAI/ZwZ-4B", dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("inclusionAI/ZwZ-4B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Inference: generate the output, then strip the prompt tokens before decoding
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
Training Data
ZwZ-4B is trained on inclusionAI/ZwZ-RL-VQA, a 74K-sample Region-to-Image distilled VQA dataset synthesized from diverse image pools (SA-1B, LAION, MetaCLIP, Visual Genome, CC12M, STPLS3D).
Citation
```bibtex
@article{wei2026zooming,
  title={Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception},
  author={Wei, Lai and He, Liangbo and Lan, Jun and Dong, Lingzhong and Cai, Yutong and Li, Siyuan and Zhu, Huijia and Wang, Weiqiang and Kong, Linghe and Wang, Yue and Zhang, Zhuosheng and Huang, Weiran},
  journal={arXiv preprint arXiv:2602.11858},
  year={2026}
}
```
License
This model follows the license of Qwen3-VL-4B. Please refer to the base model's license for usage terms.