---
language: en
license: apache-2.0
tags:
- zen
- zenlm
- hanzo
- zen3
- text-to-image
- diffusion
pipeline_tag: text-to-image
library_name: diffusers
---
# Zen3 Image Dev
Developer variant of Zen3 Image for research and fine-tuning workflows.
## Overview
Zen3 Image Dev is built on the Zen MoDE (Mixture of Distilled Experts) architecture with 12B parameters. It is developed by Hanzo AI and the Zoo Labs Foundation.
## Quick Start

```python
from diffusers import AutoPipelineForText2Image
import torch

model_id = "zenlm/zen3-image-dev"

# Load the pipeline in bfloat16 and move it to the GPU
pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

image = pipe("A serene mountain landscape at sunset, photorealistic").images[0]
image.save("output.png")
```
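For reproducible outputs, you can pass a seeded generator to the pipeline call — a minimal sketch, assuming this pipeline accepts the standard diffusers `generator` keyword like other text-to-image pipelines:

```python
import torch

# Seeded RNG for deterministic sampling (standard diffusers pattern;
# assumption: this pipeline accepts the `generator` keyword).
generator = torch.Generator(device="cpu").manual_seed(42)

# Reusing the same seed reproduces the same image for a given prompt:
# image = pipe("A serene mountain landscape at sunset", generator=generator).images[0]
print(generator.initial_seed())
```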
## API Access

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.hanzo.ai/v1", api_key="your-api-key")

response = client.images.generate(
    model="zen3-image-dev",
    prompt="A serene mountain landscape at sunset",
    size="1024x1024",
)
print(response.data[0].url)
```
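If the endpoint also supports `response_format="b64_json"` (an assumption — OpenAI-compatible image APIs commonly do), you can save the image directly without a second HTTP request to the returned URL:

```python
import base64

def save_b64_png(b64_data: str, path: str) -> None:
    """Decode a base64-encoded image payload and write it to disk."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_data))

# Hypothetical usage with a b64_json response:
# save_b64_png(response.data[0].b64_json, "output.png")
```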
## Model Details
| Attribute | Value |
|---|---|
| Parameters | 12B |
| Architecture | Zen MoDE |
| Max Resolution | 1024px |
| License | Apache 2.0 |
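As a rough sizing check (weights only — the text encoder, activations, and CUDA overhead add more on top), 12B parameters in bfloat16 work out to a bit over 22 GiB:

```python
# Back-of-envelope VRAM estimate for the 12B weights in bfloat16 (2 bytes/param).
params = 12e9
bytes_per_param = 2  # bfloat16
weights_gib = params * bytes_per_param / 1024**3
print(f"{weights_gib:.1f} GiB")  # ≈ 22.4 GiB
```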
## License
Apache 2.0