Instructions to use RunDiffusion/Juggernaut-Z-Image with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Diffusers
How to use RunDiffusion/Juggernaut-Z-Image with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-Z-Image",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee
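The Diffusers snippet above notes switching to `"mps"` on Apple devices. As a minimal sketch, that device choice can be factored into a helper; `pick_device` is a hypothetical name, and in real code you would pass in `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Choose the device string for device_map / .to().

    Hypothetical helper: feed it torch.cuda.is_available() and
    torch.backends.mps.is_available() in real code.
    """
    if cuda_available:
        return "cuda"  # NVIDIA GPU
    if mps_available:
        return "mps"   # Apple Silicon (Metal)
    return "cpu"       # slow fallback for a large diffusion model

print(pick_device(False, True))  # → mps
```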
Update diffusers usage docs: model is at repo root, drop subfolder= arg
README.md CHANGED

````diff
@@ -131,13 +131,13 @@ Cleaner structural lines and more coherent material rendering.
 | `Juggernaut_Z_V1_by_RunDiffusion_q5_k_s-005.gguf` | GGUF · q5_k_s | |
 | `Juggernaut_Z_V1_by_RunDiffusion_q4_k_m-002.gguf` | GGUF · q4_k_m | |
 | `Juggernaut_Z_V1_by_RunDiffusion_q4_k_s-001.gguf` | GGUF · q4_k_s | Smallest footprint |
-| `…
+| `model_index.json` + `transformer/`, `text_encoder/`, `tokenizer/`, `vae/`, `scheduler/` | 🤗 Diffusers format | Loaded by `DiffusionPipeline.from_pretrained("RunDiffusion/Juggernaut-Z-Image")` |
 
-Use the `.safetensors` variants with the workflow that matches your local inference stack. Use the `.gguf` variants with a GGUF-compatible runtime. Use the …
+Use the `.safetensors` variants with the workflow that matches your local inference stack. Use the `.gguf` variants with a GGUF-compatible runtime. Use the Diffusers component layout with the 🤗 Diffusers library — see below.
 
 ## Use with 🤗 Diffusers
 
-The `…
+The repo includes `model_index.json` and the standard 🤗 Diffusers component directories (`transformer/`, `text_encoder/`, `tokenizer/`, `vae/`, `scheduler/`) at the root, exported as a `ZImagePipeline`. Load it with:
 
 ```python
 from diffusers import DiffusionPipeline
@@ -145,7 +145,6 @@ import torch
 
 pipe = DiffusionPipeline.from_pretrained(
     "RunDiffusion/Juggernaut-Z-Image",
-    subfolder="diffusers",
     torch_dtype=torch.bfloat16,
 ).to("cuda")
 
@@ -157,7 +156,7 @@ image = pipe(
 image.save("output.png")
 ```
 
-Requires a version of `diffusers` that includes `ZImagePipeline` support (…
+`from_pretrained` only downloads files declared in `model_index.json`, so it will not pull the standalone `.safetensors` / `.gguf` variants at the repo root. Requires a version of `diffusers` that includes `ZImagePipeline` support (verified against `diffusers` 0.37.1 and 0.38.0). Commercial use of the model and its outputs is restricted under CC BY-NC 4.0 — see [License & Commercial Use](#license--commercial-use) below.
 
 ## Links
````
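Since `from_pretrained` resolves which components to download from `model_index.json`, it can help to see the shape of that file. A minimal sketch with stdlib `json`: the component class names below are hypothetical placeholders, and the actual file in the repo may list different classes, but the structure (metadata keys prefixed with `_`, plus one `(library, class)` pair per component subdirectory) is the standard Diffusers layout:

```python
import json

# Hypothetical model_index.json contents; the real file's component classes
# may differ. Keys starting with "_" are metadata; every other key names a
# component subdirectory and the (library, class) pair used to load it.
sample = json.loads("""
{
  "_class_name": "ZImagePipeline",
  "_diffusers_version": "0.37.1",
  "transformer": ["diffusers", "ZImageTransformer2DModel"],
  "text_encoder": ["transformers", "AutoModel"],
  "tokenizer": ["transformers", "AutoTokenizer"],
  "vae": ["diffusers", "AutoencoderKL"],
  "scheduler": ["diffusers", "FlowMatchEulerDiscreteScheduler"]
}
""")

components = {k: v for k, v in sample.items() if not k.startswith("_")}
print(sample["_class_name"])  # pipeline class resolved by from_pretrained
print(sorted(components))     # subdirectories that will be downloaded
```

This is why the standalone `.safetensors` / `.gguf` files at the repo root are skipped: they are not listed as components here.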