Unconditional Image Generation
Tags: Diffusers · Safetensors · English · bitdance · imagenet · class-conditional · custom-pipeline
Instructions for using BiliSakura/BitDance-ImageNet-diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
  - Diffusers

How to use BiliSakura/BitDance-ImageNet-diffusers with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "BiliSakura/BitDance-ImageNet-diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
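The device comment in the usage snippet ("switch to mps for apple devices") can be made explicit. A minimal sketch, assuming only plain PyTorch, that picks the best available backend at runtime:

```python
import torch

# Pick the best available backend: CUDA GPU, Apple Silicon (MPS), or CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

The resulting string can then be passed as `device_map=device` when loading the pipeline.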
Update all files for BitDance-ImageNet-diffusers
BitDance_B_1x/autoencoder/modeling_autoencoder.py
ADDED (@@ -0,0 +1,17 @@)

```python
from __future__ import annotations

from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.models.modeling_utils import ModelMixin


class BitDanceImageNetAutoencoder(ModelMixin, ConfigMixin):
    @register_to_config
    def __init__(self, ddconfig=None, num_codebooks: int = 4, **kwargs):
        super().__init__()
        self.ddconfig = ddconfig
        self.num_codebooks = num_codebooks

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path: str, *args, **kwargs):
        del pretrained_model_name_or_path, args, kwargs
        return cls()
```