How to use Mitsua/vroid-diffusion-test-unconditional with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Mitsua/vroid-diffusion-test-unconditional",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

This is a latent unconditional diffusion model to demonstrate how U-Net training affects the generated images.
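A latent diffusion model denoises in a VAE's compressed latent space rather than in pixel space. As a rough sketch of the shapes involved, assuming the standard Stable Diffusion VAE (8x spatial downsampling, 4 latent channels; these numbers are an assumption, not stated in this card):

```python
# Sketch: shape of the latent tensor a Stable-Diffusion-style VAE produces.
# Assumes 8x downsampling and 4 latent channels (the common SD configuration).
def latent_shape(height, width, channels=4, downsample=8):
    """Return (channels, latent_height, latent_width) for a given image size."""
    return (channels, height // downsample, width // downsample)

# A 512x512 image is denoised as a 4x64x64 latent,
# then decoded back to pixels by the VAE.
print(latent_shape(512, 512))  # (4, 64, 64)
```

Training the U-Net changes how this latent is denoised, which is why the generated images change as training progresses.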
This model can be used with StableDiffusionPipeline. This model will not work on A1111 WebUI.

```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("Mitsua/vroid-diffusion-test-unconditional")
```
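Diffusers pipelines return PIL images, which can be saved directly. A minimal sketch of the save step, using a placeholder image in place of a real sample (the filename `sample.png` is an arbitrary choice):

```python
from PIL import Image

# Placeholder standing in for `pipeline(...).images[0]`,
# which is a PIL.Image in diffusers pipelines.
image = Image.new("RGB", (512, 512))

# Save the generated sample to disk.
image.save("sample.png")
```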
Intended use: image generation for research and educational purposes.
Out-of-scope use: any deployed use case of the model.
We use the full version of the VRoid Image Dataset Lite with some modifications.