Instructions for using Lakonik/pi-FLUX.2 with libraries, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Lakonik/pi-FLUX.2 with Diffusers:
```sh
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Lakonik/pi-FLUX.2",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
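As the discussion below notes, FLUX.2 is heavy even on strong consumer hardware. If the snippet above runs out of VRAM, diffusers' standard offloading helpers apply to any `DiffusionPipeline`; a minimal sketch, assuming this pipeline supports the usual accelerate-backed offloading hooks:

```python
import torch
from diffusers import DiffusionPipeline

# Load on CPU first and let diffusers move weights to the GPU on demand
pipe = DiffusionPipeline.from_pretrained("Lakonik/pi-FLUX.2", dtype=torch.bfloat16)

# Keeps only the sub-model currently in use on the GPU, trading speed for VRAM
# (assumption: this pipeline exposes the standard diffusers offloading hooks)
pipe.enable_model_cpu_offload()

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```

If that is still too much, `pipe.enable_sequential_cpu_offload()` cuts VRAM further at a larger speed cost.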
Will there be ComfyUI support for the Flux.2 model?
Hey!
I absolutely love your distillation method, and Flux.2 is a model that, even with really good consumer hardware, is heavy and time-consuming to generate images with. I was super stoked to see your repo update come across my GitHub feed the other day, but then I figured out that only the local implementation had been updated to support it.
Will there be a new node update/workflow to use this model within ComfyUI?
Thanks so much for any info!!
(Oh, and I definitely second the Kandinsky5 request, though I'd be cool with any pi-distilled video model, tbh. That one seems like it'd be the hardest to pull off due to its massive 20B size, but the quality is fantastic if you haven't tried it.)
Happy holidays!
-D
Thanks for your interest in our work. Yes, there will be a ComfyUI update. I just need some time.