Instructions to use TenStrip/LTX2.3-10Eros with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use TenStrip/LTX2.3-10Eros with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("TenStrip/LTX2.3-10Eros", dtype=torch.bfloat16, device_map="cuda")
# device_map already places the pipeline on the GPU, so no extra pipe.to("cuda") call is needed

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# generate the video frames from the conditioning image and prompt, then save them
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
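If the full bf16 pipeline does not fit in your GPU's VRAM, Diffusers' standard model CPU offloading can trade speed for memory. A minimal variation of the snippet above (an assumption-level sketch: it loads without device_map so the offload hooks control placement, and actual savings depend on this model's components):
import torch
from diffusers import DiffusionPipeline

# load without device_map so the offload hooks decide what sits on the GPU
pipe = DiffusionPipeline.from_pretrained("TenStrip/LTX2.3-10Eros", dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # keeps only the active component on the GPU at any time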
- Notebooks
- Google Colab
- Kaggle
NVFP4 version?
#16
by Paton255 - opened
Hello, on a 5070 Ti the learned FP8 version is faster than any GGUF, but I'm pretty sure an NVFP4 version would be even better because it would fit entirely in 16 GB of VRAM. How difficult would it be to create an NVFP4 version? I could try with Claude, but I don't have a clue...
I'm not good with quants or conversion; the only version I successfully made was the bf16. The quants are all done by others.
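For anyone who wants to experiment: a true NVFP4 checkpoint would normally be produced with NVIDIA's own quantization tooling (e.g. TensorRT Model Optimizer), which is more involved than a one-line conversion. A lighter-weight way to get under 16 GB of VRAM is on-the-fly 4-bit quantization through Diffusers' bitsandbytes backend (NF4, not NVFP4, so Blackwell FP4 hardware paths are not used). A rough sketch, assuming a recent diffusers plus bitsandbytes and that this repo's heavy weights live in a "transformer" component:
pip install -U bitsandbytes
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# quantize the largest component to 4-bit NF4 while loading; everything else stays bf16
quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer"],  # assumption: the DiT weights are in a "transformer" subfolder
)

pipe = DiffusionPipeline.from_pretrained(
    "TenStrip/LTX2.3-10Eros",
    quantization_config=quant_config,
    dtype=torch.bfloat16,
).to("cuda")
Whether NF4 matches the quality or speed of a native NVFP4 build on a 5070 Ti is something you'd have to measure; this only shows that a 4-bit load path already exists without writing a converter.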