Breaking Changes: 22nd February 2026: Model loading change

See more here: https://huggingface.co/Kijai/LTXV2_comfy

I will update the workflow ASAP to reflect the breaking change in ComfyUI's model loader logic.

If you have some experience with nodes, see the link above for what to change if you want to do it yourself (basically just swapping out the main model loader for a new one).


The workflows are based on the extracted models from https://huggingface.co/Kijai/LTXV2_comfy. The extracted models run more easily on the computer (as separate files) and also allow GGUF support, etc.

(But you can easily swap the model loader for the default ComfyUI model loader if you want to load the "all-in-one" checkpoint with the VAE built in, etc.)

Gemma 3 12B it GGUF text encoder: https://huggingface.co/unsloth/gemma-3-12b-it-GGUF/
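If you are unsure where the downloaded files go, a minimal sketch of the usual ComfyUI model folder layout is below. The folder names follow ComfyUI's standard conventions; the exact install path and filenames are assumptions, so adjust them to your setup.

```python
from pathlib import Path

# Assumed default ComfyUI install directory; change to your own path.
COMFY = Path("ComfyUI")

# Standard ComfyUI subfolders for each model role:
# the extracted LTX-2 diffusion model, the Gemma 3 GGUF text
# encoder, and the separate VAE each go in their own folder.
targets = {
    "diffusion model (LTX-2)": COMFY / "models" / "diffusion_models",
    "text encoder (Gemma 3 12B GGUF)": COMFY / "models" / "text_encoders",
    "VAE": COMFY / "models" / "vae",
}

for role, folder in targets.items():
    print(f"{role}: {folder}")
```

Once the files are in place, the corresponding loader nodes in the workflow should find them by filename in their dropdown lists.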

Needed nodes:

(Video made with LTX-2, credit to https://www.reddit.com/user/fantazart/) https://www.reddit.com/r/StableDiffusion/comments/1qeovkh/ltx2_cinematic_love_letter_to_opensource_community/


A general guide: https://docs.ltx.video/open-source-model/integration-tools/comfy-ui

More workflows:

ComfyUI official workflows: https://docs.comfy.org/tutorials/video/ltx/ltx-2

LTX-Video official workflows: https://github.com/Lightricks/ComfyUI-LTXVideo/tree/master/example_workflows

Some really nice clean workflows here: https://comfyui.nomadoor.net/en/basic-workflows/ltx-2/

RunComfy (can download workflow to use locally):

LTX-2 ControlNet (pose, depth, etc.): https://www.runcomfy.com/comfyui-workflows/ltx-2-controlnet-in-comfyui-depth-controlled-video-workflow

LTX-2 First/Last Frame: https://www.runcomfy.com/comfyui-workflows/ltx-2-first-last-frame-in-comfyui-audio-visual-motion-control
