LTX-2: Efficient Joint Audio-Visual Foundation Model
Paper: arXiv:2601.03233
FP8 quantized versions of the LTX-2.3 22B models by Lightricks.
| Name | Original | Size |
|---|---|---|
| ltx-2.3-22b-dev-fp8_mixed.safetensors | ltx-2.3-22b-dev | ~30 GB |
| ltx-2.3-22b-distilled-fp8_mixed.safetensors | ltx-2.3-22b-distilled | ~30 GB |
**Quantization details**

- **Format:** `float8_e4m3fn` (E4M3, max = 448)
- **Quantized modules:** `attn1`, `attn2`, `audio_attn1`, `audio_attn2`, `audio_to_video_attn`, `video_to_audio_attn`, `ff.net`, `audio_ff.net` (specifically `to_q`, `to_k`, `to_v`, `to_out.0`, `ff.net.0.proj`, `ff.net.2` and their audio equivalents)
- **Scales:** `weight_scale = max(|W|) / 448`, stored as an F32 scalar alongside each weight; a static `input_scale = 1.0` placeholder matches the source model format

This is a quantized derivative of Lightricks/LTX-2.3. All original model details, usage instructions, and license terms apply.
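The per-tensor scaling above can be sketched as follows. This is a minimal illustration, not the actual quantization script: NumPy has no `float8_e4m3fn` dtype, so E4M3 conversion is approximated by range clamping only, and the toy weight matrix is an assumption for demonstration.

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite value in float8_e4m3fn

def quantize_fp8_per_tensor(weight: np.ndarray):
    """Per-tensor FP8 quantization as described above:
    weight_scale = max(|W|) / 448, stored as a float32 scalar.
    E4M3 rounding is approximated here by clamping to the FP8
    range, since NumPy lacks a native float8 dtype."""
    weight_scale = np.float32(np.abs(weight).max() / E4M3_MAX)
    q = np.clip(weight / weight_scale, -E4M3_MAX, E4M3_MAX).astype(np.float32)
    input_scale = np.float32(1.0)  # static placeholder, matching the source format
    return q, weight_scale, input_scale

def dequantize(q: np.ndarray, weight_scale: np.float32) -> np.ndarray:
    # Recover an approximation of the original weight at load time.
    return q * weight_scale

# Round trip on a toy weight tensor (stand-in for e.g. a to_q projection).
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, ws, _ = quantize_fp8_per_tensor(w)
w_hat = dequantize(q, ws)
```

By construction, the scaled values fit inside the E4M3 representable range, so the only error in a real FP8 export comes from the 8-bit rounding that this float32 sketch omits.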
LTX-2.3 is a DiT-based audio-video foundation model designed to generate synchronized video and audio within a single model.
```bibtex
@article{hacohen2025ltx2,
  title={LTX-2: Efficient Joint Audio-Visual Foundation Model},
  author={HaCohen, Yoav and Brazowski, Benny and Chiprut, Nisan and Bitterman, Yaki and Kvochko, Andrew and Berkowitz, Avishai and Shalem, Daniel and Lifschitz, Daphna and Moshe, Dudu and Porat, Eitan and Richardson, Eitan and Shiran, Guy and Chachy, Itay and Chetboun, Jonathan and Finkelson, Michael and Kupchick, Michael and Zabari, Nir and Guetta, Nitzan and Kotler, Noa and Bibi, Ofir and Gordon, Ori and Panet, Poriya and Benita, Roi and Armon, Shahar and Kulikov, Victor and Inger, Yaron and Shiftan, Yonatan and Melumian, Zeev and Farbman, Zeev},
  journal={arXiv preprint arXiv:2601.03233},
  year={2025}
}
```