STARFlow2: Bridging Language Models and Normalizing Flows for Unified Multimodal Generation
Abstract
Autoregressive normalizing flows based on Transformer architecture enable unified multimodal generation by aligning text and image processing through shared causal masking and KV-cache mechanisms.
Deep generative models have advanced rapidly across text and vision, motivating unified multimodal systems that can understand, reason over, and generate interleaved text-image sequences. Most existing approaches combine autoregressive language modeling with diffusion-based image generators, inheriting a structural mismatch between causal text generation and iterative visual denoising. We observe that autoregressive normalizing flows are autoregressive Transformers--sharing the same causal mask, KV-cache mechanism, and left-to-right structure as LLMs--making them the most natural paradigm for true unified multimodal generation. We present STARFlow2, built on the Pretzel architecture that vertically interleaves a pretrained VLM stream with a TarFlow stream via residual skip connections, both operating under the same causal mask. Combined with a deep-shallow flow design and a unified FAE latent space, STARFlow2 enables cache-friendly interleaved generation where both text and visual outputs directly enter the KV-cache without re-encoding. Experiments demonstrate strong performance across image generation and multimodal understanding benchmarks, validating autoregressive flows as a viable foundation for unified multimodal modeling.
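The abstract's central claim is that an autoregressive normalizing flow shares the causal, left-to-right structure of an LLM, so inversion (sampling) can reuse a decoded prefix exactly the way a KV-cache does. The toy sketch below illustrates that structural point only; all names, shapes, and the prefix-summary function are illustrative assumptions, not the STARFlow2 or TarFlow implementation.

```python
import numpy as np

# Toy sketch of a TarFlow-style autoregressive normalizing flow.
# A causal context model predicts an affine transform (shift, log-scale)
# for each token from the tokens *before* it, so the Jacobian is
# triangular and the flow is exactly invertible with a sequential decode.

rng = np.random.default_rng(0)
T, D = 6, 4                        # sequence length, token dimension (assumed)
W_mu = rng.normal(size=(D, D)) * 0.1
W_s = rng.normal(size=(D, D)) * 0.1

def context(x_prefix):
    """Causal summary of the prefix; stands in for causal self-attention."""
    if len(x_prefix) == 0:
        return np.zeros(D)
    return np.mean(x_prefix, axis=0)

def forward(x):
    """x -> z; every step conditions only on earlier tokens (causal mask)."""
    z = np.empty_like(x)
    for t in range(len(x)):
        h = context(x[:t])
        mu, log_s = h @ W_mu, h @ W_s
        z[t] = (x[t] - mu) * np.exp(-log_s)
    return z

def inverse(z):
    """z -> x sequentially; each step reuses the already-decoded prefix,
    the same access pattern a Transformer KV-cache exploits."""
    x = np.empty_like(z)
    for t in range(len(z)):
        h = context(x[:t])
        mu, log_s = h @ W_mu, h @ W_s
        x[t] = z[t] * np.exp(log_s) + mu
    return x

x = rng.normal(size=(T, D))
assert np.allclose(inverse(forward(x)), x)  # exact round-trip invertibility
```

Because both directions walk the sequence under the same causal ordering, the cached prefix never needs re-encoding, which is the cache-friendly property the abstract attributes to interleaved text-image generation.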
Community
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this one:
- Tuna-2: Pixel Embeddings Beat Vision Encoders for Multimodal Understanding and Generation (2026)
- MMCORE: MultiModal COnnection with Representation Aligned Latent Embeddings (2026)
- LatentUM: Unleashing the Potential of Interleaved Cross-Modal Reasoning via a Latent-Space Unified Model (2026)
- Large Language Models are Universal Reasoners for Visual Generation (2026)
- LLaDA2.0-Uni: Unifying Multimodal Understanding and Generation with Diffusion Large Language Model (2026)
- HYDRA: Unifying Multi-modal Generation and Understanding via Representation-Harmonized Tokenization (2026)
- Uni-ViGU: Towards Unified Video Generation and Understanding via A Diffusion-Based Video Generator (2026)
Get this paper in your agent: hf papers read 2605.08029