arxiv:2604.08536

RewardFlow: Generate Images by Optimizing What You Reward

Published on Apr 9 · Submitted by Ismini Lourentzou on Apr 10

AI-generated summary

RewardFlow enables pretrained diffusion and flow-matching models to be guided during inference through multi-reward Langevin dynamics, without requiring inversion, achieving superior performance in image editing and compositional generation.

Abstract

We introduce RewardFlow, an inversion-free framework that steers pretrained diffusion and flow-matching models at inference time through multi-reward Langevin dynamics. RewardFlow unifies complementary differentiable rewards for semantic alignment, perceptual fidelity, localized grounding, object consistency, and human preference, and further introduces a differentiable VQA-based reward that provides fine-grained semantic supervision through language-vision reasoning. To coordinate these heterogeneous objectives, we design a prompt-aware adaptive policy that extracts semantic primitives from the instruction, infers edit intent, and dynamically modulates reward weights and step sizes throughout sampling. Across several image editing and compositional generation benchmarks, RewardFlow delivers state-of-the-art edit fidelity and compositional alignment.
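
The core update is easiest to see in code. Below is a minimal sketch of a multi-reward Langevin step in PyTorch. The function name, the toy stand-in rewards, and the fixed weights and step size are all illustrative assumptions, not the paper's implementation: in RewardFlow the rewards are the differentiable objectives listed above, and the weights and step sizes are modulated per step by the prompt-aware policy.

```python
import torch

def langevin_reward_step(x, rewards, weights, step_size):
    """One Langevin ascent step on a weighted sum of differentiable rewards.

    x         -- current latent tensor
    rewards   -- list of callables mapping x to a scalar reward
    weights   -- per-reward weights (the paper's policy adapts these)
    step_size -- Langevin step size (also modulated by the policy)
    """
    x = x.detach().requires_grad_(True)
    total = sum(w * r(x) for w, r in zip(weights, rewards))
    (grad,) = torch.autograd.grad(total, x)
    noise = torch.randn_like(x)
    # x <- x + (eta/2) * grad(sum_k w_k R_k(x)) + sqrt(eta) * N(0, I)
    return (x + 0.5 * step_size * grad + step_size ** 0.5 * noise).detach()

# Toy stand-in rewards; the paper instead uses semantic-alignment,
# perceptual, grounding, consistency, preference, and VQA rewards.
reward_a = lambda x: -x.pow(2).mean()          # pulls latents toward 0
reward_b = lambda x: -(x - 1.0).abs().mean()   # pulls latents toward 1

x = torch.randn(1, 4, 8, 8)  # dummy latent in place of a diffusion state
for t in range(50):
    # Fixed weights here; RewardFlow's prompt-aware policy would set them
    # per instruction and per sampling step.
    x = langevin_reward_step(x, [reward_a, reward_b],
                             weights=(0.7, 0.3), step_size=0.01)
```

In the paper's setting this update would be interleaved with the pretrained model's own sampling steps, so the reward gradients steer generation without any inversion of the input image.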
