Online Causal Kalman Filtering for Stable and Effective Policy Optimization
Abstract
Online Causal Kalman Filtering addresses high-variance token-level importance-sampling (IS) ratios in reinforcement learning for large language models by modeling the IS ratios as evolving latent states and using Kalman filtering for stable policy optimization.
Reinforcement learning for large language models suffers from high-variance token-level importance sampling (IS) ratios, which can destabilize policy optimization at scale. To improve stability, recent methods typically use a fixed sequence-level IS ratio for all tokens in a sequence or adjust each token's IS ratio separately, thereby neglecting the temporal structure of off-policy deviation across tokens in a sequence. In this paper, we first empirically identify that local off-policy deviation is structurally inconsistent at the token level, which may distort policy-gradient updates across adjacent tokens and lead to training collapse. To address this issue, we propose Online Causal Kalman Filtering for stable and effective Policy Optimization (KPO). Concretely, we model the desired IS ratio as a latent state that evolves across tokens and apply a Kalman filter to update this state online and autoregressively from the states of past tokens, without relying on future tokens. The resulting filtered IS ratios preserve local, structure-aware variation across tokens while smoothing out noise spikes, yielding more stable and effective policy updates. Experimentally, KPO achieves superior results on challenging math reasoning datasets compared with state-of-the-art counterparts.
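To make the idea concrete, here is a minimal sketch of a causal (left-to-right) scalar Kalman filter applied to a sequence of token-level IS ratios. This is not the paper's released code: the function name, the random-walk state model, and the noise hyperparameters q and r are illustrative assumptions, not values or choices confirmed by the paper.

```python
# Minimal sketch (assumed implementation, not the paper's code): a causal,
# per-token scalar Kalman filter that smooths token-level importance-sampling
# ratios using only past observations.
import numpy as np

def causal_kalman_smooth(is_ratios, q=1e-3, r=1e-1):
    """Filter raw per-token IS ratios left-to-right (no access to future tokens).

    is_ratios: 1D array of raw per-token importance-sampling ratios.
    q: assumed process-noise variance (how fast the latent ratio may drift).
    r: assumed observation-noise variance (how noisy each raw ratio is).
    Returns filtered ratios with the same shape as the input.
    """
    x = float(is_ratios[0])   # latent-state estimate, initialized at the first ratio
    p = 1.0                   # estimate variance
    filtered = np.empty(len(is_ratios), dtype=float)
    filtered[0] = x
    for t in range(1, len(is_ratios)):
        # Predict: random-walk transition for the latent IS ratio.
        p_pred = p + q
        # Update: blend the prediction with the noisy observed ratio.
        k = p_pred / (p_pred + r)           # Kalman gain
        x = x + k * (is_ratios[t] - x)
        p = (1.0 - k) * p_pred
        filtered[t] = x
    return filtered

# Example: an outlier spike is damped while the local trend is preserved.
raw = np.array([1.00, 1.05, 0.98, 4.00, 1.02, 0.97, 1.10])
print(causal_kalman_smooth(raw))
```

Because the gain k depends only on the assumed noise variances and past states, the filter is fully online and autoregressive, matching the causal constraint described in the abstract; the specific choice of q and r controls how aggressively noise spikes are smoothed.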
Community
(Work in progress) We are adding more comparison methods and models, and will soon open-source KPO.
This paper introduces KPO: Online Causal Kalman Filtering to stabilize RL for LLMs by modeling importance sampling ratios as evolving latent states, effectively smoothing out high-variance noise.
By applying an autoregressive Kalman filter, KPO preserves key token-level structural information while mitigating volatility, achieving superior results on challenging math reasoning benchmarks!