PEGRL: Improving Machine Translation by Post-Editing Guided Reinforcement Learning
Abstract
A two-stage reinforcement learning framework using post-editing as an auxiliary task improves translation quality by stabilizing training and enabling both global exploration and local optimization.
Reinforcement learning (RL) has shown strong promise for LLM-based machine translation, with recent methods such as GRPO demonstrating notable gains; nevertheless, translation-oriented RL remains challenged by noisy learning signals arising from Monte Carlo return estimation, as well as a large trajectory space that favors global exploration over fine-grained local optimization. We introduce PEGRL, a two-stage RL framework that uses post-editing as an auxiliary task to stabilize training and guide overall optimization. At each iteration, translation outputs are sampled to construct post-editing inputs, allowing return estimation in the post-editing stage to benefit from conditioning on the current translation behavior, while jointly supporting both global exploration and fine-grained local optimization. A task-specific weighting scheme further balances the contributions of translation and post-editing objectives, yielding a biased yet more sample-efficient estimator. Experiments on English→Finnish, English→Turkish, and English↔Chinese show consistent gains over RL baselines, and for English→Turkish, performance on COMET-KIWI is comparable to advanced LLM-based systems (DeepSeek-V3.2).
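The abstract does not spell out implementation details, but the described iteration, sampling translations, building post-editing inputs from them, and combining the two tasks with task-specific weights on GRPO-style group advantages, could look roughly like the following toy sketch. Every function name, placeholder reward, and weight value here is a hypothetical stand-in, not the authors' code.

```python
# Hypothetical sketch of one PEGRL-style iteration: sample translation drafts,
# derive post-editing inputs from them, score both stages, and combine
# group-normalized (GRPO-style) advantages with task-specific weights.
import random
from typing import List


def sample_translations(source: str, k: int) -> List[str]:
    # Placeholder: a real system would decode k candidates from the current policy.
    return [f"{source} [candidate {i}]" for i in range(k)]


def post_edit(source: str, draft: str) -> str:
    # Placeholder for the post-editing stage, conditioned on the sampled draft.
    return draft + " [edited]"


def quality(source: str, hypothesis: str) -> float:
    # Placeholder reward; a real system might use a COMET-style quality score.
    return random.random()


def group_advantages(rewards: List[float]) -> List[float]:
    # GRPO-style: normalize rewards within the sampled group.
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5 or 1.0
    return [(r - mean) / std for r in rewards]


def pegrl_iteration(source: str, k: int = 4,
                    w_translate: float = 1.0, w_post_edit: float = 0.5):
    drafts = sample_translations(source, k)            # stage 1: translation
    edits = [post_edit(source, d) for d in drafts]     # stage 2: post-editing
    adv_t = group_advantages([quality(source, d) for d in drafts])
    adv_p = group_advantages([quality(source, e) for e in edits])
    # Task-specific weighting of the two objectives: a biased but
    # more sample-efficient combined learning signal per sample.
    return [w_translate * a + w_post_edit * b for a, b in zip(adv_t, adv_p)]


if __name__ == "__main__":
    print(pegrl_iteration("Hello, world."))
```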