Papers
arxiv:2602.03352

PEGRL: Improving Machine Translation by Post-Editing Guided Reinforcement Learning

Published on Feb 3, 2026

Abstract

A two-stage reinforcement learning framework using post-editing as an auxiliary task improves translation quality by stabilizing training and enabling both global exploration and local optimization.

AI-generated summary

Reinforcement learning (RL) has shown strong promise for LLM-based machine translation, with recent methods such as GRPO demonstrating notable gains; nevertheless, translation-oriented RL remains challenged by noisy learning signals arising from Monte Carlo return estimation, as well as a large trajectory space that favors global exploration over fine-grained local optimization. We introduce PEGRL, a two-stage RL framework that uses post-editing as an auxiliary task to stabilize training and guide overall optimization. At each iteration, translation outputs are sampled to construct post-editing inputs, allowing return estimation in the post-editing stage to benefit from conditioning on the current translation behavior, while jointly supporting both global exploration and fine-grained local optimization. A task-specific weighting scheme further balances the contributions of translation and post-editing objectives, yielding a biased yet more sample-efficient estimator. Experiments on English→Finnish, English→Turkish, and English↔Chinese show consistent gains over RL baselines, and for English→Turkish, performance on COMET-KIWI is comparable to advanced LLM-based systems (DeepSeek-V3.2).
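
Based only on the abstract above, here is a minimal Python sketch of what one PEGRL-style iteration could look like. Everything in it is a hypothetical stand-in: toy_policy, comet_like_reward, group_advantages, and the task weights w_mt / w_pe are illustrative placeholders, not the authors' implementation. In the paper, rewards would come from learned metrics such as COMET-KIWI, and the advantages would feed a GRPO-style policy-gradient update rather than being returned directly.

```python
import random

def toy_policy(prompt, n):
    """Hypothetical stand-in for sampling n candidate outputs from an LLM."""
    return [f"{prompt} | sample {i}" for i in range(n)]

def comet_like_reward(hypothesis, reference):
    """Toy quality score via token overlap; a crude proxy for COMET-style metrics."""
    hyp, ref = set(hypothesis.split()), set(reference.split())
    return len(hyp & ref) / max(len(ref), 1)

def group_advantages(rewards, eps=1e-8):
    """GRPO-style group-relative advantages: z-score rewards within one group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

def pegrl_iteration(source, reference, group_size=4, w_mt=0.7, w_pe=0.3):
    """One two-stage iteration sketched from the abstract:
    stage 1 samples translations (global exploration over trajectories);
    stage 2 post-edits a sampled draft, so return estimation is conditioned
    on the current translation behavior (fine-grained local optimization)."""
    # Stage 1: translation task.
    translations = toy_policy(f"translate: {source}", group_size)
    mt_adv = group_advantages([comet_like_reward(t, reference) for t in translations])

    # Stage 2: construct post-editing inputs from the current translations.
    draft = random.choice(translations)
    edits = toy_policy(f"post-edit: {source} /// {draft}", group_size)
    pe_adv = group_advantages([comet_like_reward(e, reference) for e in edits])

    # Task-specific weighting of the two objectives: a biased but more
    # sample-efficient combination, per the abstract. In a real trainer these
    # weights would scale each task's policy-gradient loss.
    return [w_mt * a for a in mt_adv] + [w_pe * a for a in pe_adv]

if __name__ == "__main__":
    print([f"{a:+.3f}" for a in pegrl_iteration("Hello world", "Hallo Welt")])
```

The fixed weights here only mimic the paper's task-specific weighting scheme; how PEGRL actually sets or anneals them is not specified in the abstract.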
