arxiv:2603.00680

MemPO: Self-Memory Policy Optimization for Long-Horizon Agents

Published on Apr 9

Abstract

AI-generated summary: A self-memory policy optimization algorithm enables autonomous memory management during agent-environment interaction, improving performance while reducing computational resource consumption.

Long-horizon agents face the challenge of a context that grows as they interact with the environment, degrading performance and stability. Existing methods typically introduce an external memory module and retrieve relevant information from it, which prevents the model itself from proactively managing its memory content and aligning it with the agent's overarching task objectives. To address these limitations, we propose the self-memory policy optimization algorithm (MemPO), which enables the agent (the policy model) to autonomously summarize and manage its memory while interacting with the environment. By improving the credit assignment mechanism based on memory effectiveness, the policy model can selectively retain crucial information, significantly reducing token consumption while preserving task performance. Extensive experiments and analyses confirm that MemPO achieves absolute F1 score gains of 25.98% over the base model and 7.1% over the previous SOTA baseline, while reducing token usage by 67.58% and 73.12%, respectively. The code is released at https://github.com/TheNewBeeKing/MemPO.
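The abstract does not spell out the algorithm, but the loop it describes, an agent that periodically rewrites its own context into a compact summary and is rewarded for memory that is small yet sufficient, can be sketched. Below is a minimal, hypothetical Python illustration of that idea; every name here (Policy, AgentState, memory_effectiveness_bonus, summarize_every, the length-based token proxy) is an assumption for illustration, not MemPO's actual implementation or API.

# Minimal sketch of the self-memory idea from the abstract, under the
# assumptions stated above. Instead of appending every observation to an
# ever-growing context, the agent compresses its own history into a bounded
# summary, and credit assignment shapes the terminal reward by token spend.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    memory: str = ""                                  # compact, self-managed summary
    transcript: list = field(default_factory=list)    # raw recent turns

class Policy:
    """Stand-in for the policy LLM; real work would be model calls."""

    def act(self, memory: str, observation: str) -> str:
        # Hypothetical: condition only on the compact memory plus the
        # latest observation, never on the full interaction history.
        return f"action given memory[{len(memory)} chars]"

    def summarize(self, memory: str, transcript: list) -> str:
        # Hypothetical: the policy itself decides what to keep. Here we
        # simply truncate; the paper trains this behavior with RL.
        merged = memory + " | " + " ; ".join(transcript)
        return merged[-200:]                          # bounded summary size

def memory_effectiveness_bonus(task_reward: float, tokens_used: int,
                               lam: float = 1e-3) -> float:
    # Credit-assignment sketch: reward task success, penalize token spend,
    # pressuring the policy to retain only crucial information.
    return task_reward - lam * tokens_used

def rollout(policy: Policy, env_steps: list, summarize_every: int = 4) -> float:
    state = AgentState()
    tokens_used = 0
    for t, obs in enumerate(env_steps):
        action = policy.act(state.memory, obs)
        tokens_used += len(state.memory) + len(obs)   # crude token proxy
        state.transcript.append(f"{obs} -> {action}")
        if (t + 1) % summarize_every == 0:
            # The agent compresses its own context mid-episode.
            state.memory = policy.summarize(state.memory, state.transcript)
            state.transcript.clear()
    task_reward = 1.0                                 # placeholder terminal reward
    return memory_effectiveness_bonus(task_reward, tokens_used)

if __name__ == "__main__":
    shaped = rollout(Policy(), [f"obs_{i}" for i in range(10)])
    print(f"shaped return: {shaped:.3f}")

Two choices in the sketch mirror the abstract's claims: acting on the compact memory rather than the full transcript is what caps context growth over long horizons, and the token penalty in the shaped return is what ties credit assignment to memory effectiveness.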


Get this paper in your agent:

hf papers read 2603.00680
Don't have the latest CLI? Install it with:
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 1

Datasets citing this paper: 0


Spaces citing this paper: 0


Collections including this paper: 1