arxiv:2604.17886

Latent Preference Modeling for Cross-Session Personalized Tool Calling

Published on Apr 20 · Submitted by Yejin Yoon on Apr 21

Abstract

Personalized tool calling in LLM-based agents is improved through memory-augmented methods that capture user choice reasoning rather than just choices, using minimal token overhead.

AI-generated summary

Users often omit essential details in their requests to LLM-based agents, resulting in under-specified inputs for tool use. This poses a fundamental challenge for tool-augmented agents, as API execution typically requires complete arguments, highlighting the need for personalized tool calling. To study this problem, we introduce MPT, a benchmark comprising 265 multi-session dialogues that cover three challenges: Preference Recall, Preference Induction, and Preference Transfer. We also propose PRefine, a test-time memory-augmented method that represents user preferences as evolving hypotheses. Through a generate–verify–refine loop, it extracts reusable constraints from history and improves tool-calling accuracy while using only 1.24% of the tokens required by full-history prompting. These results indicate that robust personalization in agentic systems depends on memory that captures the reasons behind user choices, not just the choices themselves.

Community

Paper author and submitter:

LLM agents are increasingly expected to call APIs on behalf of users, but real users rarely spell out every argument they want — they just say "book a flight for my trip" and expect the agent to know they always fly economy. We argue this isn't a memory retrieval problem but a memory abstraction problem: the agent has to figure out which past choices reflect reusable preferences and which were just one-off decisions.
We introduce MPT, a benchmark of 265 multi-session dialogues testing three reasoning types — Preference Recall, Induction, and Transfer — and PRefine, a test-time method that maintains latent preferences as evolving hypotheses through a generate–verify–refine loop. PRefine improves tool-calling accuracy across 8 LLMs while using only 1.24% of the tokens required by full-history prompting. The takeaway: robust personalization depends on capturing the reasons behind user choices, not just the choices themselves.
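To make the idea concrete, here is a minimal sketch of what a generate–verify–refine loop over tool-call history could look like. This is an illustrative toy, not the paper's implementation: the names (`PreferenceHypothesis`, `generate`, `verify`, `refine`), the frequency threshold, and the flat slot/value representation of past tool calls are all assumptions for the example.

```python
# Hypothetical sketch of a generate-verify-refine preference loop.
# All names and thresholds here are illustrative, not taken from PRefine.
from dataclasses import dataclass


@dataclass
class PreferenceHypothesis:
    """A candidate reusable constraint, e.g. 'seat_class == economy'."""
    slot: str
    value: str
    support: int = 0         # past calls consistent with the hypothesis
    contradictions: int = 0  # past calls that chose a different value


def generate(history):
    """Propose hypotheses from argument choices repeated across sessions."""
    counts = {}
    for call in history:
        for slot, value in call.items():
            counts[(slot, value)] = counts.get((slot, value), 0) + 1
    # Only repeated choices become candidate preferences.
    return [PreferenceHypothesis(s, v) for (s, v), n in counts.items() if n >= 2]


def verify(hyp, history):
    """Check a hypothesis against every past call that mentions its slot."""
    for call in history:
        if hyp.slot in call:
            if call[hyp.slot] == hyp.value:
                hyp.support += 1
            else:
                hyp.contradictions += 1
    return hyp


def refine(hypotheses, min_support=2):
    """Keep only hypotheses that are consistently supported, drop the rest."""
    return [h for h in hypotheses
            if h.support >= min_support and h.contradictions == 0]


# Three past sessions of flight-booking tool calls (toy data).
history = [
    {"seat_class": "economy", "airline": "ANA"},
    {"seat_class": "economy", "airline": "Delta"},
    {"seat_class": "economy"},
]
memory = refine([verify(h, history) for h in generate(history)])
# 'seat_class == economy' survives as a reusable constraint;
# the one-off airline choices do not.
```

The point of the toy is the distinction the post draws: the loop separates choices that recur for a reason (seat class) from one-off decisions (airline), and only the former enter memory.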


Get this paper in your agent:

hf papers read 2604.17886
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
