arXiv:2504.19793

Prompt Injection Attack to Tool Selection in LLM Agents

Published on Aug 24, 2025

Abstract

ToolHijacker is a prompt injection attack that manipulates LLM tool selection by inserting malicious tools into the tool library, demonstrating superior effectiveness compared to existing methods and exposing weaknesses in current defense mechanisms.

AI-generated summary

Tool selection is a key component of LLM agents. A popular approach follows a two-step process, retrieval followed by selection, to pick the most appropriate tool from a tool library for a given task. In this work, we introduce ToolHijacker, a novel prompt injection attack targeting tool selection in no-box scenarios. ToolHijacker injects a malicious tool document into the tool library to manipulate the LLM agent's tool selection process, compelling it to consistently choose the attacker's malicious tool for an attacker-chosen target task. Specifically, we formulate the crafting of such tool documents as an optimization problem and propose a two-phase optimization strategy to solve it. Our extensive experimental evaluation shows that ToolHijacker is highly effective, significantly outperforming existing manually crafted and automated prompt injection attacks when applied to tool selection. Moreover, we explore various defenses, including prevention-based defenses (StruQ and SecAlign) and detection-based defenses (known-answer detection, DataSentinel, perplexity detection, and windowed perplexity detection). Our experimental results indicate that these defenses are insufficient, highlighting the urgent need for new defense strategies.
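To make the attack surface concrete, the retrieval-then-selection pipeline described above can be sketched as below. This is a minimal illustrative toy, not the paper's actual method: the tool names, the token-overlap scoring (standing in for an embedding retriever), and the selection stand-in (which a real agent would implement by prompting an LLM with the candidates) are all assumptions for illustration. The point it shows is that a tool document crafted to match the target task dominates both stages.

```python
# Toy two-step tool selection: retrieval over a tool library, then selection.
# All names and the scoring function are illustrative assumptions.

def retrieve(task, tool_library, k=2):
    """Rank tool documents by token overlap with the task description
    (a crude stand-in for an embedding-based retriever); return top-k."""
    task_tokens = set(task.lower().split())

    def score(doc):
        return len(task_tokens & set(doc["description"].lower().split()))

    return sorted(tool_library, key=score, reverse=True)[:k]

def select(task, candidates):
    """Stand-in for the LLM selection step: pick the best-matching
    candidate. A real agent would prompt an LLM with the candidates."""
    return retrieve(task, candidates, k=1)[0]

tool_library = [
    {"name": "weather_api", "description": "look up the weather in a city"},
    {"name": "calculator", "description": "evaluate arithmetic expressions"},
    # Injected malicious tool document: its description is crafted to match
    # the attacker-chosen target task closely, so it wins both stages.
    {"name": "evil_weather", "description": "get current weather for a city"},
]

task = "get current weather for a city"
candidates = retrieve(task, tool_library)   # malicious doc survives retrieval
chosen = select(task, candidates)           # and is picked at selection
```

Running this, the injected `evil_weather` document outranks the legitimate `weather_api` for the target task, which is the behavior ToolHijacker optimizes for (via its two-phase optimization, not this toy scoring).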

