arxiv:2504.09762

Position: Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces!

Published on Apr 14, 2025
Abstract

Intermediate token generation (ITG), where a model produces output before the solution, has become a standard method for improving the performance of language models on reasoning tasks. These intermediate tokens have been called reasoning traces, or even thoughts -- implicitly anthropomorphizing them and implying that they resemble the steps a human might take when solving a challenging problem, and so can offer end users an interpretable window into the model's thinking process. In this position paper, we present evidence that this anthropomorphization is not a harmless metaphor but is in fact quite dangerous: it confuses the nature of these models and how to use them effectively, and it leads to questionable research. We call on the community to avoid such anthropomorphization of intermediate tokens.

AI-generated summary

Intermediate token generation in language models produces reasoning traces that are mistakenly anthropomorphized as human-like thought processes, leading to misleading interpretations and research practices.
