[2508.16599] Humans Perceive Wrong Narratives from AI Reasoning Texts

Abstract: A new generation of AI models generates step-by-step reasoning text before producing an answer. This text appears to offer a human-readable window into their computation process and is increasingly relied upon for transparency and interpretability. However, it is unclear whether human understanding of this text matches the model's actual computational process. In this paper, we investigate a necessary condition for correspondence: the ability of humans to identify which steps in a reasoning text causally influence later steps. We evaluated humans on this ability by composing questions based on counterfactual measurements and found a significant discrepancy: participant accuracy was only 29%, barely above chance (25%), and remained low (42%) even when evaluating the majority vote on questions with high agreement. Our results reveal a fundamental gap between how humans interpret reasoning texts and how models use them, challenging their utility as a simple interpretability tool. We argue that reasoning texts should be treated as an artifact to be investigated, not taken at face value, and that understanding the non-human ways these models use language is a critical research direction.

Submission history

From: Mosh Levy
[v1]
Sat, 9 Aug 2025 16:29:10 UTC (531 KB)
[v2]
Thu, 28 Aug 2025 11:53:23 UTC (526 KB)