[2503.09598] How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation


By Ruohao Guo and 2 other authors

Abstract: As Large Language Models (LLMs) are widely deployed in diverse scenarios, the extent to which they can tacitly spread misinformation emerges as a critical safety concern. Current research primarily evaluates LLMs on explicit false statements, overlooking how misinformation often manifests subtly as unchallenged premises in real-world interactions. We curate EchoMist, the first comprehensive benchmark for implicit misinformation, in which false assumptions are embedded in queries to LLMs. EchoMist targets circulated, harmful, and ever-evolving implicit misinformation drawn from diverse sources, including realistic human-AI conversations and social media interactions. Through extensive empirical studies on 15 state-of-the-art LLMs, we find that current models perform alarmingly poorly on this task, often failing to detect false premises and generating counterfactual explanations. We also investigate two mitigation methods, i.e., Self-Alert and RAG, to enhance LLMs' capability to counter implicit misinformation. Our findings indicate that EchoMist remains a persistent challenge and underscore the critical need to safeguard against the risk of implicit misinformation.

Submission history

From: Ruohao Guo
[v1]
Wed, 12 Mar 2025 17:59:18 UTC (305 KB)
[v2]
Tue, 27 May 2025 16:40:26 UTC (2,822 KB)
