Investigating Hallucination in Conversations for Low Resource Languages, by Amit Das and 9 other authors
Abstract: Large Language Models (LLMs) have demonstrated remarkable proficiency in generating text that closely resembles human writing. However, they often generate factually incorrect statements, a problem typically referred to as ‘hallucination’. Addressing hallucination is crucial for enhancing the reliability and effectiveness of LLMs. While much research has focused on hallucination in English, our study extends this investigation to conversational data in three languages: Hindi, Farsi, and Mandarin. We offer a comprehensive analysis of a dataset, examining both factual and linguistic errors in these languages for GPT-3.5, GPT-4o, Llama-3.1, Gemma-2.0, DeepSeek-R1, and Qwen-3. We find that LLMs produce very few hallucinated responses in Mandarin but generate a significantly higher number of hallucinations in Hindi and Farsi.
Submission history
From: Amit Das
[v1] Wed, 30 Jul 2025 14:39:51 UTC (422 KB)
[v2] Wed, 19 Nov 2025 14:23:53 UTC (13,786 KB)