A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions, by Lei Huang and 10 other authors
Abstract: The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), fueling a paradigm shift in information acquisition. Nevertheless, LLMs are prone to hallucination, generating plausible yet nonfactual content. This phenomenon raises significant concerns over the reliability of LLMs in real-world information retrieval (IR) systems and has attracted intensive research to detect and mitigate such hallucinations. Given the open-ended general-purpose attributes inherent to LLMs, LLM hallucinations present distinct challenges that diverge from those of prior task-specific models. This divergence highlights the urgency for a nuanced understanding and comprehensive overview of recent advances in LLM hallucinations. In this survey, we begin with an innovative taxonomy of hallucination in the era of LLMs and then delve into the factors contributing to hallucinations. Subsequently, we present a thorough overview of hallucination detection methods and benchmarks. Our discussion then turns to representative methodologies for mitigating LLM hallucinations. Additionally, we examine the current limitations faced by retrieval-augmented LLMs in combating hallucinations, offering insights for developing more robust IR systems. Finally, we highlight promising research directions on LLM hallucinations, including hallucination in large vision-language models and understanding of knowledge boundaries in LLM hallucinations.
Submission history
From: Lei Huang
[v1] Thu, 9 Nov 2023 09:25:37 UTC (983 KB)
[v2] Tue, 19 Nov 2024 12:42:45 UTC (567 KB)