View a PDF of the paper titled CogniBench: A Legal-inspired Framework and Dataset for Assessing Cognitive Faithfulness of Large Language Models, by Xiaqiang Tang and 7 other authors
Abstract: Faithfulness hallucinations are claims generated by a Large Language Model (LLM) that are not supported by the context provided to it. Lacking an assessment standard, existing benchmarks contain only "factual statements" that rephrase source materials, without marking "cognitive statements" that draw inferences from the given context, which makes the consistency evaluation and optimization of cognitive statements difficult. Inspired by how evidence is assessed in the legal domain, we design a rigorous framework for assessing different levels of faithfulness of cognitive statements and create a benchmark dataset from which we reveal insightful statistics. We also design an annotation pipeline that automatically creates larger benchmarks for different LLMs; the resulting large-scale CogniBench-L dataset can be used to train accurate cognitive hallucination detection models. We release our model and dataset at: this https URL
Submission history
From: Tang Xiaqiang [view email]
[v1] Tue, 27 May 2025 06:16:27 UTC (8,275 KB)
[v2] Wed, 28 May 2025 06:17:19 UTC (8,492 KB)