View a PDF of the paper titled ConU: Conformal Uncertainty in Large Language Models with Correctness Coverage Guarantees, by Zhiyuan Wang and 8 other authors
Abstract: Uncertainty quantification (UQ) in natural language generation (NLG) tasks remains an open challenge, exacerbated by the closed-source nature of the latest large language models (LLMs). This study investigates applying conformal prediction (CP), which can transform any heuristic notion of uncertainty into rigorous prediction sets, to black-box LLMs in open-ended NLG tasks. We introduce a novel uncertainty measure based on self-consistency theory, and then develop a conformal uncertainty criterion by integrating the uncertainty condition aligned with correctness into the CP algorithm. Empirical evaluations indicate that our uncertainty measure outperforms prior state-of-the-art methods. Furthermore, we achieve strict control over the correctness coverage rate utilizing 7 popular LLMs on 4 free-form NLG datasets, spanning general-purpose and medical scenarios. Additionally, the small size of the calibrated prediction sets further highlights the efficiency of our method in providing trustworthy guarantees for practical open-ended NLG applications.
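The sketch below illustrates the general split-conformal-prediction recipe the abstract refers to: calibrate a threshold on nonconformity scores, then form a prediction set with a target correctness coverage rate. It is not the paper's ConU criterion or its self-consistency-based measure; the function names (`calibrate_threshold`, `prediction_set`) and all scores are hypothetical placeholders standing in for whatever heuristic uncertainty notion is plugged in.

```python
# Minimal sketch of split conformal prediction for correctness-coverage
# prediction sets. Assumes a generic nonconformity score per candidate
# response; the paper's self-consistency-based score is not reproduced here.
import numpy as np


def calibrate_threshold(cal_scores: np.ndarray, alpha: float) -> float:
    """Conformal quantile over calibration nonconformity scores.

    cal_scores: nonconformity of the correct response for each calibration prompt.
    alpha: target miscoverage rate, e.g. 0.1 for 90% correctness coverage.
    """
    n = len(cal_scores)
    # Finite-sample-corrected quantile level from standard split CP.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(cal_scores, level, method="higher"))


def prediction_set(candidates, scores, q_hat):
    """Keep every candidate response whose nonconformity is at most the threshold."""
    return [c for c, s in zip(candidates, scores) if s <= q_hat]


# Toy usage with made-up numbers, purely for illustration.
rng = np.random.default_rng(0)
cal_scores = rng.uniform(size=500)           # hypothetical calibration scores
q_hat = calibrate_threshold(cal_scores, alpha=0.1)
test_candidates = ["answer A", "answer B", "answer C"]
test_scores = [0.2, 0.7, 0.95]               # hypothetical candidate nonconformity
print(prediction_set(test_candidates, test_scores, q_hat))
```

Under exchangeability of calibration and test data, sets built this way contain a correct response with probability at least 1 − α, which is the kind of coverage control the abstract reports.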
Submission history
From: Zhiyuan Wang [view email]
[v1]
Sat, 29 Jun 2024 17:33:07 UTC (4,518 KB)
[v2]
Sun, 20 Oct 2024 04:17:20 UTC (5,078 KB)
[v3]
Mon, 18 Nov 2024 08:33:35 UTC (5,079 KB)