Probabilities of Chat LLMs Are Miscalibrated but Still Predict Correctness on Multiple-Choice Q&A, by Benjamin Plaut and 2 other authors
Abstract: We study 15 large language models (LLMs) fine-tuned for chat and find that their maximum softmax probabilities (MSPs) are consistently miscalibrated on multiple-choice Q&A. However, those MSPs might still encode useful uncertainty information. Specifically, we hypothesized that wrong answers would be associated with smaller MSPs compared to correct answers. Via rigorous statistical testing, we show that this hypothesis holds for models that perform well on the underlying Q&A task. We also find a strong direct correlation between Q&A accuracy and MSP correctness prediction, while finding no correlation between Q&A accuracy and calibration error. This suggests that within the current fine-tuning paradigm, we can expect correctness prediction but not calibration to improve as LLM capabilities progress. To demonstrate the utility of correctness prediction, we show that when models have the option to abstain, performance can be improved by selectively abstaining based on the MSP of the initial model response, using only a small amount of labeled data to choose the MSP threshold.
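The selective-abstention procedure described in the abstract can be sketched as follows. This is an illustrative Python implementation, not the authors' code: the reward values, function names, and toy data are assumptions made only to show the idea of choosing an MSP threshold on a small labeled set and abstaining below it.

```python
# Minimal sketch (not the authors' code): selective abstention via an MSP threshold.
# Assumption: `msps` holds the max softmax probability of each initial model response,
# and the reward scheme (+1 correct, -1 wrong, 0 abstain) is hypothetical.

import numpy as np

def choose_threshold(msps, correct, rewards=(1.0, -1.0, 0.0)):
    """Pick the MSP threshold that maximizes mean reward on a small labeled set."""
    r_correct, r_wrong, r_abstain = rewards
    candidates = np.unique(np.concatenate(([0.0], msps, [1.0])))
    best_t, best_score = 0.0, -np.inf
    for t in candidates:
        answered = msps >= t
        score = np.where(answered,
                         np.where(correct, r_correct, r_wrong),
                         r_abstain).mean()
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def answer_or_abstain(msp, threshold):
    """Answer only when the model's MSP clears the chosen threshold."""
    return "answer" if msp >= threshold else "abstain"

# Example usage with toy data (hypothetical numbers).
rng = np.random.default_rng(0)
msps = rng.uniform(0.3, 1.0, size=200)
correct = rng.uniform(size=200) < msps   # correctness loosely tracks MSP, as hypothesized
t = choose_threshold(msps, correct)
print(f"chosen threshold = {t:.2f}")
print(answer_or_abstain(0.42, t), answer_or_abstain(0.91, t))
```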
Submission history
From: Benjamin Plaut
[v1] Tue, 20 Feb 2024 18:24:47 UTC (212 KB)
[v2] Fri, 4 Oct 2024 16:29:58 UTC (149 KB)
[v3] Wed, 19 Mar 2025 16:57:23 UTC (235 KB)