Regress, Don’t Guess — A Regression-like Loss on Number Tokens for Language Models, by Jonas Zausinger and 15 other authors
Abstract: While language models have exceptional text-generation capabilities, they lack a natural inductive bias for emitting numbers and therefore struggle with tasks involving quantitative reasoning, especially arithmetic. One fundamental limitation is the nature of the cross-entropy (CE) loss, which assumes a nominal scale and thus cannot convey proximity between generated number tokens. In response, we present a regression-like loss that operates purely on the token level. Our proposed Number Token Loss (NTL) comes in two flavors and minimizes either the $L_p$ norm or the Wasserstein distance between the numerical values of the real and predicted number tokens. NTL can easily be added to any language model and extends the CE objective during training without runtime overhead. We evaluate the proposed scheme on various mathematical datasets and find that it consistently improves performance on math-related tasks. In a direct comparison on a regression task, we find that NTL can match the performance of a regression head despite operating on the token level. Finally, we scale NTL up to 3B-parameter models and observe improved performance, demonstrating its potential for seamless integration into LLMs. We hope to inspire LLM developers to improve their pretraining objectives and distribute NTL as a minimalistic and lightweight PyPI package $ntloss$: this https URL. Development code for full paper reproduction is available separately.
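To make the $L_p$ flavor concrete, the sketch below shows one way such a loss could be computed in PyTorch: the predicted softmax over number tokens is collapsed into an expected numeric value, which is then compared to the numeric value of the ground-truth token. This is a minimal illustration under assumed conventions (single-token digits, a `token_values` lookup with NaN for non-number tokens); the function and argument names are hypothetical and do not reflect the API of the `ntloss` package.

```python
import torch
import torch.nn.functional as F

def number_token_loss(logits, labels, token_values, p=2):
    """Illustrative L_p-style number token loss.

    logits:       (batch, seq, vocab) raw model outputs
    labels:       (batch, seq) ground-truth token ids
    token_values: (vocab,) numeric value per token, NaN for non-number tokens
    """
    probs = F.softmax(logits, dim=-1)                       # (B, S, V)
    is_number = ~torch.isnan(token_values)                  # (V,)

    # Expected numeric value under the predicted distribution,
    # restricted to number tokens (non-number tokens contribute 0).
    values = torch.where(is_number, token_values, torch.zeros_like(token_values))
    expected = (probs * values).sum(dim=-1)                 # (B, S)

    # Numeric value of the ground-truth token; NaN where the label
    # is not a number token, so those positions are masked out.
    target = token_values[labels]                           # (B, S)
    mask = ~torch.isnan(target)
    if mask.sum() == 0:
        return logits.new_zeros(())

    return (expected[mask] - target[mask]).abs().pow(p).mean()

# Added on top of the usual cross-entropy objective, e.g.:
# loss = ce_loss + lambda_ntl * number_token_loss(logits, labels, token_values)
```

Because the loss only touches positions whose ground-truth token is numeric, it leaves the CE objective unchanged elsewhere, which is consistent with the abstract's claim that NTL extends CE without runtime overhead.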
Submission history
From: Jannis Born
[v1] Mon, 4 Nov 2024 13:43:24 UTC (966 KB)
[v2] Sun, 25 May 2025 21:13:23 UTC (5,219 KB)
[v3] Sun, 17 Aug 2025 09:30:08 UTC (1,991 KB)