BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack


Authors: Yuri Kuratov and 6 other authors

Abstract: In recent years, the input context sizes of large language models (LLMs) have increased dramatically. However, existing evaluation methods have not kept pace, failing to comprehensively assess the efficiency of models in handling long contexts. To bridge this gap, we introduce the BABILong benchmark, designed to test language models’ ability to reason across facts distributed in extremely long documents. BABILong includes a diverse set of 20 reasoning tasks, including fact chaining, simple induction, deduction, counting, and handling lists/sets. These tasks are challenging on their own, and even more demanding when the required facts are scattered across long natural text. Our evaluations show that popular LLMs effectively utilize only 10-20% of the context and their performance declines sharply with increased reasoning complexity. Among alternatives to in-context reasoning, Retrieval-Augmented Generation methods achieve a modest 60% accuracy on single-fact question answering, independent of context length. Among context extension methods, the highest performance is demonstrated by recurrent memory transformers after fine-tuning, enabling the processing of lengths up to 50 million tokens. The BABILong benchmark is extendable to any length to support the evaluation of new upcoming models with increased capabilities, and we provide splits up to 10 million token lengths.
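The core construction described in the abstract is scattering the sentences of a short reasoning task through large amounts of unrelated filler text until a target context length is reached. The following is a minimal sketch of that idea, not the authors' actual generator: the inputs task_facts, question, and background_sentences are hypothetical, and token counts are approximated by whitespace splitting.

```python
import random

def build_haystack_sample(task_facts, question, background_sentences,
                          target_tokens, seed=0):
    """Scatter task facts through background text up to ~target_tokens words.

    task_facts: sentences the model must combine to answer `question`.
    background_sentences: unrelated filler text (the "haystack").
    """
    rng = random.Random(seed)

    # Accumulate filler until the approximate target length is reached.
    filler, n_tokens = [], 0
    for sent in background_sentences:
        if n_tokens >= target_tokens:
            break
        filler.append(sent)
        n_tokens += len(sent.split())

    # Insert each fact at a random position so the relevant information
    # is distributed across the whole context rather than clustered.
    for fact in task_facts:
        pos = rng.randint(0, len(filler))
        filler.insert(pos, fact)

    return " ".join(filler) + "\n\nQuestion: " + question
```

Because the filler is independent of the task, the same facts and question can be re-embedded at any target length, which is what makes a benchmark of this kind extendable to arbitrarily long splits.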

Submission history

From: Yuri Kuratov
[v1]
Fri, 14 Jun 2024 16:00:29 UTC (7,834 KB)
[v2]
Wed, 6 Nov 2024 14:50:40 UTC (2,274 KB)
