NeedleInATable: Exploring Long-Context Capability of Large Language Models towards Long-Structured Tables


by Lanrui Wang and 7 other authors

Abstract: Processing structured tabular data, particularly large and lengthy tables, is a fundamental yet challenging task for large language models (LLMs). However, existing long-context benchmarks such as Needle-in-a-Haystack focus primarily on unstructured text and neglect the challenges posed by diverse structured tables. Meanwhile, previous tabular benchmarks mainly target downstream tasks that require high-level reasoning, overlooking models' fine-grained perception of individual table cells, which is crucial for practical and robust LLM-based table applications. To address this gap, we introduce NeedleInATable (NIAT), a new long-context tabular benchmark that treats each table cell as a "needle" and requires models to extract the target cell given its location or a lookup question. Our comprehensive evaluation of various LLMs and multimodal LLMs reveals a substantial performance gap between popular downstream tabular tasks and the simpler NIAT task, suggesting that models may rely on dataset-specific correlations or shortcuts to score well on benchmarks while lacking genuinely robust long-context understanding of structured tables. Furthermore, we show that training on synthesized NIAT data effectively improves performance on both the NIAT task and downstream tabular tasks, validating the importance of NIAT capability for genuine table understanding in LLMs. Our data, code, and models will be released to facilitate future research.
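To make the task concrete, the following is a minimal sketch of how a NIAT-style probe might be constructed: serialize a table, designate one cell as the "needle," and form a location-based lookup question whose gold answer is that cell's value. The function name, prompt wording, and markdown serialization are illustrative assumptions, not the paper's exact format.

```python
def make_niat_probe(header, rows, r, c):
    """Build one NIAT-style probe for the cell at row r, column c
    (0-indexed). Returns (table_text, question, gold_answer).
    Hypothetical helper; the paper's actual prompt format may differ."""
    # Serialize the table as markdown, a common LLM input format.
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(map(str, row)) + " |" for row in rows]
    # Location-based lookup question (1-indexed for readability).
    question = (f"What is the value in row {r + 1} "
                f"under column '{header[c]}'?")
    return "\n".join(lines), question, str(rows[r][c])
```

A benchmark built this way can sweep every (row, column) position of a long table, revealing whether accuracy degrades for cells deep in the context.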

Submission history

From: Lanrui Wang
[v1] Wed, 9 Apr 2025 03:46:56 UTC (901 KB)
[v2] Thu, 29 May 2025 03:31:02 UTC (1,906 KB)

