
A Training-Free Length Extrapolation Approach for LLMs: Greedy Attention Logit Interpolation (GALI)

Yan Li and 3 other authors


Abstract: Transformer-based Large Language Models (LLMs) struggle with inputs that exceed their training context window because positional out-of-distribution (O.O.D.) issues disrupt attention. Existing solutions, both fine-tuning and training-free methods, face challenges such as inefficiency, redundant interpolation, logit outliers, or loss of local positional information. We propose Greedy Attention Logit Interpolation (GALI), a training-free method that improves length extrapolation by greedily reusing pretrained positional intervals and interpolating attention logits to eliminate outliers. GALI achieves stable and superior performance across a wide range of long-context tasks without requiring input-length-specific tuning. Our analysis further reveals that LLMs interpret positional intervals unevenly and that restricting interpolation to narrower ranges improves performance, even on short-context tasks. GALI represents a step toward more robust and generalizable long-text processing in LLMs. Our implementation of GALI, along with the experiments from our paper, is open-sourced at this https URL.
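
The abstract contrasts interpolating attention logits with interpolating positions themselves. As a rough illustration only, not the paper's actual algorithm, the NumPy sketch below shows the core idea for a single RoPE head: for a fractional relative position, evaluate the logits at the two neighboring integer offsets seen during pretraining and blend them linearly, rather than rotating by an out-of-distribution fractional angle. All function names and the simplified single-vector RoPE here are assumptions made for illustration.

    import numpy as np

    def rope_rotate(x, pos, base=10000.0):
        """Apply rotary position embedding (RoPE) to a 1-D vector x at position `pos`."""
        half = x.shape[-1] // 2
        freqs = base ** (-np.arange(half) / half)        # per-pair rotation frequencies
        angles = pos * freqs
        cos, sin = np.cos(angles), np.sin(angles)
        x1, x2 = x[:half], x[half:]
        return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

    def logit_at(q, k, rel_pos, scale):
        """Attention logit as a function of the relative position between q and k."""
        # With RoPE the q.k score depends only on the position offset, so rotating
        # q by `rel_pos` and leaving k at position 0 gives the logit for that offset.
        return float(rope_rotate(q, rel_pos) @ rope_rotate(k, 0.0)) * scale

    def interpolated_logit(q, k, rel_pos, scale):
        """Logit for a fractional relative position via attention-logit interpolation.

        Instead of rotating by the fractional position (positional interpolation),
        evaluate the logits at the two nearest integer offsets seen during
        pretraining and blend them linearly.
        """
        lo, hi = int(np.floor(rel_pos)), int(np.ceil(rel_pos))
        if lo == hi:                                     # already an integer offset
            return logit_at(q, k, float(lo), scale)
        w = rel_pos - lo                                 # interpolation weight in [0, 1]
        return (1.0 - w) * logit_at(q, k, float(lo), scale) + w * logit_at(q, k, float(hi), scale)

    # Toy usage: one query/key pair at a fractional relative distance of 2.4.
    rng = np.random.default_rng(0)
    d_head = 64
    q, k = rng.standard_normal(d_head), rng.standard_normal(d_head)
    print(interpolated_logit(q, k, 2.4, scale=1.0 / np.sqrt(d_head)))

A full implementation would apply this per attention head across all query-key pairs and combine it with the greedy reuse of pretrained positional intervals described in the abstract.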

Submission history

From: Yan Li
[v1]
Tue, 4 Feb 2025 19:01:24 UTC (13,030 KB)
[v2]
Fri, 30 May 2025 05:42:35 UTC (7,908 KB)

