Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning


Authors: Yu Fu and 5 other authors


Abstract: Key-Value (KV) caching is a common technique to enhance the computational efficiency of Large Language Models (LLMs), but its memory overhead grows rapidly with input length. Prior work has shown that not all tokens are equally important for text generation, proposing layer-level KV cache compression to selectively retain key information. Recognizing the distinct roles of attention heads in generation, we propose HeadKV, a head-level KV cache compression method, and HeadKV-R2, which leverages a novel contextual reasoning ability estimation for compression. Our approach operates at the level of individual heads, estimating their importance for contextual QA tasks that require both retrieval and reasoning capabilities. Extensive experiments across diverse benchmarks (LongBench, LooGLE), model architectures (e.g., Llama-3-8B-Instruct, Mistral-7B-Instruct), and long-context ability tests demonstrate that our head-level KV cache compression significantly outperforms strong baselines, particularly in low-resource settings (KV sizes of 64 and 128). Notably, our method retains just 1.5% of the KV cache while achieving 97% of the performance of the full KV cache on the contextual question answering task. Code is available at this https URL
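To make the head-level idea concrete, the minimal Python sketch below shows one plausible reading of it: give each attention head its own KV budget in proportion to a precomputed importance score, then prune each head's cache down to that budget. This is an illustration only, not the paper's algorithm; the function names (allocate_head_budgets, compress_head_kv), the proportional budget rule, and the use of recent attention mass as a retention score are all assumptions made here for the example, whereas HeadKV estimates head importance from retrieval-and-reasoning behavior as described in the paper.

import numpy as np

def allocate_head_budgets(head_importance, total_budget, min_per_head=4):
    # Split a global KV cache budget across heads in proportion to their
    # importance scores, keeping a small floor so no head is starved.
    weights = head_importance / head_importance.sum()
    budgets = np.maximum(min_per_head, np.floor(weights * total_budget))
    return budgets.astype(int)  # may not sum exactly to total_budget

def compress_head_kv(keys, values, retention_scores, budget):
    # Keep only the `budget` cached entries of one head with the highest
    # retention scores (here: a stand-in for accumulated attention mass).
    keep = np.argsort(retention_scores)[-budget:]
    keep.sort()  # preserve original token order
    return keys[keep], values[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_heads, seq_len, d_head = 8, 512, 64
    # Hypothetical per-head importance scores (e.g. from some probe).
    importance = rng.random(n_heads)
    budgets = allocate_head_budgets(importance, total_budget=n_heads * 64)

    for h in range(n_heads):
        k = rng.standard_normal((seq_len, d_head))
        v = rng.standard_normal((seq_len, d_head))
        scores = rng.random(seq_len)  # stand-in for attention statistics
        k_c, v_c = compress_head_kv(k, v, scores, budgets[h])
        print(f"head {h}: kept {k_c.shape[0]} / {seq_len} entries")

Under a global budget of 512 entries for 8 heads, more important heads end up retaining more of their cache while less important ones are pruned aggressively, which is the low-resource regime (e.g. KV size 64 or 128 per head on average) the abstract highlights.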

Submission history

From: Yu Fu
[v1] Fri, 25 Oct 2024 02:22:00 UTC (2,319 KB)
[v2] Mon, 28 Oct 2024 19:32:23 UTC (2,319 KB)
[v3] Thu, 14 Nov 2024 01:56:11 UTC (2,319 KB)
