xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token


Authors: Xin Cheng and 7 other authors

Abstract: This paper introduces xRAG, an innovative context compression method tailored for retrieval-augmented generation. xRAG reinterprets document embeddings in dense retrieval, traditionally used solely for retrieval, as features from the retrieval modality. By employing a modality fusion methodology, xRAG seamlessly integrates these embeddings into the language model representation space, effectively eliminating the need for their textual counterparts and achieving an extreme compression rate. In xRAG, the only trainable component is the modality bridge, while both the retriever and the language model remain frozen. This design choice allows for the reuse of offline-constructed document embeddings and preserves the plug-and-play nature of retrieval augmentation. Experimental results demonstrate that xRAG achieves an average improvement of over 10% across six knowledge-intensive tasks, and that it adapts to various language model backbones, ranging from a dense 7B model to an 8x7B Mixture of Experts configuration. xRAG not only significantly outperforms previous context compression methods but also matches the performance of uncompressed models on several datasets, while reducing overall FLOPs by a factor of 3.53. Our work pioneers new directions in retrieval-augmented generation from the perspective of multimodality fusion, and we hope it lays the foundation for future efficient and scalable retrieval-augmented systems.
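To make the architecture concrete, the sketch below illustrates the core idea under stated assumptions: a frozen dense retriever produces a single document embedding, and a small trainable projector (the "modality bridge") maps it to one soft token in the frozen language model's embedding space, replacing the full document text. All module names, dimensions, and the two-layer MLP design here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ModalityBridge(nn.Module):
    """Illustrative bridge: projects a retriever embedding into the LM's
    token-embedding space. In xRAG's design, this is the only trainable
    component; the retriever and language model stay frozen."""
    def __init__(self, d_retriever: int, d_model: int):
        super().__init__()
        # Hypothetical two-layer MLP; the paper may use a different projector.
        self.proj = nn.Sequential(
            nn.Linear(d_retriever, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, doc_emb: torch.Tensor) -> torch.Tensor:
        # doc_emb: (batch, d_retriever) -> one soft token: (batch, 1, d_model)
        return self.proj(doc_emb).unsqueeze(1)

# Usage sketch: prepend the single projected token to the prompt embeddings
# in place of the retrieved document's text, achieving one-token compression.
d_retriever, d_model = 768, 4096           # e.g. BERT-style retriever + 7B LM
bridge = ModalityBridge(d_retriever, d_model)

doc_emb = torch.randn(2, d_retriever)      # offline-constructed document embeddings
prompt_embs = torch.randn(2, 32, d_model)  # frozen LM's embeddings of the prompt

soft_token = bridge(doc_emb)                                  # (2, 1, d_model)
inputs_embeds = torch.cat([soft_token, prompt_embs], dim=1)   # (2, 33, d_model)
# inputs_embeds would then be fed to the frozen LM (e.g. via an
# `inputs_embeds`-style argument), with only the bridge receiving gradients.
```

Because the document embeddings are computed offline by the unchanged retriever, this setup preserves the plug-and-play nature of retrieval augmentation: swapping in a different corpus requires no retraining of the retriever or the language model.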

Submission history

From: Xin Cheng
[v1] Wed, 22 May 2024 16:15:17 UTC (1,104 KB)
[v2] Mon, 9 Dec 2024 06:07:03 UTC (2,181 KB)
