Comparing Retrieval-Augmentation and Parameter-Efficient Fine-Tuning for Privacy-Preserving Personalization of Large Language Models
by Alireza Salemi and one other author
Abstract: Despite the substantial impact of personalization on various search, recommendation, and question answering tasks, privacy-preserving methods for personalizing large language models (LLMs) have received relatively limited exploration. The primary approach in this area is retrieval-augmented generation (RAG), which produces personalized outputs by enriching the input prompt with information retrieved from the user’s personal data. This paper studies an orthogonal approach to RAG that learns user-dependent LLM parameters through parameter-efficient fine-tuning (PEFT). It presents the first systematic study of PEFT for LLM personalization and provides an extensive comparison between RAG- and PEFT-based solutions across a broad set of seven diverse datasets from the LaMP benchmark. Our results show that, on average, RAG- and PEFT-based personalization methods yield 14.92% and 1.07% improvements over non-personalized LLMs, respectively. Combining RAG with PEFT further raises the improvement to 15.98%, highlighting the effectiveness of integrating the two approaches for personalized text generation. Additionally, we identify a positive correlation between the amount of available user data and the effectiveness of PEFT. This finding suggests that RAG is particularly beneficial for cold-start users (users with limited personal data), while PEFT performs better when more user-specific data is available.
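To make the two approaches concrete, the sketch below pairs a toy retriever over a user's history (the RAG side) with a per-user LoRA adapter attached via the Hugging Face `peft` library (the PEFT side). This is a minimal illustration only, not the paper's implementation: the backbone model name, the term-overlap retriever, and the prompt format are all assumptions made for demonstration.

```python
# Minimal sketch of RAG + PEFT personalization (illustrative assumptions, not the paper's code).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "gpt2"  # placeholder backbone; the paper's LLM may differ

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)


def retrieve_user_profile(query: str, user_history: list[str], k: int = 2) -> list[str]:
    """RAG side: rank the user's past documents by simple term overlap with the query."""
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(user_history, key=overlap, reverse=True)[:k]


def attach_user_adapter(model):
    """PEFT side: wrap the base model with a LoRA adapter holding user-dependent parameters."""
    config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
    return get_peft_model(model, config)  # in practice, load an adapter trained on this user's data


def generate_personalized(query: str, user_history: list[str]) -> str:
    """RAG + PEFT: augment the prompt with retrieved profile entries, then generate."""
    retrieved = retrieve_user_profile(query, user_history)
    prompt = "\n".join(f"Profile: {doc}" for doc in retrieved) + f"\nTask: {query}\nAnswer:"
    model = attach_user_adapter(base_model)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    history = ["I review sci-fi novels on my blog.",
               "My favorite genre is hard science fiction."]
    print(generate_personalized("Write a headline for my latest book review.", history))
```

In a realistic setup, the adapter would be fine-tuned on each user's personal data and loaded per request, while the retriever would be a proper sparse or dense ranker over the user's profile rather than the term-overlap heuristic used here.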
Submission history
From: Alireza Salemi
[v1] Sat, 14 Sep 2024 19:18:26 UTC (369 KB)
[v2] Thu, 26 Jun 2025 03:19:56 UTC (130 KB)