Flexora: Flexible Low Rank Adaptation for Large Language Models


Authors: Chenxing Wei and 2 other authors


Abstract: Large Language Models (LLMs) are driving advances in artificial intelligence by scaling up model parameters, which has significantly enhanced generalization ability and unlocked new capabilities in practice. However, their performance on specific downstream tasks is usually hindered by their knowledge boundaries on those tasks. Fine-tuning techniques, especially the widely used Low-Rank Adaptation (LoRA) method, have therefore been introduced to expand these boundaries, yet LoRA can underperform on certain tasks owing to potential overfitting. To overcome this overfitting and improve performance, we propose the flexible low-rank adaptation (Flexora) method, which automatically and flexibly selects the most important layers to fine-tune in order to achieve the best performance on different downstream tasks. Specifically, Flexora first frames this layer-selection problem as a well-defined hyperparameter optimization (HPO) problem, then addresses it using the unrolled differentiation (UD) method, and finally selects the most useful layers based on the optimized hyperparameters. Our extensive experiments on many pretrained models and natural language tasks show that Flexora consistently improves over existing baselines, indicating its effectiveness in practice. We additionally provide insightful theoretical results and extensive ablation studies to deliver a comprehensive understanding of Flexora.
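To make the recipe described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' released code) of how per-layer LoRA selection can be framed as hyperparameter optimization and solved with unrolled differentiation: each layer receives a scalar gate treated as a hyperparameter, a few LoRA training steps are unrolled differentiably on a training split, and the validation loss is backpropagated through those steps to update the gates; layers whose gates end up high are kept for fine-tuning. All names, toy sizes, step counts, and the 0.5 selection threshold below are hypothetical assumptions.

# Minimal sketch: LoRA layer selection as HPO solved by unrolled differentiation.
# Hypothetical toy model; not the Flexora implementation.
import torch

torch.manual_seed(0)
n_layers, dim, rank = 4, 16, 2

# Frozen "pretrained" weights and per-layer LoRA factors (A, B).
W = [torch.randn(dim, dim) for _ in range(n_layers)]
A = [torch.zeros(rank, dim, requires_grad=True) for _ in range(n_layers)]
B = [(torch.randn(dim, rank) * 0.01) for _ in range(n_layers)]
for b in B:
    b.requires_grad_(True)

# Hyperparameters: one gate per layer controlling how much of its LoRA update is used.
gates = torch.zeros(n_layers, requires_grad=True)

def forward(x, A, B, gates):
    for l in range(n_layers):
        delta = torch.sigmoid(gates[l]) * (x @ A[l].t() @ B[l].t())
        x = torch.tanh(x @ W[l].t() + delta)
    return x.sum(dim=1)

# Toy train/validation splits for a regression objective.
x_tr, y_tr = torch.randn(32, dim), torch.randn(32)
x_val, y_val = torch.randn(32, dim), torch.randn(32)

inner_lr, outer_lr, inner_steps = 0.1, 0.5, 3
outer_opt = torch.optim.Adam([gates], lr=outer_lr)

for outer_step in range(20):
    # Inner loop: a few differentiable (unrolled) LoRA updates on the training split.
    A_t, B_t = [a.clone() for a in A], [b.clone() for b in B]
    for _ in range(inner_steps):
        loss_tr = ((forward(x_tr, A_t, B_t, gates) - y_tr) ** 2).mean()
        grads = torch.autograd.grad(loss_tr, A_t + B_t, create_graph=True)
        A_t = [a - inner_lr * g for a, g in zip(A_t, grads[:n_layers])]
        B_t = [b - inner_lr * g for b, g in zip(B_t, grads[n_layers:])]

    # Outer loop: the validation loss drives the per-layer gates through the unrolled steps.
    loss_val = ((forward(x_val, A_t, B_t, gates) - y_val) ** 2).mean()
    outer_opt.zero_grad()
    loss_val.backward()
    outer_opt.step()

# Keep only the layers whose optimized gates indicate they are worth fine-tuning.
selected = [l for l in range(n_layers) if torch.sigmoid(gates[l]) > 0.5]
print("layers selected for LoRA fine-tuning:", selected)

In this sketch the inner updates are kept in the autograd graph (create_graph=True), so the validation loss can differentiate through them with respect to the gates; in a full-scale setting one would instead unroll only a small number of steps per outer update to keep memory manageable.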

Submission history

From: Chenxing Wei [view email]
[v1]
Tue, 20 Aug 2024 12:13:04 UTC (1,063 KB)
[v2]
Wed, 21 Aug 2024 06:48:16 UTC (1,065 KB)
[v3]
Tue, 18 Feb 2025 13:53:51 UTC (845 KB)
[v4]
Tue, 1 Jul 2025 02:38:26 UTC (846 KB)
[v5]
Fri, 17 Oct 2025 07:10:06 UTC (847 KB)
