Grouped Sequency-arranged Rotation: Optimizing Rotation Transformation for Quantization for Free


Euntae Choi and 3 other authors


Abstract: Large Language Models (LLMs) face deployment challenges due to high computational costs, and while Post-Training Quantization (PTQ) offers a solution, existing rotation-based methods struggle at very low bit-widths such as 2-bit. We introduce a novel, training-free approach to constructing an improved rotation matrix that addresses the limitations of current methods. The key contributions include leveraging the Walsh-Hadamard transform with sequency ordering, which clusters similar frequency components and thereby reduces quantization error compared with standard Hadamard matrices, significantly improving performance. Furthermore, we propose Grouped Sequency-arranged Rotation (GSR), which uses block-diagonal matrices built from smaller Walsh blocks, effectively isolating the impact of outliers and achieving performance comparable to optimization-based methods without requiring any training. Our method demonstrates robust performance on reasoning tasks and on WikiText-2 perplexity (PPL), and it further improves results even when applied on top of existing learned rotation techniques.
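
To make the construction concrete, below is a minimal sketch (not the authors' released code) of how a sequency-ordered Walsh-Hadamard matrix and a grouped block-diagonal rotation could be assembled. The function names, the 64-column group size, and the example dimensions are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a sequency-ordered Walsh-Hadamard matrix and a grouped
# block-diagonal rotation, as described at a high level in the abstract.
# All names and sizes here are illustrative assumptions.
import numpy as np
from scipy.linalg import block_diag


def walsh_matrix(n: int) -> np.ndarray:
    """Return an n x n Walsh (sequency-ordered Hadamard) matrix; n must be a power of 2."""
    # Sylvester (natural-ordered) Hadamard matrix.
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    # Sequency ordering: sort rows by their number of sign changes,
    # which groups rows of similar "frequency" next to each other.
    sign_changes = (np.diff(np.sign(h), axis=1) != 0).sum(axis=1)
    return h[np.argsort(sign_changes)]


def grouped_sequency_rotation(dim: int, group_size: int = 64) -> np.ndarray:
    """Block-diagonal orthogonal rotation built from small Walsh blocks."""
    assert dim % group_size == 0
    # Scaling by 1/sqrt(group_size) makes each Walsh block orthonormal.
    block = walsh_matrix(group_size) / np.sqrt(group_size)
    return block_diag(*[block] * (dim // group_size))


# Example: rotate a weight matrix before quantization (hypothetical sizes).
R = grouped_sequency_rotation(512, group_size=64)
W = np.random.randn(512, 512)
W_rot = W @ R  # R is orthogonal, so W can be recovered exactly via W_rot @ R.T
```

Because the rotation is block-diagonal, each small Walsh block only mixes the channels within its own group, which is one way to read the claim that outlier impact is isolated to individual groups rather than spread across the full hidden dimension.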

Submission history

From: Sumin Song
[v1] Fri, 2 May 2025 11:51:29 UTC (397 KB)
[v2] Thu, 14 Aug 2025 07:02:58 UTC (166 KB)
