HybridNorm: Towards Stable and Efficient Transformer Training via Hybrid Normalization
Zhijian Zhuo, Yutao Zeng, Ya Wang, Sijun Zhang, Jian Yang, Xiaoqing Li, Xun Zhou, Jinwen Ma
Abstract: Transformers have become the de facto architecture for a wide range of machine learning tasks, particularly in large language models (LLMs). Despite their remarkable performance, many challenges remain in training deep transformer networks, especially regarding the placement of layer normalization. While Pre-Norm structures facilitate more stable training owing to their stronger identity path, they often lead to suboptimal performance compared to Post-Norm. In this paper, we propose $\textbf{HybridNorm}$, a simple yet effective hybrid normalization strategy that integrates the advantages of both Pre-Norm and Post-Norm. Specifically, HybridNorm employs QKV normalization within the attention mechanism and Post-Norm in the feed-forward network (FFN) of each transformer block. We provide both theoretical insights and empirical evidence demonstrating that HybridNorm improves gradient flow and model robustness. Extensive experiments on large-scale transformer models, including both dense and sparse variants, show that HybridNorm consistently outperforms both Pre-Norm and Post-Norm approaches across multiple benchmarks. These findings highlight the potential of HybridNorm as a more stable and effective technique for improving the training and performance of deep transformer models. Code is available at this https URL.
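To make the block structure described in the abstract concrete, the sketch below shows one way a HybridNorm-style transformer block could look in PyTorch: Q, K, and V are each normalized inside the attention sub-layer, and the FFN sub-layer uses Post-Norm (normalization after the residual add). This is a minimal illustration based only on the abstract, not the authors' released implementation; the class name HybridNormBlock, the use of LayerNorm, and the exact placement of the norms are assumptions, so consult the linked code for the actual method.

```python
# Minimal sketch of a HybridNorm-style block (illustrative, not the authors' code):
# QKV normalization inside attention, Post-Norm on the FFN sub-layer.
import torch
import torch.nn as nn


class HybridNormBlock(nn.Module):  # hypothetical name for illustration
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads

        # Separate projections so Q, K, and V can each be normalized (QKV-Norm).
        self.wq = nn.Linear(d_model, d_model, bias=False)
        self.wk = nn.Linear(d_model, d_model, bias=False)
        self.wv = nn.Linear(d_model, d_model, bias=False)
        self.wo = nn.Linear(d_model, d_model, bias=False)
        self.q_norm = nn.LayerNorm(d_model)
        self.k_norm = nn.LayerNorm(d_model)
        self.v_norm = nn.LayerNorm(d_model)

        # FFN sub-layer with Post-Norm: normalization applied after the residual add.
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )
        self.ffn_norm = nn.LayerNorm(d_model)

    def _attention(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        # Normalize the Q, K, V projections instead of pre-normalizing the block input.
        q = self.q_norm(self.wq(x)).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_norm(self.wk(x)).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_norm(self.wv(x)).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        out = nn.functional.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.wo(out.transpose(1, 2).reshape(b, t, d))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention sub-layer: residual around QKV-normalized attention.
        x = x + self._attention(x)
        # FFN sub-layer: Post-Norm, i.e. normalize after the residual connection.
        x = self.ffn_norm(x + self.ffn(x))
        return x


if __name__ == "__main__":
    block = HybridNormBlock(d_model=64, n_heads=4, d_ff=256)
    y = block(torch.randn(2, 16, 64))
    print(y.shape)  # torch.Size([2, 16, 64])
```

Under this reading, the attention sub-layer keeps the identity path unnormalized (as in Pre-Norm), while the FFN sub-layer normalizes the full residual sum (as in Post-Norm), which is how the abstract frames the hybrid.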
Submission history
From: Zhijian Zhuo
[v1] Thu, 6 Mar 2025 16:40:48 UTC (3,505 KB)
[v2] Mon, 24 Mar 2025 15:27:13 UTC (2,990 KB)
[v3] Thu, 22 May 2025 14:53:31 UTC (7,314 KB)
[v4] Mon, 8 Dec 2025 16:22:01 UTC (7,317 KB)