MuQ: Self-Supervised Music Representation Learning with Mel Residual Vector Quantization
By Haina Zhu and 8 other authors
Abstract: Recent years have witnessed the success of foundation models pre-trained with self-supervised learning (SSL) on various music informatics understanding tasks, including music tagging, instrument classification, and key detection. In this paper, we propose a self-supervised music representation learning model for music understanding. Unlike previous studies that adopt random projection or an existing neural codec, the proposed model, named MuQ, is trained to predict tokens generated by Mel Residual Vector Quantization (Mel-RVQ). Our Mel-RVQ uses a residual linear projection structure for Mel spectrum quantization, which enhances the stability and efficiency of target extraction and leads to better performance. Experiments on a large variety of downstream tasks demonstrate that MuQ outperforms previous self-supervised music representation models with only 0.9K hours of open-source pre-training data. Scaling the data up to over 160K hours and adopting iterative training consistently improves model performance. To further validate the strength of our model, we present MuQ-MuLan, a joint music-text embedding model based on contrastive learning, which achieves state-of-the-art performance on the zero-shot music tagging task on the MagnaTagATune dataset. Code and checkpoints are open source at this https URL.
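For intuition, below is a minimal numpy sketch of plain residual vector quantization applied to Mel-spectrogram frames: each stage quantizes the residual left by the previous stage, yielding the discrete token targets an SSL model can learn to predict. The codebook sizes and dimensions here are illustrative assumptions, and MuQ's actual Mel-RVQ additionally involves residual linear projections; see the paper and released code for the real implementation.

```python
import numpy as np

def rvq_encode(mel_frames, codebooks):
    """Plain residual vector quantization (illustrative sketch, not MuQ's exact Mel-RVQ).

    mel_frames: (num_frames, dim) array of Mel-spectrogram frames.
    codebooks:  list of (codebook_size, dim) arrays, one per RVQ stage.
    Returns:    (num_stages, num_frames) array of token indices.
    """
    residual = mel_frames.astype(np.float32)
    tokens = []
    for cb in codebooks:
        # Nearest codeword per frame under squared Euclidean distance.
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)
        idx = dists.argmin(axis=1)
        tokens.append(idx)
        # Subtract the chosen codewords; the next stage quantizes what is left.
        residual = residual - cb[idx]
    return np.stack(tokens, axis=0)

# Toy usage: 100 frames of a 128-dim Mel spectrum, 2 stages of 1024 codewords each
# (hypothetical sizes chosen for illustration).
rng = np.random.default_rng(0)
mel = rng.normal(size=(100, 128))
books = [rng.normal(size=(1024, 128)) for _ in range(2)]
print(rvq_encode(mel, books).shape)  # -> (2, 100)
```

Each frame thus maps to one token per stage; the SSL model is trained to predict these tokens from masked audio input.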
Submission history
From: Haina Zhu
[v1] Thu, 2 Jan 2025 07:08:29 UTC (1,574 KB)
[v2] Fri, 3 Jan 2025 08:35:34 UTC (1,574 KB)