From Alignment to Advancement: Bootstrapping Audio-Language Alignment with Synthetic Data


Chun-Yi Kuan, Hung-yi Lee


Abstract: Audio-aware large language models (ALLMs) have recently made great strides in understanding and processing audio inputs. These models are typically adapted from text-based large language models (LLMs) through additional training on audio-related tasks. However, this adaptation process presents two major limitations. First, ALLMs often suffer from catastrophic forgetting, where crucial textual capabilities like instruction-following are lost after training on audio data. In some cases, models may even hallucinate sounds that are not present in the input audio, raising concerns about reliability. Second, achieving cross-modal alignment between audio and language typically relies on large collections of task-specific question-answer pairs for instruction tuning, making it resource-intensive. To address these issues, previous works have leveraged the backbone LLMs to synthesize general-purpose, caption-style alignment data. In this paper, we propose a data generation framework that produces contrastive-like training data, designed to enhance ALLMs' ability to differentiate between present and absent sounds. We further extend our approach to multi-audio scenarios, enabling the model to either explain differences between audio inputs or produce unified captions that describe all inputs, thereby enhancing audio-language alignment. We refer to the entire ALLM training framework as bootstrapping audio-language alignment via synthetic data generation from backbone LLMs (BALSa). Experimental results indicate that our method effectively mitigates audio hallucinations while reliably maintaining strong performance on audio understanding and reasoning benchmarks, as well as instruction-following skills. Moreover, incorporating multi-audio training further enhances the model's comprehension and reasoning capabilities. Overall, BALSa offers an efficient and scalable approach to developing ALLMs.
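The abstract describes the contrastive-like data generation only at a high level. As a rough illustration of the idea, the Python sketch below builds yes/no question-answer pairs about sound events that are present in, or absent from, a captioned audio clip. The event vocabulary, question templates, and function names are hypothetical and not taken from the paper; in the actual BALSa framework, the backbone LLM would synthesize the data rather than fixed templates.

import random

# Hypothetical sound-event vocabulary; a real pipeline would sample absent
# events from a larger ontology (e.g., AudioSet-style labels).
EVENT_VOCAB = ["dog barking", "rain falling", "car horn", "piano music",
               "crowd applause", "glass breaking"]

def make_contrastive_pairs(caption, present_events, n_negatives=2, seed=0):
    """Build contrastive-like QA pairs from a caption and its sound events.

    Positive pairs ask about events the caption mentions (answer: yes);
    negative pairs ask about sampled absent events (answer: no). Training
    on both kinds of pairs pushes the ALLM to distinguish present from
    absent sounds instead of affirming everything it is asked about.
    """
    rng = random.Random(seed)
    absent = [e for e in EVENT_VOCAB if e not in present_events]
    negatives = rng.sample(absent, min(n_negatives, len(absent)))

    pairs = []
    for event in present_events:
        pairs.append({"question": f"Is the sound of {event} present in the audio?",
                      "answer": f"Yes, {event} can be heard. {caption}"})
    for event in negatives:
        pairs.append({"question": f"Is the sound of {event} present in the audio?",
                      "answer": f"No, there is no {event} in the audio."})
    return pairs

if __name__ == "__main__":
    caption = "A dog barks while rain falls steadily in the background."
    for p in make_contrastive_pairs(caption, ["dog barking", "rain falling"]):
        print(p["question"], "->", p["answer"])

The multi-audio extension described in the abstract would follow the same pattern, except the generator is given two or more captions and asked either to explain their differences or to produce a single unified caption covering all inputs.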

Submission history

From: Chun-Yi Kuan
[v1] Mon, 26 May 2025 16:08:41 UTC (1,035 KB)
[v2] Mon, 30 Jun 2025 06:48:46 UTC (1,036 KB)
