DACP: Domain-Adaptive Continual Pre-Training of Large Language Models for Phone Conversation Summarization, by Xue-Yong Fu and 4 other authors
Abstract: Large language models (LLMs) have achieved impressive results in text summarization, yet their performance often falls short when applied to specialized domains that differ from their original pre-training distribution. While fine-tuning can improve summarization quality, it typically relies on costly and scarce high-quality labeled data. In this work, we explore continual pre-training as a scalable, self-supervised approach for adapting LLMs to downstream summarization tasks, particularly in the context of noisy real-world conversation transcripts. We conduct extensive experiments on large-scale, unlabeled business conversation data to investigate whether continual pre-training enhances model capabilities in conversational summarization. Our results demonstrate that continual pre-training yields substantial gains on both in-domain and out-of-domain summarization benchmarks while maintaining strong generalization and robustness. We also analyze the effects of data selection strategies, providing practical guidelines for applying continual pre-training in summarization-focused industrial applications.
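The core recipe the abstract describes, continual pre-training on unlabeled in-domain transcripts with a standard next-token objective, can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the base model name, the data file `business_calls.txt`, and all hyperparameters are assumptions for the sake of a runnable example.

```python
# Minimal sketch of domain-adaptive continual pre-training on unlabeled
# conversation transcripts, using the Hugging Face Transformers Trainer.
# NOTE: base model, file path, and hyperparameters are assumptions, not
# values taken from the paper.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

base_model = "meta-llama/Llama-3.1-8B"  # assumed base LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Unlabeled, in-domain transcripts: one conversation per line (assumed format).
raw = load_dataset("text", data_files={"train": "business_calls.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Causal (next-token) language modeling; no summary labels are required.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="dacp-checkpoints",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    learning_rate=1e-5,   # small LR to limit catastrophic forgetting (assumption)
    num_train_epochs=1,
    bf16=True,
)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```

The adapted checkpoint would then be evaluated or instruction-tuned on conversational summarization benchmarks; the specific data selection strategies studied in the paper are not reflected in this sketch.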
Submission history
From: Md Tahmid Rahman Laskar
[v1] Tue, 7 Oct 2025 12:26:19 UTC (7,085 KB)
[v2] Wed, 8 Oct 2025 01:55:53 UTC (7,085 KB)
[v3] Thu, 9 Oct 2025 12:35:24 UTC (7,085 KB)