...

BARE: Combining Base and Instruction-Tuned Language Models for Better Synthetic Data Generation


By Alan Zhu and 7 other authors

Abstract: As the demand for high-quality data in model training grows, researchers and developers are increasingly generating synthetic data to tune and train LLMs. A common assumption about synthetic data is that sampling from instruct-tuned models is sufficient; however, these models struggle to produce diverse outputs, a key requirement for generalization. In this work we show that, despite various prompting methods, achieving meaningful diversity from instruct-tuned models remains challenging. In contrast, we find that base models without post-training exhibit greater diversity, but are less capable at instruction following and hence produce lower-quality outputs. Leveraging this insight, we propose Base-Refine (BARE), a synthetic data generation method that combines the diversity of base models with the quality of instruct-tuned models through a two-stage process. With minimal few-shot examples and curation, BARE generates diverse and high-quality datasets, improving downstream task performance. We show that fine-tuning with as few as 1,000 BARE-generated samples can reach performance comparable to the best similarly sized models on LiveCodeBench tasks. Furthermore, fine-tuning with BARE-generated data achieves a 101% improvement over instruct-only data on GSM8K and an 18.4% improvement over SOTA methods on RAFT.
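The abstract describes BARE as a two-stage pipeline: a base model drafts diverse candidates from a few-shot prompt, and an instruct-tuned model then refines each draft for quality. The sketch below only illustrates that pattern; the function names, prompt wording, and refinement instruction are assumptions for illustration, not the paper's actual code.

```python
# A minimal sketch of a BARE-style two-stage pipeline, not the authors' implementation.
# `base_complete` and `instruct_complete` are hypothetical wrappers around whatever
# inference API serves a base model and an instruct-tuned model, respectively.

def base_complete(prompt: str) -> str:
    """Hypothetical: return a raw continuation from a base (non-post-trained) LM."""
    raise NotImplementedError

def instruct_complete(prompt: str) -> str:
    """Hypothetical: return a response from an instruct-tuned LM."""
    raise NotImplementedError

# A handful of few-shot seed examples formatted for the base model (placeholder text).
FEW_SHOT_EXAMPLES = "Q: <example question>\nA: <example answer>\n\n"

def bare_generate(task_description: str, n_samples: int) -> list[str]:
    """Generate n_samples synthetic examples with the base-then-refine pattern."""
    outputs = []
    for _ in range(n_samples):
        # Stage 1: sample a diverse but possibly rough draft from the base model.
        draft = base_complete(FEW_SHOT_EXAMPLES + f"Q: {task_description}\nA:")
        # Stage 2: ask the instruct-tuned model to refine the draft for quality.
        refined = instruct_complete(
            "Improve the following draft so it answers the task correctly and clearly.\n"
            f"Task: {task_description}\nDraft: {draft}\nImproved answer:"
        )
        outputs.append(refined)
    return outputs
```

In practice, the refined samples would then be curated and used to fine-tune a downstream model, as the abstract describes for GSM8K, RAFT, and LiveCodeBench.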

Submission history

From: Parth Asawa
[v1] Mon, 3 Feb 2025 00:12:40 UTC (170 KB)
[v2] Wed, 5 Feb 2025 04:15:19 UTC (170 KB)
