[2509.08814] Merge-of-Thought Distillation


By Zhanming Shen and 8 other authors

Abstract: Efficient reasoning distillation for long chain-of-thought (CoT) models is increasingly constrained by the assumption of a single oracle teacher, despite the practical availability of multiple candidate teachers and growing CoT corpora. We revisit teacher selection and observe that different students have different “best teachers,” and even for the same student the best teacher can vary across datasets. To unify the reasoning abilities of multiple teachers in a single student while resolving conflicts among their supervision signals, we propose Merge-of-Thought Distillation (MoT), a lightweight framework that alternates between teacher-specific supervised fine-tuning branches and weight-space merging of the resulting student variants. On competition math benchmarks, a Qwen3-14B student trained with MoT on only about 200 CoT samples surpasses strong models including DeepSeek-R1, Qwen3-32B, and OpenAI o1, demonstrating substantial gains. MoT also consistently outperforms the best single-teacher distillation, improves general reasoning beyond mathematics while reducing catastrophic forgetting, and remains robust to distribution-shifted and peer-level teachers. Finally, we show that MoT yields a consensus CoT: it removes teacher-specific inductive biases and inter-teacher conflicts while repeatedly reinforcing the reasoning features the teachers agree on. These results position MoT as a simple, effective route to efficiently distilling long-CoT capabilities from diverse teachers into compact students.
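
The abstract describes MoT as alternating between teacher-specific supervised fine-tuning branches and weight-space merging of the resulting student variants. The snippet below is only a minimal sketch of that loop, not the authors' implementation: the tiny regression model, the fine_tune_on_teacher and merge_weights helpers, and uniform parameter averaging as the merge operator are all illustrative assumptions.

```python
# Sketch of the MoT loop: per-teacher SFT branches, then weight-space merging,
# repeated for several rounds. All names and the toy model are placeholders.
import copy
import torch
import torch.nn as nn


def fine_tune_on_teacher(student: nn.Module, teacher_data, epochs: int = 1, lr: float = 1e-3):
    """Fine-tune a copy of the student on one teacher's data (a stand-in for SFT on CoT traces)."""
    branch = copy.deepcopy(student)            # branch off a teacher-specific variant
    optimizer = torch.optim.SGD(branch.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                     # stand-in for a token-level cross-entropy loss
    for _ in range(epochs):
        for x, y in teacher_data:
            optimizer.zero_grad()
            loss_fn(branch(x), y).backward()
            optimizer.step()
    return branch


def merge_weights(variants):
    """Weight-space merge: uniform average of the variants' parameters."""
    merged_state = copy.deepcopy(variants[0].state_dict())
    for key in merged_state:
        merged_state[key] = torch.stack(
            [v.state_dict()[key].float() for v in variants]
        ).mean(dim=0)
    merged = copy.deepcopy(variants[0])
    merged.load_state_dict(merged_state)
    return merged


# Toy setup: a tiny "student" and two synthetic "teacher" corpora.
torch.manual_seed(0)
student = nn.Linear(4, 1)
teacher_datasets = [
    [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(5)],  # teacher A
    [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(5)],  # teacher B
]

# MoT loop: alternate teacher-specific SFT branches with weight-space merging.
for round_idx in range(3):
    variants = [fine_tune_on_teacher(student, data) for data in teacher_datasets]
    student = merge_weights(variants)
    total_norm = sum(p.norm().item() for p in student.parameters())
    print(f"round {round_idx}: merged student parameter norm {total_norm:.3f}")
```

In the paper's setting the student would be a Qwen3-scale language model trained on each teacher's CoT corpus with a standard SFT objective, and the merge step could use any weight-space merging scheme; uniform averaging is simply the most basic choice for illustration.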

Submission history

From: ZhanMing Shen
[v1]
Wed, 10 Sep 2025 17:46:57 UTC (672 KB)
[v2]
Thu, 11 Sep 2025 03:32:59 UTC (672 KB)
[v3]
Thu, 16 Oct 2025 15:43:35 UTC (2,443 KB)
