Understanding and Leveraging the Expert Specialization of Context Faithfulness in Mixture-of-Experts LLMs, by Jun Bai and 4 other authors
Abstract: Context faithfulness is essential for reliable reasoning in context-dependent scenarios. However, large language models often struggle to ground their outputs in the provided context, resulting in irrelevant responses. Inspired by the emergent expert specialization observed in mixture-of-experts architectures, this work investigates whether certain experts exhibit specialization in context utilization, offering a potential pathway toward targeted optimization for improved context faithfulness. To explore this, we propose Router Lens, a method that accurately identifies context-faithful experts. Our analysis reveals that these experts progressively amplify attention to relevant contextual information, thereby enhancing context grounding. Building on this insight, we introduce Context-faithful Expert Fine-Tuning (CEFT), a lightweight optimization approach that selectively fine-tunes context-faithful experts. Experiments across a wide range of benchmarks and models demonstrate that CEFT matches or surpasses the performance of full fine-tuning while being significantly more efficient.
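The abstract does not give implementation details, but the core idea of CEFT (updating only a small set of identified experts while freezing the rest of the model) can be illustrated with a minimal sketch. This assumes a Mixtral-style MoE layout where experts live at `model.model.layers[i].block_sparse_moe.experts[j]`; the `(layer, expert)` pairs below are hypothetical placeholders standing in for the output of a Router-Lens-style identification step, not the paper's actual code or experts.

```python
# Minimal sketch of selective expert fine-tuning (CEFT-style); not the authors' implementation.
import torch
from transformers import AutoModelForCausalLM

# Assumed Mixtral-style MoE checkpoint; any model with a compatible expert layout would work.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.bfloat16
)

# Hypothetical (layer_index, expert_index) pairs identified as context-faithful.
context_faithful_experts = [(3, 1), (7, 5), (12, 2)]

# Freeze the entire model first.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze only the selected expert feed-forward blocks.
for layer_idx, expert_idx in context_faithful_experts:
    expert = model.model.layers[layer_idx].block_sparse_moe.experts[expert_idx]
    for p in expert.parameters():
        p.requires_grad = True

# The trainable parameter count is a small fraction of the full model,
# which is what makes this kind of targeted fine-tuning lightweight.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```

From here, standard supervised fine-tuning (e.g. with an optimizer over only the `requires_grad=True` parameters) proceeds as usual; only the selected experts' weights change.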
Submission history
From: Jun Bai
[v1] Wed, 27 Aug 2025 06:07:13 UTC (1,153 KB)
[v2] Tue, 16 Sep 2025 08:17:06 UTC (1,153 KB)