Enhancing Reasoning Skills in Small Persian Medical Language Models Can Outperform Large-Scale Data Training
Mehrdad Ghassabi, Sadra Hakim, Hamidreza Baradaran Kashani, and Pedram Rostami
Abstract: Enhancing reasoning capabilities in small language models is critical for specialized applications such as medical question answering, particularly in underrepresented languages like Persian. In this study, we employ Reinforcement Learning with AI Feedback (RLAIF) and Direct Preference Optimization (DPO) to improve the reasoning skills of a general-purpose Persian language model. To achieve this, we translated a multiple-choice medical question-answering dataset into Persian and used RLAIF to generate preferred/rejected answer pairs, which are essential for DPO training. By prompting both teacher and student models to produce Chain-of-Thought (CoT) reasoning responses, we compiled a dataset containing correct and incorrect reasoning trajectories. This dataset, comprising 2 million tokens in preferred answers and 2.5 million tokens in rejected ones, was used to train a baseline model, significantly enhancing its medical reasoning capabilities in Persian. Remarkably, despite using a much smaller dataset, the resulting model outperformed its predecessor, gaokerena-V, which was trained on approximately 57 million tokens. These results highlight the efficiency and effectiveness of reasoning-focused training approaches in developing domain-specific language models with limited data availability.
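The abstract does not include code, but the described recipe maps onto a standard DPO setup. Below is a minimal, illustrative sketch (a PyTorch-style implementation with names of our own choosing, not the authors' released code) of how RLAIF-graded Chain-of-Thought responses could be assembled into preferred/rejected pairs and scored with the DPO objective.

```python
# Illustrative sketch only; not the authors' code.
# Assumes per-sequence log-probabilities have already been computed
# under the policy (student) model and a frozen reference model.
import torch.nn.functional as F


def build_preference_pairs(question, cot_responses, gold_choice):
    """Pair each correct CoT response (preferred) with an incorrect one
    (rejected) for the same multiple-choice question, as in RLAIF-style
    preference-data construction."""
    correct = [r for r in cot_responses if r["final_choice"] == gold_choice]
    incorrect = [r for r in cot_responses if r["final_choice"] != gold_choice]
    return [
        {"prompt": question, "chosen": c["text"], "rejected": r["text"]}
        for c, r in zip(correct, incorrect)
    ]


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective: widen the policy's log-probability margin
    between preferred and rejected answers relative to the reference model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In practice, pairs in this {prompt, chosen, rejected} form would typically be fed to an off-the-shelf DPO trainer rather than a hand-rolled loss; the beta value shown here is a common default, not a setting reported in the paper.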
Submission history
From: Mehrdad Ghassabi
[v1] Wed, 22 Oct 2025 22:22:59 UTC (268 KB)
[v2] Thu, 30 Oct 2025 17:28:47 UTC (299 KB)
[v3] Tue, 25 Nov 2025 08:37:33 UTC (299 KB)
[v4] Wed, 10 Dec 2025 16:00:11 UTC (242 KB)