Despite their usefulness, large language models still have a reliability problem. A new study shows that a team of AIs working together can score up to 97 percent on US medical licensing exams, outperforming any single AI.
While recent progress in large language models (LLMs) has led to systems capable of passing professional and academic tests, their performance remains inconsistent. They're still prone to hallucinations (plausible-sounding but incorrect statements), which has limited their use in high-stakes areas like medicine and finance.
Nonetheless, LLMs have scored impressive results on medical exams, suggesting the technology could be useful in this area if their inconsistencies can be controlled. Now, researchers have shown that getting a "council" of five AI models to deliberate over their answers rather than working alone can lead to record-breaking scores on the US Medical Licensing Examination (USMLE).
"Our study shows that when multiple AIs deliberate together, they achieve the highest-ever performance on medical licensing exams," Yahya Shaikh, from Johns Hopkins University, said in a press release. "This demonstrates the power of collaboration and dialogue between AI systems to reach more accurate and reliable answers."
The researchers' approach takes advantage of a quirk in the models, rooted in the non-deterministic way they come up with responses. Ask the same model the same medical question twice, and it might produce two different answers: sometimes correct, sometimes not.
In a paper in PLOS Medicine, the team describes how they harnessed this characteristic to create their AI "council." They spun up five instances of OpenAI's GPT-4 and prompted them to discuss answers to each question in a structured exchange overseen by a facilitator algorithm.
When their responses diverged, the facilitator summarized the differing rationales and got the group to reconsider the answer, repeating the process until consensus emerged.
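To get a feel for how such a deliberation loop can work, here's a minimal Python sketch. To be clear, this is not the team's code: the `ask_model` callable, the prompts, and the round cap are illustrative assumptions, and where the paper's facilitator summarized each model's differing rationale, this sketch only feeds back the vote tally.

```python
# Minimal sketch of an "AI council" consensus loop (illustrative, not the
# published implementation). ask_model is a placeholder for an LLM API call.
from collections import Counter
from typing import Callable
import random


def council_answer(
    question: str,
    ask_model: Callable[[str], str],  # wraps one model instance's API call
    n_members: int = 5,
    max_rounds: int = 5,
) -> str:
    """Poll n_members instances; on disagreement, a 'facilitator' feeds the
    split back and re-asks, repeating until consensus or max_rounds."""
    prompt = question
    for _ in range(max_rounds):
        answers = [ask_model(prompt) for _ in range(n_members)]
        tally = Counter(answers)
        if len(tally) == 1:  # unanimous: consensus reached
            return answers[0]
        # Facilitator step: summarize the disagreement and ask for another round.
        # (The paper's facilitator summarized rationales; this only tallies votes.)
        summary = "; ".join(f"{ans!r} ({n} vote(s))" for ans, n in tally.items())
        prompt = (
            f"{question}\n\nThe council disagreed last round: {summary}. "
            "Reconsider and give your best answer."
        )
    return tally.most_common(1)[0][0]  # no consensus: fall back to majority


# Toy demo: a fake "model" that answers 'B' 80% of the time, 'A' otherwise.
if __name__ == "__main__":
    random.seed(0)
    fake_model = lambda _prompt: random.choices(["A", "B"], weights=[1, 4])[0]
    print(council_answer("Which option is correct?", fake_model))
```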
When tested on 325 publicly available questions from the three stages of the USMLE, the AI council achieved 97 percent, 93 percent, and 94 percent accuracy, respectively. These scores not only exceed the performance of any individual GPT-4 instance but also surpass the average human passing thresholds for the same tests.
"Our work provides the first clear evidence that AI systems can self-correct through structured dialogue, with a performance of the collective better than the performance of any single AI," says Shaikh.
In a testament to the effectiveness of the approach, when the models initially disagreed, the deliberation process corrected more than half of their earlier errors. Overall, the council reached the correct conclusion 83 percent of the time when there wasn't a unanimous initial answer.
"This study isn't about evaluating AI's USMLE test-taking prowess," co-author Zishan Siddiqui, also from Johns Hopkins, said in the press release. "We describe a method that improves accuracy by treating AI's natural response variability as a strength. It allows the system to take a few tries, compare notes, and self-correct, and it should be built into future tools for education and, where appropriate, clinical care."
The team notes that their results come from controlled testing, not real-world clinical environments, so there's a long way to go before the AI council could be deployed in the real world. But they suggest that the approach could prove useful in other domains as well.
It seems like the old adage that two heads are better than one remains true even when those heads aren't human.