Generative AI Shows Diagnostic Accuracy Comparable to Non-Specialist Physicians, Study Finds

A new comprehensive study has revealed that generative artificial intelligence (AI) models are approaching the diagnostic accuracy of non-specialist medical practitioners. This finding stems from a meta-analysis conducted by a team of researchers led by Dr. Hirotaka Takita and Associate Professor Daiju Ueda from the Graduate School of Medicine at Osaka Metropolitan University.

The team analyzed 83 academic papers, published between June 2018 and June 2024, that examined the diagnostic performance of generative AI across a range of medical specialties. Among the large language models (LLMs) reviewed, ChatGPT was the most frequently evaluated.

Because evaluation criteria varied from study to study, a unified analysis was needed to gauge the real-world utility of generative AI in clinical settings. This meta-analysis provides that unified perspective, offering insight into the strengths and limitations of AI relative to human physicians.

The findings indicate that generative AI models achieved an overall diagnostic accuracy of 52.1%, while medical specialists still outperformed them by an average margin of 15.8%. Notably, the newest models demonstrated performance comparable to that of physicians without specialist training.
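For context on how a single pooled accuracy figure can emerge from dozens of heterogeneous studies, here is a minimal sketch in Python. The study labels, case counts, and the simple case-weighted pooling are illustrative assumptions only, not the paper's actual data or statistical method (published meta-analyses typically use more sophisticated random-effects models).

```python
# Hypothetical per-study results: (label, correct AI diagnoses, total cases).
# These numbers are made up for illustration and do not come from the study.
studies = [
    ("study_A", 41, 80),
    ("study_B", 55, 100),
    ("study_C", 30, 60),
]

# Pool accuracy by weighting each study by its number of cases.
total_correct = sum(correct for _, correct, _ in studies)
total_cases = sum(n for _, _, n in studies)
pooled_ai_accuracy = total_correct / total_cases  # ~0.525 for these numbers

# Illustrative comparison point: specialists outperforming by 15.8% on average.
specialist_accuracy = pooled_ai_accuracy + 0.158

print(f"Pooled AI accuracy:               {pooled_ai_accuracy:.1%}")
print(f"Illustrative specialist accuracy: {specialist_accuracy:.1%}")
```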

“This research shows that generative AI's diagnostic capabilities are comparable to non-specialist doctors,” explained Dr. Takita. “It could be used in medical education to support non-specialist doctors and assist in diagnostics in areas with limited medical resources.”

Despite the promising results, the researchers emphasized the need for continued investigation. Future studies should evaluate AI performance in more complex clinical scenarios, test the models on real-world medical records, increase transparency in AI decision-making, and validate the findings across diverse patient populations.

These steps are essential to further refine the role of AI in healthcare and to confirm its reliability and effectiveness in everyday medical practice.

Source: https://www.sciencedaily.com/releases/2025/04/250418112808.html
