AI Treatment Recommendations May Reflect Socioeconomic Bias, Study Finds

Artificial intelligence tools used in healthcare may recommend different treatments for identical medical conditions based solely on a patient’s socioeconomic and demographic background, according to a new study published in Nature Medicine.

Researchers at the Icahn School of Medicine at Mount Sinai created profiles for 36 fictional patients and asked nine AI healthcare models how to manage each one across 1,000 emergency room scenarios. Although the clinical details were identical, the models’ recommendations varied with patients’ personal characteristics, affecting decisions on triage, diagnostic testing, treatment plans, and mental health assessments.
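The study’s basic audit design (the same clinical vignette presented under different demographic labels, with the resulting recommendations compared) can be sketched in a few lines of Python. This is a simplified illustration only: `query_model` is a hypothetical stand-in with a toy rule, not any of the nine models or the study’s actual prompts.

```python
# Minimal sketch of a demographic-bias audit: send identical clinical
# vignettes that differ only in a demographic descriptor, then compare
# the recommendations the model returns for each variant.

CLINICAL_CASE = "45-year-old presenting with acute chest pain and shortness of breath"
INCOME_LEVELS = ["high-income", "low-income"]

def query_model(vignette: str) -> str:
    """Hypothetical stand-in for the AI model under audit.

    A real audit would send the vignette to each model being tested;
    this toy rule merely reproduces the kind of disparity the study
    reports, for illustration.
    """
    if "high-income" in vignette:
        return "order CT scan"
    return "no further testing"

def audit(case: str, attributes: list[str]) -> dict[str, str]:
    """Query the model with each demographic variant of the same case."""
    return {attr: query_model(f"{attr} {case}") for attr in attributes}

results = audit(CLINICAL_CASE, INCOME_LEVELS)

# If the recommendations differ, the only possible cause is the
# demographic label, since the clinical content is identical.
biased = len(set(results.values())) > 1
```

Scaling this loop over many vignettes, demographic attributes, and models yields the kind of systematic comparison the researchers performed.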
In one notable finding, high-income patients were more likely to be advised to undergo advanced diagnostics such as CT scans and MRIs, while low-income patients were often told that no further testing was necessary, mirroring real-world healthcare disparities. These discrepancies were found across both proprietary and open-source AI systems.

“AI can transform healthcare, but only if used ethically and transparently,” said study co-lead Dr. Girish Nadkarni. Coauthor Dr. Eyal Klang emphasized the need for oversight to prevent algorithmic bias and ensure equitable care.