
AI Displays Racial Bias Evaluating Mental Health Cases
  • Posted July 9, 2025

AI programs can exhibit racial bias when evaluating patients for mental health problems, a new study says.

Psychiatric recommendations from four large language models (LLMs) changed when a patient’s record noted they were African American, researchers recently reported in the journal NPJ Digital Medicine.

“Most of the LLMs exhibited some form of bias when dealing with African American patients, at times making dramatically different recommendations for the same psychiatric illness and otherwise identical patient,” said senior researcher Elias Aboujaoude, director of the Program in Internet, Health and Society at Cedars-Sinai in Los Angeles.

“This bias was most evident in cases of schizophrenia and anxiety,” Aboujaoude added in a news release.

LLMs are trained on enormous amounts of data, which enables them to understand and generate human language, researchers said in background notes.

These AI programs are being tested for their potential to quickly evaluate patients and recommend diagnoses and treatments, researchers said.

For this study, researchers ran 10 hypothetical cases through four popular LLMs: ChatGPT-4o, Google’s Gemini 1.5 Pro, Claude 3.5 Sonnet, and NewMes-v15, a freely available version of a Meta LLM.

For each case, the AI programs received three versions of the patient record: one that omitted any reference to race, one that explicitly noted the patient was African American, and one that implied the patient’s race through their name.

The AI programs often proposed different treatments when the records said or implied that a patient was African American, results show:

  • Two programs omitted medication recommendations for ADHD when race was explicitly stated.

  • Another AI suggested guardianship for African American patients with depression.

  • One LLM showed increased focus on reducing alcohol use when evaluating African Americans with anxiety.

Aboujaoude theorizes that the AIs displayed racial bias because they picked it up from the content used to train them, essentially perpetuating inequalities that already exist in mental health care.

“The findings of this important study serve as a call to action for stakeholders across the healthcare ecosystem to ensure that LLM technologies enhance health equity rather than reproduce or worsen existing inequities,” David Underhill, chair of biomedical sciences at Cedars-Sinai, said in a news release.

“Until that goal is reached, such systems should be deployed with caution and consideration for how even subtle racial characteristics may affect their judgment,” added Underhill, who was not involved in the research.

More information

The Cleveland Clinic has more on AI in health care.

SOURCE: Cedars-Sinai, news release, June 30, 2025

HealthDay
Copyright © 2025 HealthDay. All Rights Reserved.