A Systematic Review on Evaluating Bias and Equity in Large Language Model (LLM) Applications for Patient Communication in Healthcare

Date

2025

Publisher

Department of Computer Science & Engineering

Abstract

The rapid advancement of AI has transformed healthcare, with Large Language Models (LLMs) such as ChatGPT and Med-PaLM enhancing patient communication by automating responses, summarizing medical documents, and aiding clinical decision-making, particularly in resource-limited settings. However, biases and inequities remain critical challenges that undermine the effectiveness and fairness of these systems. Systemic biases can produce culturally inappropriate responses, exacerbate healthcare disparities, and marginalize certain groups. Structural and economic factors further compound these issues, making demographic, cultural, and linguistic discrimination a matter of urgent attention. This research systematically evaluates these biases, their consequences, and potential solutions, and offers recommendations to make AI-driven healthcare communication systems more just, efficient, and reliable.
