Standards and Norms
Eliminating algorithmic bias in health data: insights from the STANDING Together consensus
Sun Qi
Lyu Han
Wang Zhenchang
Authors Info & Affiliations
Sun Qi
Department of Radiology, Beijing Friendship Hospital, Capital Medical University & Precision and Intelligence Medical Imaging Lab, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
Lyu Han
Department of Radiology, Beijing Friendship Hospital, Capital Medical University & Precision and Intelligence Medical Imaging Lab, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
Wang Zhenchang
Department of Radiology, Beijing Friendship Hospital, Capital Medical University & Precision and Intelligence Medical Imaging Lab, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
DOI: 10.3760/cma.j.cn101909-20241231-00240
ABSTRACT

With the widespread application of artificial intelligence (AI) in healthcare, algorithmic bias has become a significant issue affecting medical fairness. Because health data form the foundation for training AI models, bias in these data can reduce model accuracy and exacerbate health inequalities. When health data fail to represent all social groups, including minorities, the elderly, and people with disabilities, AI models trained on such data may produce unfair decisions. To address this issue, the STANDING Together consensus has proposed a series of recommendations aimed at enhancing the transparency of health data and actively assessing the fairness of datasets. By analyzing the relevant literature, this paper explores bias in health data and discusses the potential impact of the STANDING Together consensus recommendations. The paper argues that improving the diversity and transparency of health data, together with proactively evaluating dataset fairness, will help eliminate algorithmic bias and promote fairness in healthcare, thereby providing more equitable medical services to all groups.

Algorithmic ethics; Health data; Artificial intelligence; STANDING Together consensus; Data transparency; Equitable healthcare
Wang Zhenchang, Email: mocdef.3ab61.pivhchzw.rjc
Cite this article

Sun Q, Lyu H, Wang ZC. Eliminating algorithmic bias in health data: insights from the STANDING Together consensus[J]. 数字医学与健康, 2025, 3(1): 10-11. DOI: 10.3760/cma.j.cn101909-20241231-00240.

With the widespread application of artificial intelligence (AI) in healthcare, algorithmic bias has become an increasingly prominent problem and a key obstacle to equitable healthcare and to the digital transformation of medicine. Because health data form the foundation for training AI models, bias in these data not only undermines model accuracy but may also exacerbate health inequalities. Eliminating algorithmic bias from health data is therefore essential to achieving equitable healthcare. To address this problem, the STANDING Together consensus [1] puts forward 29 recommendations aimed at improving the transparency of health data and assessing the fairness of datasets. Drawing on the relevant literature, this article examines bias in health data and analyzes the potential impact of the STANDING Together consensus recommendations.
References
[1] Alderman JE, Palmer J, Laws E, et al. Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations[J]. Lancet Digit Health, 2025, 7(1): e64-e88. DOI: 10.1016/S2589-7500(24)00224-3.
[2] Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations[J]. Science, 2019, 366(6464): 447-453. DOI: 10.1126/science.aax2342.
[3] Guidotti R, Monreale A, Ruggieri S, et al. A survey of methods for explaining black box models[J]. ACM Computing Surveys, 2018, 51(5): 1-42. DOI: 10.1145/3236009.
[4] Chen L, Zeng K, Li S, et al. Causes and countermeasures of algorithmic bias and health inequity[J]. Chinese General Practice, 2023, 26(19): 2423-2427. DOI: 10.12114/j.issn.1007-9572.2023.0007.
[5] Coots M, Linn KA, Goel S, et al. Racial bias in clinical and population health algorithms: a critical review of current debates[J]. Annual Review of Public Health, 2025. DOI: 10.1146/annurev-publhealth-071823-112058.
Notes

All authors declare no conflicts of interest.