With the widespread application of artificial intelligence (AI) in healthcare, algorithmic bias has become a significant issue affecting fairness in medical care. Health data form the foundation for training AI models; when these data are biased, they not only reduce model accuracy but can also exacerbate health inequalities. If health data fail to adequately represent all social groups, particularly marginalized populations such as ethnic minorities, older adults, and people with disabilities, AI models trained on such data may produce unfair decisions. To address this problem, the STANDING Together consensus has proposed a series of recommendations aimed at improving the transparency of health data and assessing the fairness of datasets. Drawing on an analysis of the relevant literature, this paper examines bias in health data and discusses the potential impact of the STANDING Together recommendations. The paper argues that improving the diversity and transparency of health data, together with proactive evaluation of dataset fairness, can help mitigate algorithmic bias, advance equity in healthcare, and deliver fairer medical services to all population groups.
Sun Q, Lyu H, Wang ZC. Eliminating algorithmic bias in health data: insights from the STANDING Together consensus [J]. Digital Medicine and Health, 2025, 3(1): 10-11.
DOI: 10.3760/cma.j.cn101909-20241231-00240
Copyright © Chinese Medical Association. All rights reserved.
No article published in this journal may be reproduced or excerpted without authorization, nor may the journal's page design be used.
Unless otherwise stated, the articles published in this journal do not represent the views of the Chinese Medical Association or the journal's editorial board.
