With the continued development and application of large language models (LLMs), the problems caused by model hallucinations have become increasingly apparent and are especially dangerous in the medical field. A better understanding of the causes of hallucinations, and of ways to mitigate them, is essential for deploying and promoting medical LLMs. Based on a literature review and practical experience, this paper elaborates on the sources, types, and evaluation of hallucinations in LLMs, and discusses countermeasures that can be taken during the generation and training stages to mitigate them. Practice has shown that, in medical scenarios, retrieval-augmented generation (RAG) is an important means of reducing hallucinations. Medical LLMs have broad application prospects; continuous innovation is needed to further mitigate hallucinations, improve the accuracy and computational performance of LLMs, and contribute to the advancement of healthcare and the realization of the Healthy China strategy.
黄子扬, 董超, 姜会珍, et al. Research and practice on hallucinations in medical large language models and coping strategies[J]. 数字医学与健康, 2025, 3(1): 54-58.
DOI: 10.3760/cma.j.cn101909-20240510-00102

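As a minimal illustration of the retrieval-augmented generation (RAG) approach highlighted in the abstract, the sketch below shows the basic pattern of grounding a medical question in retrieved evidence before the model answers. All names here (the toy KNOWLEDGE_BASE, retrieve, build_prompt) are illustrative assumptions, not code or an API from the article; a real system would use a vetted medical corpus and embedding-based retrieval.

```python
"""Minimal RAG sketch for a medical Q&A setting (illustrative only)."""

from collections import Counter

# Toy "knowledge base": in practice this would be a vetted corpus of guidelines,
# drug labels, or hospital documents held in a proper vector index.
KNOWLEDGE_BASE = [
    "Metformin is a first-line oral agent for type 2 diabetes mellitus.",
    "Amoxicillin is a penicillin-class antibiotic; ask about penicillin allergy before use.",
    "Warfarin requires regular INR monitoring to keep anticoagulation in range.",
]


def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank passages by simple word overlap with the question (a stand-in for
    embedding similarity search) and return the top_k matching passages."""
    q_words = Counter(question.lower().split())
    scored = [
        (sum(q_words[w] for w in passage.lower().split()), passage)
        for passage in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [passage for score, passage in scored[:top_k] if score > 0]


def build_prompt(question: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved evidence; this
    grounding is the core mechanism by which RAG reduces hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the evidence below. "
        "If the evidence is insufficient, say so.\n"
        f"Evidence:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    question = "What monitoring does warfarin require?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)  # In a real system this grounded prompt is sent to the medical LLM.
```

The design point is that the model is constrained to the retrieved passages and asked to abstain when the evidence is insufficient, rather than relying on its parametric memory alone.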