Objective: To develop an automatic image segmentation method for organs at risk (OAR) in head and neck carcinoma radiotherapy, based on multi-scale fusion and an attention mechanism.
Methods: We proposed a new OAR segmentation method for head and neck medical images based on the U-Net convolutional neural network. Spatial and channel squeeze-and-excitation (scSE) attention blocks were combined with the U-Net to strengthen its feature representation by increasing the weights of feature channels most relevant to the segmentation task. We also introduced a multi-scale feature-fusion block in the U-Net encoding stage to supplement feature information lost during downsampling. The Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD) were used as performance criteria for comparing deep learning models.
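The spatial-and-channel squeeze-and-excitation attention described above recalibrates a feature map along two complementary branches: a channel gate driven by global average pooling and a per-pixel spatial gate driven by a 1x1 convolution. The following is a minimal NumPy forward-pass sketch, not the authors' implementation; all weights and names (`w_sq`, `w_ex`, `w_spatial`) are illustrative placeholders that a trained network would learn.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scse_block(feat, w_sq, w_ex, w_spatial):
    """Sketch of a concurrent spatial & channel squeeze-and-excitation block.

    feat:      feature map of shape (C, H, W)
    w_sq:      (R, C) squeeze weights of the channel branch (R < C)
    w_ex:      (C, R) excitation weights of the channel branch
    w_spatial: (C,)   1x1-conv weights of the spatial branch
    """
    # Channel branch: global average pool -> squeeze (ReLU) -> excite -> gate
    z = feat.mean(axis=(1, 2))                            # (C,)
    gate_c = sigmoid(w_ex @ np.maximum(w_sq @ z, 0.0))    # (C,)
    out_c = feat * gate_c[:, None, None]
    # Spatial branch: 1x1 conv across channels -> per-pixel sigmoid gate
    gate_s = sigmoid(np.tensordot(w_spatial, feat, axes=1))  # (H, W)
    out_s = feat * gate_s[None, :, :]
    # Combine both recalibrations (element-wise max is one common choice)
    return np.maximum(out_c, out_s)
```

In a U-Net, such a block would typically follow each convolutional stage, leaving feature-map shapes unchanged so it can be inserted without altering the rest of the architecture.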
Results: Segmentation of 22 OAR in the head and neck was performed on the Medical Image Computing and Computer Assisted Intervention (MICCAI) StructSeg 2019 dataset. The proposed method improved the average segmentation accuracy by 3%-6% compared with existing methods. Across the 22 OAR, the average DSC was 78.90% and the average 95% HD was 6.23 mm.
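The DSC and 95% HD figures reported above can be computed from binary masks as follows. This is a brute-force NumPy sketch for small masks; production pipelines typically evaluate the HD on extracted surface points and scale by the CT voxel spacing, neither of which is done here.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance, in voxel units."""
    a = np.argwhere(pred)  # foreground coordinates of the prediction
    b = np.argwhere(gt)    # foreground coordinates of the ground truth
    # Pairwise Euclidean distances between the two point sets
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    d_ab = d.min(axis=1)   # each prediction point -> nearest ground-truth point
    d_ba = d.min(axis=0)   # each ground-truth point -> nearest prediction point
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

A higher DSC (maximum 1, i.e., 100%) indicates better overlap, while a lower 95% HD indicates better boundary agreement; the 95th percentile is used instead of the maximum to reduce sensitivity to outlier voxels.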
Conclusion: Automatic segmentation of head and neck OAR from CT using multi-scale fusion and an attention mechanism achieves high segmentation accuracy, which is promising for enhancing the accuracy and efficiency of radiotherapy in clinical practice.
Lin Xiaowei, Yang Ruijie, Li Ni, et al. Automatic segmentation of organs at risk in the head and neck combining multi-scale fusion and attention [J]. Chinese Journal of Radiation Oncology, 2023, 32(04): 319-324.
DOI: 10.3760/cma.j.cn113030-20220128-00047. Copyright © Chinese Medical Association.
Author contributions: Lin Xiaowei: algorithm design, model training, and manuscript writing; Yang Ruijie, Qi Qi, Li Ni: study design, construction of the problem model, validation of the algorithmic ideas, and manuscript revision.
