%0 Journal Article
%T DouDil-UNet++: A Network Model for Retinal Vessel Segmentation Based on Dil-UNet++ Network with Double-Branch Encoder
%A 米文辉
%A 佘海州
%A 李鹤
%J Journal of Image and Signal Processing
%P 311-327
%@ 2325-6745
%D 2024
%I Hans Publishing
%R 10.12677/jisp.2024.133027
%X Accurate segmentation of retinal vessels is of great significance in assisting clinicians with the diagnosis of ophthalmic diseases. To address the low contrast, heavy noise interference, and indistinct vessel details of retinal fundus images, this paper improves on the Dil-UNet++ network and proposes the DouDil-UNet++ network for accurate retinal vessel segmentation. DouDil-UNet++ adopts a U-shaped structure with a dual-branch encoder: Dil-UNet++ serves as the main segmentation network, extracting spatial detail features from retinal vessel images, while Tr-Net serves as the auxiliary segmentation network, extracting global semantic features. Within the Tr-Net branch, a feature serialization module composed of five convolutional layers and a Transformer feature-extraction module with edge multi-head attention capture the global feature information of the images; a feature fusion module then aggregates the features extracted by the main and auxiliary segmentation networks. To verify the segmentation performance, experiments were conducted on the DRIVE and STARE retinal vessel datasets. On DRIVE, the model achieves a Dice coefficient of 87.93%, an accuracy of 96.39%, and a precision of 93.52%; on STARE, it achieves a Dice coefficient of 88.71%, an accuracy of 97.79%, and a precision of 87.08%. The results indicate that the proposed network performs well on retinal vessel segmentation tasks and has practical value.
%K Dil-UNet++
%K Attention Module
%K Transformer
%K Retinal Vessel Segmentation
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=91418
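
Note: the following PyTorch sketch only illustrates the dual-branch encoder idea summarized in the abstract, under stated assumptions. ConvBranch, SerializationModule, TransformerBranch, and FusionSegNet are hypothetical names; the two branches are simplified stand-ins for Dil-UNet++ and Tr-Net, and a standard multi-head attention Transformer replaces the paper's edge multi-head attention. It is not the authors' implementation.

# A minimal sketch of the dual-branch design: a CNN branch for spatial detail,
# a serialize-then-Transformer branch for global context, and a fusion head.
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    """Stand-in for the Dil-UNet++ main branch: extracts spatial detail features."""
    def __init__(self, in_ch=3, out_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)  # (B, out_ch, H, W)


class SerializationModule(nn.Module):
    """Five convolutional layers that downsample the image into a token sequence,
    mirroring the abstract's feature serialization module (layer sizes assumed)."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        chs = [in_ch, 16, 32, 48, 64, dim]
        layers = []
        for i in range(5):  # five conv layers, each halving the spatial size
            layers += [nn.Conv2d(chs[i], chs[i + 1], 3, stride=2, padding=1), nn.ReLU(inplace=True)]
        self.convs = nn.Sequential(*layers)

    def forward(self, x):
        f = self.convs(x)  # (B, dim, H/32, W/32)
        return f.flatten(2).transpose(1, 2), f.shape[2:]  # tokens (B, N, dim), grid size


class TransformerBranch(nn.Module):
    """Stand-in for Tr-Net: serialized tokens pass through a Transformer encoder
    (standard multi-head attention here, not the paper's edge multi-head attention)."""
    def __init__(self, dim=64, heads=4, depth=2):
        super().__init__()
        self.serialize = SerializationModule(dim=dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens, (h, w) = self.serialize(x)
        tokens = self.encoder(tokens)  # global semantic features
        b, n, d = tokens.shape
        return tokens.transpose(1, 2).reshape(b, d, h, w)


class FusionSegNet(nn.Module):
    """Dual-branch model: upsample the Transformer features back to image
    resolution, concatenate with the CNN features, and fuse with 1x1 convs."""
    def __init__(self, dim=64, n_classes=1):
        super().__init__()
        self.cnn = ConvBranch(out_ch=dim)
        self.transformer = TransformerBranch(dim=dim)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * dim, dim, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, n_classes, 1),
        )

    def forward(self, x):
        local_feat = self.cnn(x)
        global_feat = self.transformer(x)
        global_feat = nn.functional.interpolate(
            global_feat, size=local_feat.shape[2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))  # vessel logits


if __name__ == "__main__":
    model = FusionSegNet()
    out = model(torch.randn(1, 3, 64, 64))  # e.g. a 64x64 fundus patch
    print(out.shape)                        # torch.Size([1, 1, 64, 64])

The key design point the abstract highlights is that the two branches are complementary: the convolutional branch preserves thin-vessel detail at full resolution, while the Transformer branch supplies global context from a coarse token grid, and the fusion module reconciles the two before prediction.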