
Computed-Tomography Image Segmentation of Cerebral Hemorrhage Based on Improved U-shaped Neural Network

HU Min, ZHOU Xiudong, HUANG Hongcheng, ZHANG Guanghua, TAO Yang

Citation: HU Min, ZHOU Xiudong, HUANG Hongcheng, ZHANG Guanghua, TAO Yang. Computed-Tomography Image Segmentation of Cerebral Hemorrhage Based on Improved U-shaped Neural Network[J]. Journal of Electronics & Information Technology, 2022, 44(1): 127-137. doi: 10.11999/JEIT200996


doi: 10.11999/JEIT200996
基金項(xiàng)目: 國(guó)家重點(diǎn)研發(fā)計(jì)劃(2019YFB2102001),山西省回國(guó)留學(xué)人員科研項(xiàng)目(2020-149)
詳細(xì)信息
    作者簡(jiǎn)介:

    胡敏:女,1971年生,教授,研究方向?yàn)閿?shù)字媒體技術(shù)、人機(jī)交互理論與技術(shù)應(yīng)用

    周秀東:男,1995年生,碩士生,研究方向?yàn)橹悄芏嗝襟w信息處理

    黃宏程:男,1979年生,副教授,研究方向?yàn)槿藱C(jī)融合計(jì)算智能、智能多媒體信息處理

    張光華:男,1986年生,副教授,主要研究方向?yàn)榱孔狱c(diǎn)微型多光譜成像技術(shù)、多光譜圖像處理、醫(yī)學(xué)圖像處理

    陶洋:男,1964年生,教授,研究方向?yàn)槿斯ぶ悄?、大?shù)據(jù)與計(jì)算智能

    通訊作者:

    黃宏程 ? ?huanghc@cqupt.edu.cn

  • CLC number: TN911.73; TP391.41

  • Abstract: To address the low segmentation accuracy caused by the multi-scale nature of lesion regions in cerebral hemorrhage CT images, this paper proposes an image segmentation model based on an improved U-shaped neural network (AU-Net+). First, the model uses the U-Net encoder to encode the features of cerebral hemorrhage CT images and applies the proposed Residual Octave Convolution (ROC) block to the skip connections of the U-shaped network, so that features from different levels are fused more effectively. Second, a mixed attention mechanism is applied to the fused features to improve the extraction of features in the target region. Finally, an improved Dice loss function further strengthens the model's feature learning for small target regions in cerebral hemorrhage CT images. To verify the effectiveness of the model, experiments are conducted on a cerebral hemorrhage CT image dataset. Compared with U-Net, Attention U-Net, UNet++, and CE-Net, the mIoU metric improves by 20.9%, 3.6%, 7.0%, and 3.1%, respectively, indicating that AU-Net+ achieves better segmentation performance.
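The improved Dice loss is not reproduced on this page, but Fig. 13 studies the effect of an exponent applied to ${y_{{\rm{pred}}}}$, which points to a soft-Dice loss whose prediction term is raised to a power so that small, low-confidence hemorrhage regions weigh more heavily in training. A minimal TensorFlow sketch of that idea (the Keras-style layer names in Table 1 suggest a TensorFlow implementation, though that choice is ours); the function name, the placement of the exponent, and the default `gamma` are assumptions, not the paper's formula:

```python
import tensorflow as tf

def exponent_dice_loss(y_true, y_pred, gamma=1.5, eps=1e-6):
    """Soft Dice loss with an exponent on the prediction (a sketch).

    Raising y_pred to the power gamma in the denominator changes how
    strongly low-confidence pixels are penalized, which is the effect
    Fig. 13 studies; gamma=1.5 is an assumed value, not the paper's.
    """
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    denominator = tf.reduce_sum(y_true) + tf.reduce_sum(tf.pow(y_pred, gamma))
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice
```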
  • Fig. 1  AU-Net+ network framework

    Fig. 2  Mixed attention mechanism

    Fig. 3  Position attention mechanism

    Fig. 4  Channel attention mechanism

    Fig. 5  Octave convolution computation

    Fig. 6  Residual Octave Convolution (ROC) block (a sketch of Figs. 5 and 6 follows this list)

    Fig. 7  Flowchart of the experiments

    Fig. 8  Comparison of preprocessing results

    Fig. 9  Segmentation results for typical cases

    Fig. 10  AU-Net+ training curves

    Fig. 11  Segmentation results

    Fig. 12  Segmentation results of the experiments

    Fig. 13  Effect of the exponent of ${y_{{\rm{pred}}}}$ on segmentation
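Figs. 5 and 6 survive here only as captions, but octave convolution itself is defined in Chen et al. [19]: the feature map is split into a full-resolution high-frequency part and a half-resolution low-frequency part, and four convolution paths (high→high, high→low, low→high, low→low) exchange information between the two octaves. Below is a minimal Keras sketch of one octave convolution step and a residual wrapper consistent with the ROC caption; the internal layout, `alpha=0.5`, and all names are our assumptions rather than the paper's exact block:

```python
from tensorflow.keras import layers

def octave_conv(x_high, x_low, filters, alpha=0.5, kernel_size=3):
    """One octave convolution step (after Chen et al. [19]).

    The low-frequency branch runs at half the spatial resolution of the
    high-frequency branch; four paths exchange information between the
    two octaves. alpha is the low-frequency channel ratio (assumed 0.5).
    """
    low_ch = int(filters * alpha)
    high_ch = filters - low_ch
    conv = lambda ch: layers.Conv2D(ch, kernel_size, padding="same")

    h2h = conv(high_ch)(x_high)                             # high -> high
    h2l = conv(low_ch)(layers.AveragePooling2D(2)(x_high))  # high -> low
    l2l = conv(low_ch)(x_low)                               # low  -> low
    l2h = layers.UpSampling2D(2)(conv(high_ch)(x_low))      # low  -> high

    return layers.Add()([h2h, l2h]), layers.Add()([h2l, l2l])

def roc_block(x, filters, alpha=0.5):
    """Residual Octave Convolution (ROC) block as we read Fig. 6:
    split the input into two octaves, apply an octave convolution,
    merge the octaves, and add a 1x1-projected shortcut. The exact
    internal layout is an assumption, not the paper's verified block."""
    low_ch = int(filters * alpha)
    x_high = layers.Conv2D(filters - low_ch, 1, padding="same")(x)
    x_low = layers.Conv2D(low_ch, 1, padding="same")(
        layers.AveragePooling2D(2)(x))
    y_high, y_low = octave_conv(x_high, x_low, filters, alpha)
    y = layers.Concatenate()([y_high, layers.UpSampling2D(2)(y_low)])
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    return layers.ReLU()(layers.Add()([y, shortcut]))
```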

    Table 1  AU-Net+ network structure

    Encoder–Decoder                   Skip connection
    conv2d_1 (UConv2D)                up_sampling2d_4 (Conv2DTrans)
    max_pooling2d_1 (MaxPooling2D)    concatenate_4 (Concatenate)
    conv2d_2 (UConv2D)                roc_1 (Roc)
    max_pooling2d_2 (MaxPooling2D)    up_sampling2d_5 (Conv2DTrans)
    conv2d_3 (UConv2D)                concatenate_5 (Concatenate)
    max_pooling2d_3 (MaxPooling2D)    roc_2 (Roc)
    conv2d_4 (UConv2D)                up_sampling2d_6 (Conv2DTrans)
    dropout_1 (Dropout)               add_1 (Add)
    up_sampling2d_1 (Conv2DTrans)     att_1 (Attention)
    concatenate_1 (Concatenate)       up_sampling2d_7 (Conv2DTrans)
    conv2d_5 (UConv2D)                concatenate_6 (Concatenate)
    up_sampling2d_2 (Conv2DTrans)     roc_3 (Roc)
    concatenate_2 (Concatenate)       up_sampling2d_8 (Conv2DTrans)
    conv2d_6 (UConv2D)                add_2 (Add)
    up_sampling2d_3 (Conv2DTrans)     att_2 (Attention)
    concatenate_3 (Concatenate)       up_sampling2d_9 (Conv2DTrans)
    conv2d_7 (UConv2D)                add_3 (Add)
    conv2d_8 (EConv2D)                att_3 (Attention)
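Table 1's skip-connection column applies an Attention layer (att_1–att_3) after each feature fusion; Figs. 2–4 identify it as a mixed position + channel attention, which matches the dual attention of Fu et al. [18]. A minimal Keras sketch under that assumption; the reduction ratio, the purely additive fusion, the omission of learnable scale factors, and the fixed-input-size requirement are our simplifications:

```python
from tensorflow.keras import layers

def mixed_attention(x, reduction=8):
    """Mixed (position + channel) attention in the style of the dual
    attention of Fu et al. [18], which Figs. 2-4 appear to follow.
    Assumes a fixed (static) input size so the reshapes are defined.
    """
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    n = h * w

    # Position attention: affinity between all spatial locations.
    q = layers.Reshape((n, c // reduction))(layers.Conv2D(c // reduction, 1)(x))
    k = layers.Reshape((n, c // reduction))(layers.Conv2D(c // reduction, 1)(x))
    v = layers.Reshape((n, c))(layers.Conv2D(c, 1)(x))
    pos = layers.Softmax()(layers.Dot(axes=2)([q, k]))            # (B, N, N)
    pos_out = layers.Reshape((h, w, c))(layers.Dot(axes=(2, 1))([pos, v]))

    # Channel attention: affinity between channel maps.
    f = layers.Reshape((n, c))(x)
    chan = layers.Softmax()(layers.Dot(axes=1)([f, f]))           # (B, C, C)
    chan_out = layers.Reshape((h, w, c))(layers.Dot(axes=(2, 1))([f, chan]))

    # Fuse both branches with the input (residual-style sum).
    return layers.Add()([x, pos_out, chan_out])
```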

    Table 2  Confusion matrix for classification results

    Predicted \ Actual    Positive      Negative
    Positive              ${\rm{TP}}$   ${\rm{FP}}$
    Negative              ${\rm{FN}}$   ${\rm{TN}}$
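In terms of the confusion-matrix entries above, the metrics reported in Tables 3–6 have the following standard definitions; mIoU averages IoU over the foreground and background classes, and the paper may compute volumetric variants, so treat these as assumed forms:

```latex
\begin{aligned}
\mathrm{IoU} &= \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}+\mathrm{FN}}, &
\mathrm{VOE} &= 1-\mathrm{IoU},\\
\mathrm{Recall} &= \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}, &
\mathrm{Specificity} &= \frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}},\\
\mathrm{DICE} &= \frac{2\,\mathrm{TP}}{2\,\mathrm{TP}+\mathrm{FP}+\mathrm{FN}} &&
\end{aligned}
```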

    Table 3  Statistics of the evaluation metrics

    Metric      mIoU    VOE     Recall   DICE    Specificity
    Mean        0.862   0.021   0.912    0.924   0.987
    Variance    0.009   0.001   0.004    0.002   0.002
    Median      0.901   0.023   0.935    0.953   0.998

    Table 4  Comparison of experimental results

    Method (no. of params)        Iterations   mIoU    VOE     Recall   DICE    Specificity
    U-Net (31377858)              4600         0.653   0.043   0.731    0.706   0.974
    Attention U-Net (31901542)    4600         0.826   0.021   0.861    0.905   0.977
    UNet++ (36165192)             4800         0.792   0.025   0.833    0.883   0.976
    CE-Net (29003094)             4500         0.831   0.022   0.873    0.911   0.981
    AU-Net+ (37646416)            5000         0.862   0.021   0.912    0.924   0.987

    Table 5  Metric comparison for the analysis of the mixed attention mechanism and ROC block

    Model       mIoU    VOE     Recall   DICE    Specificity
    Network_1   0.661   0.042   0.735    0.714   0.976
    Network_2   0.835   0.025   0.841    0.893   0.974
    Network_3   0.781   0.041   0.744    0.723   0.985
    Network_4   0.842   0.023   0.862    0.905   0.986
    AU-Net+     0.862   0.021   0.912    0.924   0.987

    Table 6  Comparison of experimental results

    Model               No. of params   mIoU    VOE     Recall   DICE    Specificity
    Attention U-Net*    38654416        0.804   0.027   0.853    0.896   0.956
    Attention U-Net     31901542        0.826   0.021   0.861    0.905   0.977
    AU-Net+             37646416        0.862   0.021   0.912    0.924   0.987
  • [1] TAN Shanfeng, FANG Fang, CHEN Bing, et al. Analysis of prognostic factors of cerebral infarction after cerebral hernia[J]. Hainan Medical Journal, 2014, 25(3): 400–402. doi: 10.3969/j.issn.1003-6350.2014.03.0152
    [2] SUN Mingjie, HU R, YU Huimin, et al. Intracranial hemorrhage detection by 3D voxel segmentation on brain CT images[C]. 2015 International Conference on Wireless Communications & Signal Processing (WCSP), Nanjing, China, 2015: 1–5. doi: 10.1109/WCSP.2015.7341238.
    [3] WANG Nian, TONG Fei, TU Yongcheng, et al. Extraction of cerebral hemorrhage and calculation of its volume on CT image using automatic segmentation algorithm[J]. Journal of Physics: Conference Series, 2019, 1187(4): 042088. doi: 10.1088/1742-6596/1187/4/042088
    [4] BHADAURIA H S, SINGH A, and DEWAL M L. An integrated method for hemorrhage segmentation from brain CT Imaging[J]. Computers & Electrical Engineering, 2013, 39(5): 1527–1536. doi: 10.1016/j.compeleceng.2013.04.010
    [5] SHAHANGIAN B and POURGHASSEM H. Automatic brain hemorrhage segmentation and classification in CT scan images[C]. 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP), Zanjan, Iran, 2013: 467–471. doi: 10.1109/IranianMVIP.2013.6780031.
    [6] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386
    [7] WANG Shuxin, CAO Shilei, WEI Dong, et al. LT-Net: Label transfer by learning reversible voxel-wise correspondence for one-shot medical image segmentation[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 9159–9168. doi: 10.1109/CVPR42600.2020.00918.
    [8] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. The 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241. doi: 10.1007/978-3-319-24574-4_28.
    [9] PENG Jialin and JIE Ping. Liver segmentation from CT image based on sequential constraint and multi-view information fusion[J]. Journal of Electronics & Information Technology, 2018, 40(4): 971–978. doi: 10.11999/JEIT170933
    [10] MILLETARI F, NAVAB N, and AHMADI S A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation[C]. 2016 Fourth International Conference on 3D Vision (3DV), Stanford, USA, 2016: 565–571. doi: 10.1109/3DV.2016.79.
    [11] GUAN S, KHAN A A, SIKDAR S, et al. Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal[J]. IEEE Journal of Biomedical and Health Informatics, 2020, 24(2): 568–576. doi: 10.1109/JBHI.2019.2912935
    [12] XIAO Xiao, LIAN Shen, LUO Zhiming, et al. Weighted Res-UNet for high-quality retina vessel segmentation[C]. 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 2018: 327–331. doi: 10.1109/ITME.2018.00080.
    [13] OKTAY O, SCHLEMPER J, LE FOLGOC L, et al. Attention U-Net: Learning where to look for the pancreas[C]. The 1st Conference on Medical Imaging with Deep Learning, Amsterdam, Netherlands, 2018: 1–10.
    [14] IBTEHAZ N and RAHMAN M S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation[J]. Neural Networks, 2020, 121: 74–87. doi: 10.1016/j.neunet.2019.08.025
    [15] ALOM M Z, YAKOPCIC C, TAHA T M, et al. Nuclei segmentation with recurrent residual convolutional neural networks based U-Net (R2U-Net)[C]. NAECON 2018-IEEE National Aerospace and Electronics Conference, Dayton, USA, 2018: 228–233. doi: 10.1109/NAECON.2018.8556686.
    [16] ZHOU Zongwei, RAHMAN M M, TAJBAKHSH N, et al. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation[J]. IEEE Transactions on Medical Imaging, 2020, 39(6): 1856–1867. doi: 10.1109/TMI.2019.2959609
    [17] GU Zaiwang, CHENG Jun, FU Huazhu, et al. CE-Net: Context encoder network for 2D medical image segmentation[J]. IEEE Transactions on Medical Imaging, 2019, 38(10): 2281–2292. doi: 10.1109/TMI.2019.2903562
    [18] FU Jun, LIU Jing, TIAN Haijie, et al. Dual attention network for scene segmentation[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 3141–3149. doi: 10.1109/CVPR.2019.00326.
    [19] CHEN Yunpeng, FAN Haoqi, XU Bing, et al. Drop an Octave: Reducing spatial redundancy in convolutional neural networks with Octave convolution[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, 2019: 3434–3443. doi: 10.1109/ICCV.2019.00353.
Publication history
  • Received: 2020-11-25
  • Revised: 2021-05-27
  • Published online: 2021-08-16
  • Issue date: 2022-01-10
