
Polyp Segmentation Using Stair-structured U-Net

時(shí)永剛 李祎 周治國 張?jiān)?/a>,  夏卓巖

時(shí)永剛, 李祎, 周治國, 張?jiān)? 夏卓巖. 基于階梯結(jié)構(gòu)的U-Net結(jié)腸息肉分割算法[J]. 電子與信息學(xué)報(bào), 2022, 44(1): 39-47. doi: 10.11999/JEIT210916
引用本文: 時(shí)永剛, 李祎, 周治國, 張?jiān)? 夏卓巖. 基于階梯結(jié)構(gòu)的U-Net結(jié)腸息肉分割算法[J]. 電子與信息學(xué)報(bào), 2022, 44(1): 39-47. doi: 10.11999/JEIT210916
SHI Yonggang, LI Yi, ZHOU Zhiguo, ZHANG Yue, XIA Zhuoyan. Polyp Segmentation Using Stair-structured U-Net[J]. Journal of Electronics & Information Technology, 2022, 44(1): 39-47. doi: 10.11999/JEIT210916
Citation: SHI Yonggang, LI Yi, ZHOU Zhiguo, ZHANG Yue, XIA Zhuoyan. Polyp Segmentation Using Stair-structured U-Net[J]. Journal of Electronics & Information Technology, 2022, 44(1): 39-47. doi: 10.11999/JEIT210916

doi: 10.11999/JEIT210916
基金項(xiàng)目: 國家自然科學(xué)基金(60971133, 61271112)
詳細(xì)信息
    作者簡(jiǎn)介:

    時(shí)永剛:男,1969年生,副教授,研究方向?yàn)獒t(yī)學(xué)圖像分割、目標(biāo)檢測(cè)識(shí)別、目標(biāo)分類、圖像復(fù)原和超分辨率重建

    李祎:女,1996年生,碩士生,研究方向?yàn)獒t(yī)學(xué)圖像分割

    周治國:男,1977年生,副教授,研究方向?yàn)橹悄芨兄c導(dǎo)航

    張?jiān)溃耗校?996年生,碩士生,研究方向?yàn)獒t(yī)學(xué)圖像處理、深度學(xué)習(xí)

    夏卓巖:男,1997年生,碩士生,研究方向?yàn)閳D像分割、目標(biāo)檢測(cè)與分類、目標(biāo)識(shí)別

    通訊作者:

    時(shí)永剛 ygshi@bit.edu.cn

  • CLC number: TN911.73; R735.34

  • Abstract: Accurate segmentation of colon polyps is of great significance for the diagnosis and treatment of colorectal cancer, yet existing segmentation methods commonly suffer from artifacts and low segmentation accuracy. This paper proposes a Stair-structured U-Net for polyp segmentation (SU-Net). Building on the U-shaped structure of U-Net, it expands standard dilated convolution kernels via the Kronecker product to form Kronecker dilated convolution downsampling, which effectively enlarges the receptive field and recovers the detail features that conventional dilated convolution tends to lose. A stair-structured fusion module, built by following expansion and stacking principles into a stair-like hierarchy, effectively captures contextual information and aggregates features across multiple scales. In the decoder, a convolutional reconstruction upsampling module generates dense pixel-level prediction maps, capturing the fine information that bilinear interpolation upsampling misses. The model is tested on the Kvasir-SEG and CVC-EndoSceneStill datasets, where the Dice coefficient and Intersection over Union (IoU) reach 87.51% and 88.75% on Kvasir-SEG, and 82.30% and 85.64% on CVC-EndoSceneStill. Experimental results show that the proposed method alleviates the low segmentation accuracy caused by overexposure and low contrast, eliminates image artifacts outside the polyp boundary and incoherent regions inside it, and outperforms other polyp segmentation methods.
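Since only the abstract is reproduced on this page, the following is a minimal PyTorch sketch of the Kronecker dilated convolution idea described above, assuming the Kronecker-product kernel expansion of TKCN: a learned k x k kernel is expanded by a binary r1 x r1 pattern whose top-left r2 x r2 block is ones, and r2 = 1 recovers a standard dilated convolution with rate r1. The class and parameter names (KroneckerConv2d, r1, r2) are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KroneckerConv2d(nn.Module):
    """Conv2d whose kernel is expanded by a Kronecker product with a
    binary pattern; r2 = 1 reduces to dilated convolution with rate r1."""
    def __init__(self, in_ch, out_ch, k=3, r1=3, r2=2, stride=1):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, k, k))
        nn.init.kaiming_normal_(self.weight)
        pattern = torch.zeros(r1, r1)
        pattern[:r2, :r2] = 1.0  # shared sub-region kept by the expansion
        self.register_buffer("pattern", pattern.view(1, 1, r1, r1))
        self.stride = stride

    def forward(self, x):
        # (out, in, k, k) -> (out, in, k*r1, k*r1): a larger receptive
        # field that reuses the same k*k weights in each r2 x r2 block
        # instead of zero-filling everything between taps as plain
        # dilation does.
        w = torch.kron(self.weight, self.pattern)
        return F.conv2d(x, w, stride=self.stride, padding=(w.shape[-1] - 1) // 2)

# A stride-2 instance stands in for Kronecker dilated convolution
# downsampling: it halves spatial resolution in place of pooling.
down = KroneckerConv2d(64, 128, stride=2)
print(down(torch.randn(1, 64, 128, 128)).shape)  # torch.Size([1, 128, 64, 64])
```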
  • Fig. 1  Overall framework of SU-Net

    Fig. 2  Different types of convolution kernels and the KACD module

    Fig. 3  Stair-structured fusion module

    Fig. 4  Convolutional reconstruction upsampling module (see the sketch after this list)

    Fig. 5  Segmentation results of SU-Net and other segmentation models on the EndoSceneStill dataset

    Fig. 6  Segmentation results of SU-Net and other segmentation models on the Kvasir-SEG dataset
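The convolutional reconstruction upsampling module of Fig. 4 is described here only at the level of the abstract. A common way to build such a learned upsampler is sub-pixel convolution (PixelShuffle), where a convolution predicts r*r sub-pixel values per location and the channels are rearranged into an r-times larger map; the sketch below assumes that reading and is not the authors' exact design.

```python
import torch
import torch.nn as nn

class ConvReconstructionUp(nn.Module):
    """Upsample by learned reconstruction rather than bilinear
    interpolation: a 3x3 convolution predicts r*r sub-pixel values per
    output channel, then PixelShuffle rearranges them spatially."""
    def __init__(self, in_ch, out_ch, r=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * r * r, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(r)

    def forward(self, x):
        return self.shuffle(self.conv(x))

up = ConvReconstructionUp(128, 64, r=2)
print(up(torch.randn(1, 128, 32, 32)).shape)  # torch.Size([1, 64, 64, 64])
```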

    Table 1  List of SU-Net ablation experiments

    No.  Experiment description
    1    Baseline
    2    Replace only the dilated convolutions in the baseline with Kronecker dilated convolutions
    3    Replace the downsampling in Experiment 2 with Kronecker dilated convolution downsampling
    4    Add the stair-structured fusion module between the encoder and decoder of Experiment 3
    5    SU-Net (full model)

    Table 2  Quantitative results of the ablation experiments on the EndoSceneStill dataset

    Metric       Exp. 1  Exp. 2  Exp. 3  Exp. 4  Exp. 5
    Recall       0.7819  0.8195  0.8028  0.8027  0.8237
    Specificity  0.9931  0.9908  0.9946  0.9947  0.9929
    Precision    0.9185  0.8747  0.9179  0.9119  0.9007
    F1           0.7899  0.7994  0.8088  0.8174  0.8230
    F2           0.7791  0.8025  0.7980  0.8046  0.8175
    IoU          0.7194  0.7214  0.7360  0.7450  0.7499
    IoUB         0.9601  0.9599  0.9269  0.9627  0.9630
    IoUM         0.8397  0.8407  0.8494  0.8538  0.8564
    Dice         0.7899  0.7994  0.8088  0.8174  0.8230

    Table 3  Quantitative results of the ablation experiments on the Kvasir-SEG dataset

    Metric       Exp. 1  Exp. 2  Exp. 3  Exp. 4  Exp. 5
    Recall       0.8664  0.8631  0.8636  0.8750  0.8752
    Specificity  0.9840  0.9854  0.9844  0.9858  0.9866
    Precision    0.8921  0.9006  0.9163  0.9021  0.9207
    F1           0.8560  0.8607  0.8654  0.8689  0.8751
    F2           0.8574  0.8673  0.8602  0.8681  0.8718
    IoU          0.7866  0.7920  0.7957  0.8032  0.8173
    IoUB         0.9534  0.9539  0.9520  0.9532  0.9577
    IoUM         0.8700  0.8730  0.8738  0.8782  0.8875
    Dice         0.8560  0.8607  0.8654  0.8689  0.8751

    Table 4  Quantitative evaluation of different models on the EndoSceneStill dataset

    Model            Recall  Specificity  Precision  F1      F2      IoU     IoUB    IoUM    Dice
    U-Net            0.6839  0.9954       0.9222     0.7113  0.6910  0.6314  0.9515  0.7914  0.7113
    Attention U-Net  0.6744  0.9962       0.9373     0.7084  0.6833  0.6260  0.9504  0.7882  0.7084
    TKCN             0.8110  0.9866       0.8565     0.7819  0.7875  0.7023  0.9536  0.8280  0.7819
    Xception         0.8017  0.9920       0.8964     0.7940  0.7906  0.7220  0.9575  0.8398  0.7940
    DeepLabV3+       0.7611  0.9919       0.8543     0.7542  0.7505  0.6833  0.9545  0.8189  0.7542
    PraNet           0.7973  0.9937       0.9215     0.8016  0.7945  0.7349  0.9610  0.8480  0.8016
    SU-Net           0.8237  0.9929       0.9007     0.8230  0.8175  0.7499  0.9630  0.8564  0.8230

    Table 5  Quantitative evaluation of different models on the Kvasir-SEG dataset

    Model            Recall  Specificity  Precision  F1      F2      IoU     IoUB    IoUM    Dice
    U-Net            0.8408  0.9707       0.8315     0.8017  0.8161  0.7099  0.9331  0.8215  0.8017
    Attention U-Net  0.8576  0.9682       0.8317     0.8105  0.8283  0.7249  0.9340  0.8294  0.8105
    TKCN             0.8651  0.9826       0.8989     0.8552  0.8567  0.7811  0.9473  0.8642  0.8552
    Xception         0.8702  0.9831       0.9041     0.8662  0.8639  0.7982  0.9504  0.8743  0.8662
    DeepLabV3+       0.8879  0.9812       0.8938     0.8725  0.8770  0.8110  0.9550  0.8830  0.8725
    PraNet           0.8763  0.9859       0.9154     0.8743  0.8718  0.8110  0.9557  0.8833  0.8743
    SU-Net           0.8752  0.9866       0.9207     0.8751  0.8718  0.8173  0.9577  0.8875  0.8751
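For reference, the metrics in Tables 2 to 5 follow their standard binary-segmentation definitions, as in the generic sketch below (for binary masks Dice coincides with F1, and F2 weights recall more heavily). Reading IoUB as background IoU and IoUM as the mean of foreground and background IoU is an assumption; this is not the paper's evaluation script.

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-8):
    """Standard binary-segmentation metrics from boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()

    recall = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    precision = tp / (tp + fp + eps)
    iou = tp / (tp + fp + fn + eps)           # foreground IoU
    iou_b = tn / (tn + fp + fn + eps)         # background IoU (assumed IoUB)
    dice = 2 * tp / (2 * tp + fp + fn + eps)  # equals F1 for binary masks

    def f_beta(beta):                         # F1 (beta=1), F2 (beta=2)
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall + eps)

    return {"Recall": recall, "Specificity": specificity, "Precision": precision,
            "F1": f_beta(1), "F2": f_beta(2), "IoU": iou,
            "IoUB": iou_b, "IoUM": (iou + iou_b) / 2, "Dice": dice}

pred = np.random.rand(256, 256) > 0.5
gt = np.random.rand(256, 256) > 0.5
print(segmentation_metrics(pred, gt))
```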
Publication history
  • Received:  2021-09-01
  • Revised:  2021-12-21
  • Accepted:  2021-12-21
  • Available online:  2021-12-27
  • Issue published:  2022-01-10
