
Global Perception and Sparse Feature Associate Image-level Weakly Supervised Pathological Image Segmentation

ZHANG Yinhui, ZHANG Jinkai, HE Zifen, LIU Jiacen, WU Lin, LI Zhenhui, CHEN Guangchen

Citation: ZHANG Yinhui, ZHANG Jinkai, HE Zifen, LIU Jiacen, WU Lin, LI Zhenhui, CHEN Guangchen. Global Perception and Sparse Feature Associate Image-level Weakly Supervised Pathological Image Segmentation[J]. Journal of Electronics & Information Technology, 2024, 46(9): 3672-3682. doi: 10.11999/JEIT240364


doi: 10.11999/JEIT240364
基金項(xiàng)目: 國家自然科學(xué)基金(62061022, 62171206)
詳細(xì)信息
    作者簡介:

    張印輝:男,博士,教授,研究方向?yàn)閳D像處理、機(jī)器視覺及機(jī)器智能

    張金凱:男,碩士生,研究方向?yàn)獒t(yī)學(xué)圖像處理

    何自芬:女,博士,教授,研究方向?yàn)閳D像處理和機(jī)器視覺

    劉珈岑:男,碩士生,研究方向?yàn)獒t(yī)學(xué)圖像處理

    吳琳:女,碩士,副主任醫(yī)師,研究方向?yàn)槲改c病理、腫瘤病理

    李振輝:男,博士,主治醫(yī)師,研究方向?yàn)槲改c道腫瘤影像組學(xué)

    陳光晨:男,博士生,研究方向?yàn)橛?jì)算機(jī)視覺

    通訊作者:

    何自芬 zyhhzf1998@163.com

  • CLC number: TN911.73; TP391.41

  • Abstract: Weakly supervised semantic segmentation methods save substantial manual annotation cost and are widely used in the analysis of pathological Whole Slide Images (WSI). Weakly supervised Multiple Instance Learning (MIL) methods for pathological image analysis suffer from three problems: pixel instances are treated as mutually independent and lack dependency relations, segmentation results are locally inconsistent, and image-level labels provide insufficient supervision. To address these problems, this paper proposes an end-to-end multiple instance learning method with global perception and sparse feature association under image-level weak supervision (DASMob-MIL). First, to overcome the independence of pixel instances, a local perception network extracts features to establish local pixel dependencies, and cascaded cross-attention modules form a Global Information Perception Branch (GIPB) to establish global pixel dependencies. Second, a Pixel-Adaptive Refinement (PAR) module is introduced, which constructs an affinity kernel from the similarity of local sparse features in multi-scale neighborhoods, resolving the local inconsistency of weakly supervised segmentation results. Finally, a Deep Associate Supervision (DAS) module is designed: the segmentation maps generated from multi-stage feature maps are fused with weights, and the weight factors associate the loss functions to optimize training, reducing the impact of the insufficient supervision provided by image-level labels. Compared with other models, DASMob-MIL exhibits state-of-the-art segmentation performance on the self-built colorectal cancer dataset YN-CRC and on the public weakly supervised histopathology image dataset LUAD-HistoSeg-BC, with a model weight file of only 14 MB. On YN-CRC, its F1 score reaches 89.5%, 3% higher than that of the advanced Multi-Layer Pseudo-Supervision (MLPS) model. Experimental results show that DASMob-MIL achieves pixel-level segmentation using only image-level labels, effectively improving the segmentation performance on weakly supervised histopathology images.
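The MIL formulation the abstract relies on can be shown in a short sketch (not the authors' released code): each pixel is an instance, the whole image is a bag, and pixel probabilities are pooled into one bag-level prediction that the image-level label supervises. The generalized-mean pooling and the sharpness parameter r below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MILSegHead(nn.Module):
    """Scores every pixel with a 1x1 conv, then pools pixel probabilities
    into a bag (image-level) probability via a generalized mean."""
    def __init__(self, in_channels: int, r: float = 4.0):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.r = r  # pooling sharpness; large r approaches max-pooling

    def forward(self, feats: torch.Tensor):
        pixel_prob = torch.sigmoid(self.score(feats))                   # (B,1,H,W)
        bag_prob = pixel_prob.pow(self.r).mean(dim=(2, 3)).pow(1 / self.r)
        return pixel_prob, bag_prob  # dense map for inference, bag score for the loss

# Training needs only image-level labels y in {0,1}:
#   pixel_prob, bag_prob = head(backbone(image))
#   loss = nn.functional.binary_cross_entropy(bag_prob.squeeze(1), y)
```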
  • Figure 1  Schematic of MIL-based weakly supervised semantic segmentation of pathological images

    Figure 2  Overall framework of the proposed DASMob-MIL model

    Figure 3  Cross-attention structure and the process of building global dependencies (a minimal cross-attention sketch follows this list)

    Figure 4  Segmentation results of different models on the YN-CRC dataset

    Figure 5  Segmentation results of different models on the LUAD-HistoSeg-BC dataset
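As a companion to Figure 3, here is a minimal, hypothetical PyTorch sketch of single-head cross-attention, assuming flattened local-branch features serve as queries and global-branch features as keys/values; the module name and shapes are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Single-head cross-attention: local features (queries) attend over
    global features (keys/values), so every pixel embedding is updated
    with information from all positions, building global dependencies."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, local_feats: torch.Tensor, global_feats: torch.Tensor):
        # local_feats: (B, N, C) flattened pixel tokens; global_feats: (B, M, C)
        q, k, v = self.q(local_feats), self.k(global_feats), self.v(global_feats)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, N, M)
        return local_feats + attn @ v  # residual connection injects global context
```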

    Table 1  Segmentation performance comparison of different models on the YN-CRC dataset

    Supervision | Model | F1 EC (%) | F1 NEC (%) | F1 Score (%) | HD EC | Precision (%) | Recall (%) | Weights (MB) | Inference time (s)
    Fully supervised | U-Net | 91.4 | 99.6 | 93.0 | 5.973 | 95.1 | 91.4 | 33.0 | 0.0112
     | MobileUNetv3 | 91.6 | 99.6 | 93.1 | 5.378 | 95.2 | 91.6 | 26.6 | 0.0056
    Weakly supervised | SA-MIL | 35.4 | 87.5 | 45.3 | 42.103 | 61.8 | 43.0 | 7.07 | 0.1218
     | DWS-MIL | 76.7 | 98.7 | 80.9 | 27.690 | 89.5 | 82.4 | 6.65 | 0.0144
     | Swin-MIL | 82.9 | 99.6 | 86.1 | 18.915 | 90.3 | 86.3 | 105 | 0.0279
     | MLPS | 83.4 | 99.8 | 86.5 | 41.701 | 83.8 | 91.7 | 453 | 0.0220
     | Ours (DASMob-MIL) | 87.3 | 99.0 | 89.5 | 23.576 | 86.5 | 94.6 | 14.0 | 0.0712

    Table 2  Segmentation performance comparison of different models on the LUAD-HistoSeg-BC dataset

    Supervision | Model | F1 TM (%) | F1 NTM (%) | F1 Score (%) | HD TM | Precision (%) | Recall (%) | Weights (MB) | Inference time (s)
    Weakly supervised | MLPS | 56.9 | 99.9 | 61.8 | 38.029 | 76.4 | 56.7 | 453 | 0.0133
     | SA-MIL | 65.9 | 100 | 69.8 | 19.012 | 78.6 | 70.8 | 7.07 | 0.0268
     | DWS-MIL | 68.5 | 94.9 | 71.5 | 19.578 | 76.9 | 75.9 | 6.65 | 0.0079
     | Swin-MIL | 71.6 | 99.4 | 74.7 | 19.148 | 74.5 | 82.5 | 105 | 0.0209
     | Ours (DASMob-MIL) | 73.4 | 98.5 | 76.3 | 23.515 | 73.6 | 84.6 | 14.0 | 0.0378

    Table 3  Effect of different local feature extraction backbones on segmentation accuracy

    Backbone | F1 EC (%) | F1 NEC (%) | F1 Score (%) | HD EC | Precision (%) | Recall (%) | Weights (MB) | Inference time (s)
    VGG-16 | 59.9 | 100 | 67.5 | 159.929 | 57.2 | 98.4 | 100 | 0.0624
    ResNet50 | 70.7 | 99.8 | 76.2 | 42.565 | 74.6 | 85.8 | 281 | 0.0349
    EfficientNetv2 | 73.2 | 99.6 | 78.2 | 78.894 | 72.0 | 91.3 | 212 | 0.0463
    ShuffleNetv2 | 75.5 | 99.4 | 80.0 | 73.642 | 75.5 | 90.0 | 69.0 | 0.0185
    U-Net | 78.2 | 98.4 | 82.1 | 64.231 | 74.0 | 95.5 | 65.9 | 0.0364
    MobileNetv3 | 80.1 | 99.4 | 83.7 | 26.621 | 86.2 | 86.3 | 13.3 | 0.0143
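Table 3's comparison amounts to swapping the local feature extractor. A minimal sketch, assuming torchvision's MobileNetV3-Small (the strongest backbone above); the tap indices for collecting multi-stage feature maps are hypothetical, not the paper's exact cut points.

```python
import torch
from torchvision.models import mobilenet_v3_small

backbone = mobilenet_v3_small(weights=None).features  # stack of feature blocks
TAPS = [1, 3, 8, 11]  # hypothetical block indices feeding the side branches

def multi_stage_features(x: torch.Tensor) -> list:
    """Run the backbone once, collecting intermediate feature maps."""
    feats = []
    for i, block in enumerate(backbone):
        x = block(x)
        if i in TAPS:
            feats.append(x)
    return feats

print([f.shape for f in multi_stage_features(torch.randn(1, 3, 224, 224))])
```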

    Table 4  Effect of the proposed modules on segmentation accuracy

    Model | GIPB | PAR | DAS | F1 EC (%) | F1 NEC (%) | F1 Score (%) | HD EC | Precision (%) | Recall (%) | Weights (MB) | Inference time (s)
    Baseline | | | | 80.1 | 99.4 | 83.7 | 26.621 | 86.2 | 86.3 | 13.3 | 0.0143
    Ablation 1 | | | | 82.4 | 99.6 | 85.7 | 15.667 | 86.3 | 87.3 | 13.8 | 0.0150
    Ablation 2 | | | | 83.4 | 99.5 | 86.4 | 22.674 | 88.4 | 87.6 | 13.5 | 0.0285
    Ablation 3 | | | | 84.5 | 99.7 | 87.4 | 28.712 | 86.8 | 90.5 | 13.3 | 0.0427
    Ablation 4 | | | | 83.8 | 98.3 | 86.5 | 18.664 | 82.0 | 93.8 | 14.0 | 0.0316
    Ablation 5 | | | | 85.4 | 99.5 | 88.1 | 27.261 | 89.5 | 89.2 | 13.5 | 0.0625
    Ablation 6 | | | | 86.0 | 99.3 | 88.6 | 25.358 | 84.6 | 95.1 | 13.9 | 0.0448
    DASMob-MIL | | | | 87.3 | 99.0 | 89.5 | 23.576 | 86.5 | 94.6 | 14.0 | 0.0712

    Table 5  Effect of the number of iterations in the PAR module on segmentation accuracy

    $T$ | F1 EC (%) | F1 NEC (%) | F1 Score (%) | HD EC | Precision (%) | Recall (%) | Inference time (s)
    Baseline | 80.1 | 99.4 | 83.7 | 26.621 | 86.2 | 86.3 | 0.0143
    5 | 80.6 | 99.8 | 84.3 | 38.394 | 84.5 | 88.8 | 0.0341
    10 | 84.5 | 99.7 | 87.4 | 28.712 | 86.8 | 90.5 | 0.0427
    15 | 83.7 | 99.7 | 86.7 | 33.183 | 86.5 | 90.7 | 0.0529
    20 | 79.9 | 99.4 | 83.6 | 41.216 | 83.2 | 88.5 | 0.0640
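To make Table 5's iteration count $T$ concrete: pixel-adaptive refinement repeatedly replaces each pixel's probability with an affinity-weighted average of its neighbours. The sketch below is a simplified, single-scale version in which affinity comes from colour similarity in the raw image; the paper's PAR builds its kernel from multi-scale local sparse features, so treat sigma and the 3x3 neighbourhood as assumptions.

```python
import torch
import torch.nn.functional as F

def par_refine(prob: torch.Tensor, image: torch.Tensor,
               T: int = 10, sigma: float = 0.1) -> torch.Tensor:
    """prob: (B,1,H,W) segmentation probabilities; image: (B,3,H,W) guidance."""
    B, _, H, W = prob.shape
    # 3x3 neighbourhoods of the guidance image: (B, 3, 9, H, W)
    nb = F.unfold(image, kernel_size=3, padding=1).view(B, 3, 9, H, W)
    diff = nb - image.unsqueeze(2)                    # colour differences
    aff = torch.softmax(-(diff ** 2).sum(1) / (2 * sigma ** 2), dim=1)  # (B,9,H,W)
    for _ in range(T):  # T is the iteration count studied in Table 5
        nb_p = F.unfold(prob, kernel_size=3, padding=1).view(B, 1, 9, H, W)
        prob = (nb_p * aff.unsqueeze(1)).sum(2)       # affinity-weighted mean
    return prob
```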

    Table 6  Effect of different GIPB configurations on segmentation accuracy

    Number of encoders | F1 EC (%) | F1 NEC (%) | F1 Score (%) | HD EC | Precision (%) | Recall (%) | Weights (MB) | Inference time (s)
    Baseline | 80.1 | 99.4 | 83.7 | 26.621 | 86.2 | 86.3 | 13.3 | 0.0143
    1 | 80.1 | 99.8 | 83.9 | 16.783 | 84.9 | 87.0 | 13.4 | 0.0278
    2 | 78.3 | 99.4 | 82.3 | 35.055 | 82.2 | 87.0 | 13.4 | 0.0265
    3 | 83.4 | 99.5 | 86.4 | 22.674 | 88.4 | 87.6 | 13.5 | 0.0285
    4 | 79.9 | 98.9 | 83.4 | 30.955 | 84.0 | 87.8 | 14.1 | 0.0328
    5 | 75.6 | 99.8 | 80.2 | 34.782 | 84.2 | 82.5 | 16.0 | 0.0346

    Table 7  Effect of different side-branch weight coefficients in the DAS structure on segmentation accuracy

    Group | Weight coefficients | F1 EC (%) | F1 NEC (%) | F1 Score (%) | HD EC | Precision (%) | Recall (%) | Inference time (s)
    Baseline | | 80.1 | 99.4 | 83.7 | 26.621 | 86.2 | 86.3 | 0.0143
    1 | [0.15, 0.15, 0.2, 0.5] | 82.4 | 99.6 | 85.7 | 15.667 | 86.3 | 87.3 | 0.0150
    2 | [0.1, 0.1, 0.3, 0.5] | 81.2 | 99.7 | 84.7 | 19.489 | 88.0 | 86.5 | 0.0151
    3 | [0.15, 0.15, 0.3, 0.4] | 74.3 | 99.7 | 79.1 | 21.112 | 75.2 | 88.8 | 0.0149
    4 | [0.2, 0.2, 0.3, 0.3] | 80.6 | 97.9 | 83.9 | 21.314 | 80.3 | 90.5 | 0.0152
    5 | [0.2, 0.2, 0.25, 0.35] | 81.2 | 99.6 | 84.7 | 20.356 | 83.0 | 90.6 | 0.0150
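Table 7's weight coefficients can be read as a deep-supervision recipe: the factors that fuse the four side-branch segmentation maps also scale each branch's loss. A minimal sketch, assuming four image-level probabilities pooled from the side branches and the group-1 weights above; the exact association used in the paper may differ.

```python
import torch
import torch.nn.functional as F

WEIGHTS = [0.15, 0.15, 0.2, 0.5]  # group 1 in Table 7, shallow to deep

def das_loss(side_probs: list, y: torch.Tensor) -> torch.Tensor:
    """side_probs: four (B,) image-level probabilities pooled from the
    side-branch segmentation maps; y: (B,) image-level labels in {0,1}."""
    fused = sum(w * p for w, p in zip(WEIGHTS, side_probs))  # weighted fusion
    loss = F.binary_cross_entropy(fused, y)                  # fused-branch loss
    for w, p in zip(WEIGHTS, side_probs):                    # weight factors also
        loss = loss + w * F.binary_cross_entropy(p, y)       # scale branch losses
    return loss
```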
Publication history
  • Received: 2024-05-09
  • Revised: 2024-07-17
  • Available online: 2024-08-02
  • Issue date: 2024-09-26
