
A Deep Convolutional Network for Saliency Object Detection with Balanced Accuracy and High Efficiency

Wenming ZHANG, Zhenfei YAO, Yakun GAO, Haibin LI

Citation: Wenming ZHANG, Zhenfei YAO, Yakun GAO, Haibin LI. A Deep Convolutional Network for Saliency Object Detection with Balanced Accuracy and High Efficiency[J]. Journal of Electronics & Information Technology, 2020, 42(5): 1201-1208. doi: 10.11999/JEIT190229


doi: 10.11999/JEIT190229
Funds: Natural Science Foundation of Hebei Province (F2015203212, F2019203195)
Detailed information
    Author biographies:

    Wenming ZHANG: Male, born in 1979, associate professor. Research interests: industrial process control and machine vision

    Zhenfei YAO: Male, born in 1992, M.S. candidate. Research interests: machine vision and image processing

    Yakun GAO: Male, born in 1988, Ph.D. candidate. Research interests: machine vision and image processing

    Haibin LI: Male, born in 1978, professor. Research interests: industrial process control, machine vision, and artificial intelligence

    Corresponding author:

    Yakun GAO, gaoyakun6@163.com

  • CLC number: TN911.73; TP391.41

  • Abstract:

    Current salient object detection algorithms fail to strike a good balance between accuracy and efficiency. To address this problem, this paper proposes a new deep convolutional network model for salient object detection that balances accuracy and efficiency. First, traditional convolutions are replaced with decomposed convolutions, which greatly reduces the amount of computation and improves detection efficiency. Second, to make better use of features at different scales, a sparse cross-layer connection structure and a multi-scale fusion structure are adopted to improve detection accuracy. Extensive evaluations show that, compared with existing methods, the proposed algorithm achieves leading performance in both efficiency and accuracy.
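
    As a rough illustration of the "decomposed convolution" idea mentioned in the abstract (cf. the DecomposeMe reference below), the following PyTorch-style sketch replaces a 3×3 convolution with a 3×1 followed by a 1×3 convolution. The module name, activation choice, and channel sizes are assumptions for illustration, not the authors' exact configuration:

```python
import torch
import torch.nn as nn


class DecomposedConv(nn.Module):
    """Replace one k x k convolution with a k x 1 followed by a 1 x k
    convolution: same receptive field, fewer weights (roughly 2k*C vs
    k*k*C per output channel). Names and sizes here are illustrative."""

    def __init__(self, in_ch, out_ch, k=3, dilation=1):
        super().__init__()
        pad = dilation * (k // 2)
        self.vertical = nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1),
                                  padding=(pad, 0), dilation=(dilation, 1))
        self.horizontal = nn.Conv2d(out_ch, out_ch, kernel_size=(1, k),
                                    padding=(0, pad), dilation=(1, dilation))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.horizontal(self.act(self.vertical(x))))


# Quick shape check on a 448 x 448 input (the resolution used in Table 3).
x = torch.randn(1, 16, 448, 448)
print(DecomposedConv(16, 32)(x).shape)  # torch.Size([1, 32, 448, 448])
```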

  • Figure 1  Overall framework

    Figure 2  Illustration of convolution decomposition

    Figure 3  Comparison of direct-connection and sparse cross-layer connection network structures

    Figure 4  Comparison of results with different connection structures

    Figure 5  Illustration of multi-scale fusion

    Figure 6  Visual comparison of different models

    Figure 7  P-R curves of different algorithms on five datasets

    Table 1  Comparison of different convolution structures

    Structure               | Parameters (10^6) | Accuracy (%) | Time (s)
    2-D convolution         | 5.16              | 89.3         | 0.026
    Decomposed convolution  | 3.75              | 89.7         | 0.017
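
    To make the parameter saving in Table 1 concrete, the sketch below counts the weights of a single k×k convolution against its k×1 + 1×k decomposition. This is an illustrative calculation with a hypothetical 128-channel layer; it does not reproduce the paper's exact 5.16M vs 3.75M totals:

```python
def conv_params(c_in, c_out, kh, kw, bias=True):
    """Weight count of a single 2-D convolution layer."""
    return c_out * (c_in * kh * kw + (1 if bias else 0))


# One 3x3 layer with 128 input/output channels vs its 3x1 + 1x3 decomposition.
standard = conv_params(128, 128, 3, 3)                                  # 147,584
decomposed = conv_params(128, 128, 3, 1) + conv_params(128, 128, 1, 3)  # 98,560
print(standard, decomposed, round(decomposed / standard, 2))            # ratio ~0.67
```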

    Table 2  Comparison of different connection structures

    Structure                        | Accuracy (%) | Time (s)
    Without cross-layer connections  | 89.7         | 0.017
    With cross-layer connections     | 91.7         | 0.023

    Table 3  Detailed overall network structure

    Block        | Type                 | Output size  | Output No.
    convblock1   | reconv×2             | 448×448×16   | 1
    cross-layer  | conv3, rate=16       | 448×448×32   | 1'
    cross-layer  | conv3, rate=24       | 448×448×256  | 1''
    convblock2   | maxpool (downsample) |              |
                 | reconv×2             | 224×224×32   | 2
    concat1      | fusion               | 224×224×64   | (1'+2)
    conv1        | dim. reduction       | 224×224×32   | 3
    cross-layer  | conv3, rate=8        | 224×224×64   | 3'
    cross-layer  | conv3, rate=18       | 224×224×256  | 3''
    convblock3   | maxpool (downsample) |              |
                 | reconv×3             | 112×112×64   | 4
    concat2      | fusion               | 112×112×128  | (3'+4)
    conv1        | dim. reduction       | 112×112×64   | 5
    cross-layer  | conv3, rate=4        | 224×224×128  | 5'
    cross-layer  | conv3, rate=12       | 224×224×256  | 5''
    convblock4   | maxpool (downsample) |              |
                 | reconv×3             | 56×56×128    | 6
    concat3      | fusion               | 56×56×256    | (5'+6)
    conv1        | dim. reduction       | 56×56×128    | 7
    cross-layer  | conv3, rate=6        | 56×56×256    | 7''
    convblock5   | maxpool (downsample) |              |
                 | reconv×3             | 28×28×256    | 8
    concat4      | fusion               | 28×28×1280   | (1''+3''+5''+7''+8)
    conv1        | dim. reduction       | 28×28×256    | 9
    upblock1     | deconv (upsample)    |              |
                 | reconv×3             | 112×112×64   |
    upblock2     | deconv (upsample)    | 448×448×2    | final
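
    The cross-layer / concat / conv1 rows of Table 3 can be read as: a dilated 3×3 branch taken from an earlier stage is resized to a deeper stage's resolution, concatenated with it, and reduced back with a 1×1 convolution. Below is a minimal PyTorch-style sketch of that pattern, assuming bilinear resizing for the skip branch; the class name, activation, and resizing method are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLayerBranch(nn.Module):
    """Sketch of one cross-layer row in Table 3: a dilated 3x3 convolution
    applied to an earlier stage, resized to a deeper stage's resolution so
    the two maps can be concatenated and reduced with a 1x1 convolution."""

    def __init__(self, in_ch, out_ch, rate):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=rate, dilation=rate)

    def forward(self, x, target_hw):
        y = F.relu(self.conv(x))
        return F.interpolate(y, size=target_hw, mode='bilinear',
                             align_corners=False)


# Example: fuse stage-1 features (448x448x16) into stage-2 features (224x224x32),
# loosely mirroring the 1' -> concat1 -> conv1 rows of Table 3.
stage1 = torch.randn(1, 16, 448, 448)
stage2 = torch.randn(1, 32, 224, 224)
skip = CrossLayerBranch(16, 32, rate=16)(stage1, stage2.shape[-2:])
fused = torch.cat([skip, stage2], dim=1)            # "concat" row: 224x224x64
reduced = nn.Conv2d(64, 32, kernel_size=1)(fused)   # "conv1" row: dimension reduction
print(reduced.shape)                                # torch.Size([1, 32, 224, 224])
```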

    Table 4  F-measure (F-m) and MAE scores

    Algorithm  | MSRA          | ECSSD         | PASCAL-S      | SOD           | HKU-IS
               | F-m    MAE    | F-m    MAE    | F-m    MAE    | F-m    MAE    | F-m    MAE
    Ours       | 0.914  0.045  | 0.893  0.060  | 0.814  0.113  | 0.832  0.119  | 0.893  0.036
    DCL        | 0.905  0.052  | 0.890  0.088  | 0.805  0.125  | 0.820  0.139  | 0.885  0.072
    ELD        | 0.904  0.062  | 0.867  0.080  | 0.771  0.121  | 0.760  0.154  | 0.839  0.074
    NLDF       | 0.911  0.048  | 0.905  0.063  | 0.831  0.099  | 0.810  0.143  | 0.902  0.048
    MST        | 0.839  0.128  | 0.653  0.171  | 0.584  0.236  | –      –      | –      –
    DSR        | 0.812  0.119  | 0.737  0.173  | 0.646  0.204  | 0.655  0.234  | 0.735  0.140
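
    For reference, the two scores in Table 4 follow the standard definitions used in the salient object detection literature ($\beta^{2}$ is commonly set to 0.3; the paper's exact setting is not given on this page):

    $$ F_\beta = \frac{(1+\beta^{2})\,\mathrm{Precision}\times\mathrm{Recall}}{\beta^{2}\,\mathrm{Precision}+\mathrm{Recall}},\qquad \mathrm{MAE} = \frac{1}{W H}\sum_{x=1}^{W}\sum_{y=1}^{H}\bigl|S(x,y)-G(x,y)\bigr| $$

    where $S$ is the predicted saliency map normalized to $[0,1]$, $G$ is the binary ground-truth mask, and $W\times H$ is the image size. Higher F-m and lower MAE are better.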

    Table 5  Processing time comparison of different algorithms (s)

    Model        | Ours     | DCL      | ELD      | NLDF     | MST     | DSR
    Time (s)     | 0.023    | 1.200    | 0.300    | 0.080    | 0.025   | 13.580
    Environment  | GTX 1080 | GTX 1080 | GTX 1080 | Titan X  | i7 CPU  | i7 CPU
    Input size   | 448×448  | 300×400  | 400×300  | 300×400  | 300×400 | 400×300
  • WANG Lijun, LU Huchuan, RUAN Xiang, et al. Deep networks for saliency detection via local estimation and global search[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 3183–3192. doi: 10.1109/CVPR.2015.7298938.
    LI Guanbin and YU Yizhou. Visual saliency based on multiscale deep features[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 5455–5463. doi: 10.1109/CVPR.2015.7299184.
    LEE G, TAI Y W, and KIM J. Deep saliency with encoded low level distance map and high level features[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 660–668. doi: 10.1109/CVPR.2016.78.
    LIU Nian and HAN Junwei. DHSNet: Deep hierarchical saliency network for salient object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 678–686. doi: 10.1109/CVPR.2016.80.
    WANG Linzhao, WANG Lijun, LU Huchuan, et al. Saliency detection with recurrent fully convolutional networks[C]. The 14th European Conference on Computer Vision, Amsterdam, Netherlands, 2016: 825–841. doi: 10.1007/978-3-319-46493-0_50.
    ZHANG Xinsheng, GAO Teng, and GAO Dongdong. A new deep spatial transformer convolutional neural network for image saliency detection[J]. Design Automation for Embedded Systems, 2018, 22(3): 243–256. doi: 10.1007/s10617-018-9209-0
    ZHANG Jing, ZHANG Tong, DAI Yuchao, et al. Deep unsupervised saliency detection: A multiple noisy labeling perspective[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 9029–9038. doi: 10.1109/CVPR.2018.00941.
    CAO Feilong, LIU Yuehua, and WANG Dianhui. Efficient saliency detection using convolutional neural networks with feature selection[J]. Information Sciences, 2018, 456: 34–49. doi: 10.1016/j.ins.2018.05.006
    ZHU Dandan, DAI Lei, LUO Ye, et al. Multi-scale adversarial feature learning for saliency detection[J]. Symmetry, 2018, 10(10): 457–471. doi: 10.3390/sym10100457
    ZENG Yu, ZHUGE Yunzhi, LU Huchuan, et al. Multi-source weak supervision for saliency detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 6067–6076.
    SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. 2014, arXiv: 1409.1556.
    ALVAREZ J and PETERSSON L. DecomposeMe: Simplifying convNets for end-to-end learning[J]. 2016, arXiv: 1606.05426v1.
    LIU Tie, YUAN Zejian, SUN Jian, et al. Learning to detect a salient object[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(2): 353–367. doi: 10.1109/TPAMI.2010.70
    YAN Qiong, XU Li, SHI Jianping, et al. Hierarchical saliency detection[C]. 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 1155–1162. doi: 10.1109/CVPR.2013.153.
    LI Yin, HOU Xiaodi, KOCH C, et al. The secrets of salient object segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 280–287. doi: 10.1109/CVPR.2014.43.
    MOVAHEDI V and ELDER J H. Design and perceptual validation of performance measures for salient object segmentation[C]. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 49–56. doi: 10.1109/CVPRW.2010.5543739.
    LI Guanbin and YU Yizhou. Deep contrast learning for salient object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 478–487. doi: 10.1109/CVPR.2016.58.
    LUO Zhiming, MISHRA A, ACHKAR A, et al. Non-local deep features for salient object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6593–6601. doi: 10.1109/CVPR.2017.698.
    TU W C, HE Shengfeng, YANG Qingxiong, et al. Real-time salient object detection with a minimum spanning tree[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 2334–2342. doi: 10.1109/CVPR.2016.256.
    LI Xiaohui, LU Huchuan, ZHANG Lihe, et al. Saliency detection via dense and sparse reconstruction[C]. 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 2013: 2976–2983. doi: 10.1109/ICCV.2013.370.
Figures (7) / Tables (5)
Publication history
  • Received: 2019-04-08
  • Revised: 2019-08-30
  • Available online: 2020-01-21
  • Issue published: 2020-06-04
