

An Anchor-free Method Based on Context Information Fusion and Interacting Branch for Ship Detection in SAR Images

QU Haicheng, GAO Jiankang, LIU Wanjun, WANG Xiaona

QU Haicheng, GAO Jiankang, LIU Wanjun, WANG Xiaona. An Anchor-free Method Based on Context Information Fusion and Interacting Branch for Ship Detection in SAR Images[J]. Journal of Electronics & Information Technology, 2022, 44(1): 380-389. doi: 10.11999/JEIT201059


doi: 10.11999/JEIT201059
Funds: The Young Scientists Fund of the National Natural Science Foundation of China (41701479), the Department of Education Fund of Liaoning Province (LJ2019JL010), the Discipline Innovation Team of Liaoning Technical University (LNTU20TD-23)
Detailed information
    Author biographies:

    QU Haicheng: male, born in 1981, associate professor; research interests: high-performance computing for remote sensing imagery, visual information computing, and object detection and recognition

    GAO Jiankang: male, born in 1996, master's student; research interest: object detection in remote sensing images

    LIU Wanjun: male, born in 1959, professor; research interests: digital image processing, moving object detection and tracking

    WANG Xiaona: female, born in 1994, master's student; research interest: digital image processing

    Corresponding author:

    GAO Jiankang, gjk_0825@163.com

  • 1) SSDD dataset download: https://zhuanlan.zhihu.com/p/143794468  2) SAR-Ship-Dataset download: https://pan.baidu.com/s/1PhSMkXVcuRM8M8xL15iBIQ
  • CLC number: TN911.73; TP751

  • Abstract: The sparse distribution of ship targets in SAR images and the design of anchor boxes strongly affect the accuracy and generalization of existing anchor-based SAR target detection methods. This paper therefore proposes an anchor-free ship detection method for SAR images based on context information fusion and interacting branches, named CI-Net. First, considering the diversity of ship scales in SAR images, a context fusion module is designed in the feature-extraction stage: it fuses high- and low-level information in a bottom-up manner and incorporates target context to refine the extracted features used for detection. Second, to address insufficient localization accuracy in complex scenes, an interacting-branch module is proposed: in the detection stage, the classification branch is used to optimize the bounding boxes of the regression branch, improving localization accuracy, while a newly added IoU branch acts on the classification branch to raise classification confidence and suppress low-quality detection boxes. Experimental results show that the proposed method performs well on the public SSDD and SAR-Ship-Dataset datasets, reaching average precision (AP) of 92.56% and 88.32%, respectively. Compared with other SAR ship detection methods, the proposed method not only achieves high accuracy; by discarding the complex anchor-related computation, it also attains a fast detection speed, which is of practical significance for real-time SAR target detection.
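The bottom-up fusion of high- and low-level features described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual module (which also involves convolutions and GCNet attention): each pyramid level is assumed to be half the spatial size of the one below it, and fusion is modeled as 2× max-pool downsampling plus element-wise addition.

```python
import numpy as np

def downsample2x(f):
    """2x spatial downsampling of an (H, W, C) feature map by max pooling."""
    h, w, c = f.shape
    return f[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def bottom_up_fuse(features):
    """Fuse a low-to-high feature pyramid bottom-up: each level receives
    the downsampled, already-fused level below it, so high-level features
    are enriched with low-level detail."""
    fused = [features[0]]
    for f in features[1:]:
        fused.append(f + downsample2x(fused[-1]))
    return fused

# A toy three-level pyramid; shapes halve at each level.
pyramid = [np.ones((8, 8, 16)), np.ones((4, 4, 16)), np.ones((2, 2, 16))]
out = bottom_up_fuse(pyramid)
```

The output pyramid keeps the per-level shapes while each higher level accumulates information propagated from below.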
  • Figure 1  Anchor-free detection model

    Figure 2  CI-Net detection framework

    Figure 3  Context fusion module

    Figure 4  GCNet structure

    Figure 5  Self-attention module

    Figure 6  Comparison of detection results

    Figure 7  Feature visualization of the context fusion module

    Figure 8  P-R curves of different methods
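The branch-interaction idea from the abstract — an IoU branch whose localization-quality estimate modulates the classification confidence so that low-quality boxes are suppressed — can be sketched as below. This is a hedged illustration, not the paper's code: the rescoring rule (a plain product of classification score and predicted IoU) and the corner-coordinate box format are assumptions.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def rescore(cls_score, iou_pred):
    """Final detection confidence: the classification score modulated by
    the IoU branch's quality estimate, pushing low-quality boxes down."""
    return cls_score * iou_pred

# A well-localized box keeps most of its confidence; a poorly
# localized one with the same class score is suppressed.
good = rescore(0.90, 0.85)  # ~0.765
poor = rescore(0.90, 0.30)  # ~0.27
```

With such rescoring, non-maximum suppression ranks boxes by joint classification and localization quality rather than classification confidence alone.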

    Table 1  Basic information of the ship datasets

    | Dataset | Sensors | Spatial resolution (m) | Polarization | Input image size | Scene |
    |---|---|---|---|---|---|
    | SSDD | RadarSat-2, TerraSAR-X, Sentinel-1 | 1~15 | VV, HH, VH, HV | 500×500 | nearshore and inshore areas |
    | SAR-Ship-Dataset | GF-3, Sentinel-1 | 3, 5, 8, 10, etc. | VV, HH, VH, HV | 256×256 | open sea areas |

    Table 2  Experimental results of the model

    | Method | Context fusion (CF) | Interacting branch (IB) | Recall (%) | Precision (%) | AP (%) | F1 (%) | fps |
    |---|---|---|---|---|---|---|---|
    | FCOS[14] | × | × | 88.64 | 88.44 | 86.27 | 88.54 | 23 |
    | Ours | √ | × | 92.23 | 86.60 | 90.69 | 89.32 | 29 |
    | FCOS[14] | × | √ | 90.31 | 93.41 | 88.42 | 91.83 | 22 |
    | Ours | √ | √ | 94.27 | 92.04 | 92.56 | 93.14 | 28 |

    Note: "×" means the module is not used; "√" means it is used. Bold values are the best in each column.

    Table 3  Detection performance of different methods on the SSDD dataset

    | Method | Single-stage | Anchor-free | Recall (%) | Precision (%) | AP (%) | F1 (%) | fps |
    |---|---|---|---|---|---|---|---|
    | Faster R-CNN | × | × | 85.39 | 84.18 | 83.07 | 84.78 | 11 |
    | RetinaNet | √ | × | 89.40 | 90.43 | 87.94 | 89.91 | 16 |
    | DCMSNN | × | × | 91.59 | 88.33 | 89.34 | 89.93 | 8 |
    | CI-Net (ours) | √ | √ | 94.27 | 92.04 | 92.56 | 93.14 | 28 |

    Table 4  Detection performance of different methods on the SAR-Ship-Dataset

    | Method | Single-stage | Anchor-free | Recall (%) | Precision (%) | AP (%) | F1 (%) | fps |
    |---|---|---|---|---|---|---|---|
    | Faster R-CNN | × | × | 84.30 | 84.47 | 81.77 | 84.39 | 13 |
    | RetinaNet | √ | × | 84.60 | 85.83 | 82.02 | 85.21 | 21 |
    | DCMSNN | × | × | 86.64 | 88.07 | 84.36 | 87.35 | 9 |
    | CI-Net (ours) | √ | √ | 90.28 | 88.14 | 88.32 | 89.20 | 34 |
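As a quick consistency check on the tables above, the F1 column is the harmonic mean of the precision and recall columns; the sketch below reproduces the CI-Net rows.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall, both in percent."""
    return 2 * precision * recall / (precision + recall)

# CI-Net on SSDD (Table 3): precision 92.04, recall 94.27
print(f"{f1(92.04, 94.27):.2f}")  # 93.14
# CI-Net on SAR-Ship-Dataset (Table 4): precision 88.14, recall 90.28
print(f"{f1(88.14, 90.28):.2f}")  # 89.20
```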
  • [1] YANG Guozheng, YU Jing, XIAO Chuangbai, et al. Ship wake detection in SAR images with complex background using morphological dictionary learning[J]. Acta Automatica Sinica, 2017, 43(10): 1713–1725. doi: 10.16383/j.aas.2017.c160274
    [2] LI Jianwei, QU Changwen, PENG Shujuan, et al. Ship detection in SAR images based on generative adversarial network and online hard examples mining[J]. Journal of Electronics & Information Technology, 2019, 41(1): 143–149. doi: 10.11999/JEIT180050
    [3] HOU Biao, CHEN Xingzhong, and JIAO Licheng. Multilayer CFAR detection of ship targets in very high resolution SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(4): 811–815. doi: 10.1109/LGRS.2014.2362955
    [4] LI Jianwei, QU Changwen, and SHAO Jiaqi. Ship detection in SAR images based on an improved faster R-CNN[C]. 2017 SAR in Big Data Era: Models, Methods and Applications, Beijing, China, 2017: 1–6. doi: 10.1109/BIGSARDATA.2017.8124934.
    [5] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/TPAMI.2016.2577031
    [6] JIAO Jiao, ZHANG Yue, SUN Hao, et al. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection[J]. IEEE Access, 2018, 6: 20881–20892. doi: 10.1109/ACCESS.2018.2825376
    [7] HU Changhua, CHEN Chen, HE Chuan, et al. SAR detection for small target ship based on deep convolutional neural network[J]. Journal of Chinese Inertial Technology, 2019, 27(3): 397–405, 414. doi: 10.13695/j.cnki.12-1222/o3.2019.03.018
    [8] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017: 936–944. doi: 10.1109/CVPR.2017.106.
    [9] CUI Zongyong, LI Qi, CAO Zongjie, et al. Dense attention pyramid networks for multi-scale ship detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 8983–8997. doi: 10.1109/TGRS.2019.2923988
    [10] LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: Single shot multibox detector[C]. The 14th European Conference on Computer Vision, Amsterdam, Netherlands, 2016: 21–37. doi: 10.1007/978-3-319-46448-0_2.
    [11] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 779–788. doi: 10.1109/CVPR.2016.91.
    [12] SHRIVASTAVA A, GUPTA A, and GIRSHICK R. Training region-based object detectors with online hard example mining[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 761–769. doi: 10.1109/CVPR.2016.89.
    [13] DUAN Kaiwen, BAI Song, XIE Lingxi, et al. CenterNet: Keypoint triplets for object detection[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, 2019: 6568–6577. doi: 10.1109/ICCV.2019.00667.
    [14] TIAN Zhi, SHEN Chunhua, CHEN Hao, et al. FCOS: Fully convolutional one-stage object detection[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, 2019: 9626–9635. doi: 10.1109/ICCV.2019.00972.
    [15] PANG Jiangmiao, CHEN Kai, SHI Jianping, et al. Libra R-CNN: Towards balanced learning for object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 821–830. doi: 10.1109/CVPR.2019.00091.
    [16] CAO Yue, XU Jiarui, LIN S, et al. GCNet: Non-local networks meet squeeze-excitation networks and beyond[C]. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, South Korea, 2019: 1971–1980. doi: 10.1109/ICCVW.2019.00246.
    [17] WANG Xiaolong, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7794–7803. doi: 10.1109/CVPR.2018.00813.
    [18] HU Jie, SHEN Li, and SUN Gang. Squeeze-and-excitation networks[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7132–7141. doi: 10.1109/CVPR.2018.00745.
    [19] LI Huan and TANG Jinglei. Dairy goat image generation based on improved-self-attention generative adversarial networks[J]. IEEE Access, 2020, 8: 62448–62457. doi: 10.1109/ACCESS.2020.2981496
    [20] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318–327. doi: 10.1109/TPAMI.2018.2858826
    [21] WANG Yuanyuan, WANG Chao, ZHANG Hong, et al. A SAR dataset of ship detection for deep learning under complex backgrounds[J]. Remote Sensing, 2019, 11(7): 765. doi: 10.3390/rs11070765
    [22] HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi: 10.1109/JSTARS.2017.2755672
    [23] KANG Miao, JI Kefeng, LENG Xiangguang, et al. Contextual region-based convolutional neural network with multilayer fusion for SAR ship detection[J]. Remote Sensing, 2017, 9(8): 860. doi: 10.3390/rs9080860
Publication history
  • Received: 2020-12-16
  • Revised: 2021-05-27
  • Available online: 2021-08-27
  • Published in issue: 2022-01-10
