SAR目標(biāo)增量識(shí)別中基于最大化非重合體積的樣例挑選方法

LI Bin, CUI Zongyong, WANG Haohan, ZHOU Zheng, TIAN Yu, CAO Zongjie

Citation: LI Bin, CUI Zongyong, WANG Haohan, ZHOU Zheng, TIAN Yu, CAO Zongjie. Exemplar Selection Based on Maximizing Non-overlapping Volume in SAR Target Incremental Recognition[J]. Journal of Electronics & Information Technology, 2024, 46(10): 3918-3927. doi: 10.11999/JEIT240217

SAR目標(biāo)增量識(shí)別中基于最大化非重合體積的樣例挑選方法

doi: 10.11999/JEIT240217
基金項(xiàng)目: 國(guó)家自然科學(xué)基金(62271116)
詳細(xì)信息
    作者簡(jiǎn)介:

    李斌:男,博士生,研究方向?yàn)镾AR目標(biāo)增量學(xué)習(xí)

    崔宗勇:男,博士,研究方向?yàn)镾AR目標(biāo)檢測(cè)

    汪浩瀚:女,碩士生,研究方向?yàn)镾AR目標(biāo)增量學(xué)習(xí)

    周正:男,博士生,研究方向?yàn)樾颖維AR目標(biāo)檢測(cè)

    田宇:男,博士生,研究方向?yàn)樵隽縎AR目標(biāo)檢測(cè)

    曹宗杰:男,博士,研究方向?yàn)镾AR目標(biāo)成像與識(shí)別

    通訊作者:

    曹宗杰 zjcao@uestc.edu.cn

  • 中圖分類號(hào): TN957.52

  • Abstract: To ensure that a Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) system can quickly adapt to new application environments, it must be able to learn new classes rapidly. Current SAR ATR systems have to retrain on all old-class samples whenever a new class is learned, which wastes large amounts of storage and prevents the recognition model from being updated quickly. Retaining a small number of old-class exemplars for subsequent incremental training is therefore the key to incremental recognition. To address this problem, this paper proposes Exemplar Selection based on Maximizing Non-overlapping Volume (ESMNV), an exemplar selection algorithm that focuses on the non-overlapping volume of the distribution. ESMNV reformulates exemplar selection for each known class as an asymptotic growth problem of the non-overlapping volume of the distribution, aiming to maximize the non-overlapping volume covered by the selected exemplars. ESMNV uses the similarity between distributions to represent the difference between their volumes. First, ESMNV maps the distribution of the target class into a Reproducing Kernel Hilbert Space (RKHS) with a kernel function and represents the distribution by its higher-order moments. It then uses the Maximum Mean Discrepancy (MMD) to measure the difference between the distribution of the target class and that of the selected exemplars. Finally, combined with a greedy algorithm, ESMNV progressively selects the exemplar that minimizes the difference between the selected-exemplar distribution and the target-class distribution, ensuring that the non-overlapping volume of the selected exemplars is maximized under a limited exemplar budget.
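    As a reading aid, the standard biased empirical estimate of the squared MMD between a class $X=\{x_i\}_{i=1}^{n}$ and an exemplar set $S=\{s_j\}_{j=1}^{m}$ is
    $$\widehat{\mathrm{MMD}}^2(X,S)=\frac{1}{n^2}\sum_{i,i'}k(x_i,x_{i'})+\frac{1}{m^2}\sum_{j,j'}k(s_j,s_{j'})-\frac{2}{nm}\sum_{i,j}k(x_i,s_j).$$
    The following is a minimal sketch of a greedy, MMD-minimizing exemplar picker in the spirit of the abstract, assuming an RBF kernel over pre-extracted deep features; the names rbf_kernel, mmd_squared, select_exemplars and the toy array feats are illustrative and not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2) between row-vector sets A and B.
    sq = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd_squared(X, S, gamma=1.0):
    # Biased empirical squared MMD between the full class X and the exemplar set S.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(S, S, gamma).mean()
            - 2.0 * rbf_kernel(X, S, gamma).mean())

def select_exemplars(X, m, gamma=1.0):
    # Greedily add, at each step, the candidate that keeps MMD(X, selected) smallest,
    # i.e. keeps the exemplar distribution closest to the class distribution.
    selected, remaining = [], list(range(len(X)))
    for _ in range(m):
        best_i = min(remaining,
                     key=lambda i: mmd_squared(X, X[selected + [i]], gamma))
        selected.append(best_i)
        remaining.remove(best_i)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 128))   # toy features of one known class
    print(select_exemplars(feats, m=5))   # indices of the 5 chosen exemplars
```

    Each greedy step re-evaluates the estimate above for every remaining candidate, so the chosen subset tracks the class distribution as closely as a budget of m exemplars allows.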
  • Figure 1  Three types of relationship between the selected-exemplar distribution $q$ and the candidate-sample distribution $e$

    Figure 2  Illustration of the non-overlapping volume of the selected exemplars

    Figure 3  Visualization of the exemplars selected by ESMNV for class 2S1 of the MSTAR dataset

    Figure 4  Further analysis of the distribution of the selected exemplars

    Figure 5  Average incremental accuracy of each method under different numbers of initial training classes N

    Figure 6  Average incremental accuracy of each method under different incremental step sizes T

    Figure 7  Average incremental accuracy of each method on SAR-AIRcraft-1.0 with different numbers of retained exemplars m

    Figure 8  Average incremental accuracy of each method on OpenSARShip with different numbers of retained exemplars m

    Table 1  Incremental recognition accuracy (%) with 5 retained exemplars

    Model      4       5       6       7       8       9       10
    CBesIL     99.62   79.96   67.23   58.44   53.88   47.93   41.88
    Random     99.62   83.32   69.98   61.56   56.73   49.96   47.21
    Herding    99.62   83.92   71.77   61.95   58.67   52.16   51.04
    DCBES      99.62   83.09   71.90   65.07   59.54   55.27   54.42
    ESMNV      99.62   87.15   75.60   69.32   66.95   61.90   60.49

    Table 2  Incremental recognition accuracy (%) with 10 retained exemplars

    Model      4       5       6       7       8       9       10
    CBesIL     99.62   89.13   80.10   74.44   70.58   68.63   65.31
    Random     99.62   90.09   80.46   78.05   73.25   70.38   68.68
    Herding    99.62   89.68   83.64   79.08   76.20   74.33   71.10
    DCBES      99.62   91.33   85.08   81.67   79.12   77.69   75.97
    ESMNV      99.62   92.87   86.03   85.40   83.53   81.62   81.05

    Table 3  Incremental recognition accuracy (%) with 15 retained exemplars

    Model      4       5       6       7       8       9       10
    CBesIL     99.62   92.25   85.76   83.21   80.51   79.04   75.35
    Random     99.62   93.00   87.06   86.28   84.19   80.64   79.70
    Herding    99.62   93.48   88.24   86.37   85.70   83.01   81.10
    DCBES      99.62   93.63   89.42   88.65   87.02   85.09   83.70
    ESMNV      99.62   95.53   92.15   90.59   89.73   89.28   87.31
  • [1] LI Jianwei, YU Zhentao, YU Lu, et al. A comprehensive survey on SAR ATR in deep-learning era[J]. Remote Sensing, 2023, 15(5): 1454. doi: 10.3390/rs15051454.
    [2] WANG Chengwei, LUO Siyi, PEI Jifang, et al. Crucial feature capture and discrimination for limited training data SAR ATR[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2023, 204: 291–305. doi: 10.1016/j.isprsjprs.2023.09.014.
    [3] XU Feng, WANG Haipeng, and JIN Yaqiu. Deep learning as applied in SAR target recognition and terrain classification[J]. Journal of Radars, 2017, 6(2): 136–148. doi: 10.12000/JR16130.
    [4] CARUANA R. Multitask learning[J]. Machine Learning, 1997, 28(1): 41–75. doi: 10.1023/A:1007379606734.
    [5] MCCLOSKEY M and COHEN N J. Catastrophic interference in connectionist networks: The sequential learning problem[J]. Psychology of Learning and Motivation, 1989, 24: 109–165. doi: 10.1016/S0079-7421(08)60536-8.
    [6] CHAUDHRY A, DOKANIA P K, AJANTHAN T, et al. Riemannian walk for incremental learning: Understanding forgetting and intransigence[C]. Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 2018: 556–572. doi: 10.1007/978-3-030-01252-6_33.
    [7] DANG Sihang, CAO Zongjie, CUI Zongyong, et al. Class boundary exemplar selection based incremental learning for automatic target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(8): 5782–5792. doi: 10.1109/TGRS.2020.2970076.
    [8] MITTAL S, GALESSO S, and BROX T. Essentials for class incremental learning[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 3528–3517. doi: 10.1109/CVPRW53098.2021.00390.
    [9] REBUFFI S A, KOLESNIKOV A, SPERL G, et al. iCaRL: Incremental classifier and representation learning[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 5533–5542. doi: 10.1109/CVPR.2017.587.
    [10] DE LANGE M, ALJUNDI R, MASANA M, et al. A continual learning survey: Defying forgetting in classification tasks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(7): 3366–3385. doi: 10.1109/TPAMI.2021.3057446.
    [11] LI Zhizhong and HOIEM D. Learning without forgetting[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(12): 2935–2947. doi: 10.1109/TPAMI.2017.2773081.
    [12] RUSU A A, RABINOWITZ N C, DESJARDINS G, et al. Progressive neural networks[J]. arXiv: 1606.04671, 2016. doi: 10.48550/arXiv.1606.04671.
    [13] RUDD E M, JAIN L P, SCHEIRER W J, et al. The extreme value machine[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 762–768. doi: 10.1109/TPAMI.2017.2707495.
    [14] SHAO Junming, HUANG Feng, YANG Qinli, et al. Robust prototype-based learning on data streams[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 30(5): 978–991. doi: 10.1109/TKDE.2017.2772239.
    [15] LI Bin, CUI Zongyong, CAO Zongjie, et al. Incremental learning based on anchored class centers for SAR automatic target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5235313. doi: 10.1109/TGRS.2022.3208346.
    [16] BORGWARDT K M, GRETTON A, RASCH M J, et al. Integrating structured biological data by kernel maximum mean discrepancy[J]. Bioinformatics, 2006, 22(14): e49–e57. doi: 10.1093/bioinformatics/btl242.
    [17] WANG Zhirui, KANG Yuzhuo, ZENG Xuan, et al. SAR-AIRcraft-1.0: High-resolution SAR aircraft detection and recognition dataset[J]. Journal of Radars, 2023, 12(4): 906–922. doi: 10.12000/JR23043.
    [18] HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi: 10.1109/JSTARS.2017.2755672.
    [19] LI Bin, CUI Zongyong, SUN Yuxuan, et al. Density coverage-based exemplar selection for incremental SAR automatic target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5211713. doi: 10.1109/TGRS.2023.3293509.
Publication history
  • Received: 2024-03-28
  • Revised: 2024-08-21
  • Published online: 2024-08-30
  • Issue published: 2024-10-30
