

Compound Active Jamming Recognition for Zero-memory Incremental Learning

WU Zhenhua, CUI Jinxin, CAO Yice, ZHANG Qiang, ZHANG Lei, YANG Lixia

Citation: WU Zhenhua, CUI Jinxin, CAO Yice, ZHANG Qiang, ZHANG Lei, YANG Lixia. Compound Active Jamming Recognition for Zero-memory Incremental Learning[J]. Journal of Electronics & Information Technology, 2025, 47(1): 188-200. doi: 10.11999/JEIT240521


doi: 10.11999/JEIT240521
Funds: The National Natural Science Foundation of China (62201007, 62401007), the China Postdoctoral Science Foundation (2020M681992), and the Natural Science Foundation of Anhui Province (2308085QF199)
詳細(xì)信息
    作者簡介:

    吳振華:男,副教授,研究方向為雷達(dá)成像、雷達(dá)信號處理、雷達(dá)智能干擾對抗

    崔金鑫:男,碩士生,研究方向為雷達(dá)干擾識別、雷達(dá)信號處理

    曹宜策:女,講師,研究方向為遙感圖像的智能解譯、雷達(dá)信號處理、計算機(jī)視覺

    張強:男,研究員,研究方向為天基信號處理

    張磊:男,教授,研究方向為雷達(dá)信號處理

    楊利霞:男,教授,研究方向為電磁散射與逆散射、電波傳播及天線理論與設(shè)計、計算電磁學(xué)

    通訊作者:

    曹宜策 yccao@ahu.edu.cn

  • 中圖分類號: TN974

  • Abstract: In incomplete, highly dynamic active-jamming countermeasure environments, static models optimized on the multiple single-jamming types in the library cannot be updated quickly when facing out-of-library compound jamming with diverse types, variable parameters, and multiple combination patterns, and they struggle when the number of test samples is imbalanced. To address this, this paper proposes a radar compound active jamming recognition method based on zero-memory incremental learning. First, a meta-learning training scheme performs prototype learning on the single jamming types in the library, yielding an efficient feature extractor that can also extract features from out-of-library compound jamming. Then, a Zero-Memory Incremental Learning Network (ZMILN) is constructed on the basis of a hyperdimensional space and cosine-similarity computation: compound-jamming prototype vectors are mapped into the hyperdimensional space and stored, enabling dynamic updating of the recognition model. In addition, to handle compound jamming recognition under imbalanced sample counts, a Transductive Information Maximization (TIM) test module is designed; by adding a divergence constraint to the mutual-information loss function, the recognition model is further trained to cope with imbalanced test samples. Experimental results show that, after incremental learning over 4 single jamming types and 7 compound jamming types under imbalanced test conditions, the proposed method achieves an average recognition accuracy of 93.62%. By fully extracting knowledge from the multiple single jamming types in the library, the method achieves fast, dynamic recognition of out-of-library compound jamming under various combination conditions.
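The update scheme the abstract describes (a meta-learned feature extractor, one prototype per class stored in a hyperdimensional space, cosine-similarity matching, no replay of old samples) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the class names, dimensions, and the fixed random projection standing in for the hyperdimensional mapping are all hypothetical.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class PrototypeStore:
    """Zero-memory class-incremental classifier: each jamming class is
    represented only by one stored hypervector, so no past samples need
    to be replayed when a new class is added."""

    def __init__(self, feat_dim, hyper_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random projection into a high-dimensional space
        # (an assumption standing in for the paper's mapping).
        self.proj = rng.standard_normal((feat_dim, hyper_dim)) / np.sqrt(feat_dim)
        self.prototypes = {}  # class name -> stored, normalized hypervector

    def add_class(self, name, support_feats):
        # Class prototype = mean of the few-shot support features,
        # projected and normalized before storage.
        proto = support_feats.mean(axis=0) @ self.proj
        self.prototypes[name] = proto / np.linalg.norm(proto)

    def predict(self, feat):
        # Classify by nearest stored prototype under cosine similarity.
        h = feat @ self.proj
        return max(self.prototypes, key=lambda c: cosine(h, self.prototypes[c]))
```

Adding a new compound-jamming class then reduces to one `add_class` call on its few support samples, which is what makes the model update fast and memory-free.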
  • Figure 1  Time-frequency spectrograms of the radar echo and jamming signals

    Figure 2  Zero-memory incremental learning network

    Figure 3  Detailed structure of the prototype learning network

    Figure 4  Schematic of the prototype-space update

    Figure 5  Jamming recognition accuracy curves of different methods

    Figure 6  t-SNE visualization under the 1-way 5-shot setting

    Figure 7  Jamming recognition performance curves under different base settings

    Table 1  Radar jamming parameters

    Signal  Parameter                  Value range
    LFM     Signal width               10 μs
            Bandwidth                  50 μs
            Sampling frequency         125 MHz
    SMSP    Number of jamming copies   3~7
    SNJ     Number of retransmissions  3~5
            Slice length               10 μs
            Duty cycle                 0.5~0.8
    DFTJ    Number of false targets    3~7
            False-target delay         1~10 μs
    MISRJ   Number of retransmissions  3~5
            Slice length               10 μs
            Duty cycle                 0.5~0.8

    Table 2  Incremental jamming dataset configuration

    Stage        No.  Jamming type    Training samples  Test samples per run
    Base         1    SMSP            100               1
                 2    SNJ             100               2
                 3    DFTJ            100               2
                 4    MISRJ           100               8
    Incremental  5    SNJ+SMSP        5                 12
                 6    SMSP+MISRJ      5                 5
                 7    DFTJ+MISRJ      5                 9
                 8    SMSP+DFTJ       5                 5
                 9    SNJ+DFTJ        5                 12
                 10   SNJ+MISRJ       5                 7
                 11   SNJ+SMSP+DFTJ   5                 14

    Table 3  Effect of the TIM module on the model (%)

    Test distribution  TIM  Session accuracy (1 = base stage, 2–8 = incremental stages)       Avg.   Drop
                            1      2      3      4      5      6      7      8
    Balanced           √    100.00 100.00 97.95  97.60  96.61  95.72  94.88  94.62           97.17  5.38
    Balanced           ×    100.00 97.42  92.94  91.70  90.42  89.21  87.61  85.72           91.87  14.28
    Imbalanced         √    100.00 100.00 98.95  97.60  96.61  94.72  93.88  93.62           96.92  6.38
    Imbalanced         ×    99.85  95.22  93.71  87.83  86.44  84.72  72.51  80.66           87.61  19.19
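The TIM module ablated in Table 3 adds a divergence constraint to a mutual-information objective over the batch of test predictions. A minimal sketch of such an objective is given below; the softmax-probability input, the weights `alpha` and `beta`, and the choice of a KL-to-uniform penalty as the divergence term are assumptions for illustration, and the paper's exact constraint may differ.

```python
import numpy as np

def tim_objective(probs, alpha=1.0, beta=0.1):
    """Transductive information-maximization objective on query predictions.

    probs: (n_query, n_class) array of softmax outputs.
    Maximizing it rewards a high-entropy class marginal (queries spread
    over classes) and low per-sample entropy (confident predictions),
    while the KL penalty keeps the marginal from drifting too far from
    uniform, which is what must be controlled on imbalanced test sets.
    """
    eps = 1e-12
    marginal = probs.mean(axis=0)                               # predicted class marginal
    h_marginal = -(marginal * np.log(marginal + eps)).sum()     # entropy of the marginal
    h_cond = -(probs * np.log(probs + eps)).sum(axis=1).mean()  # mean conditional entropy
    uniform = np.full_like(marginal, 1.0 / len(marginal))
    kl_to_uniform = (marginal * np.log((marginal + eps) / uniform)).sum()
    # Mutual information I(X;Y) = H(Y) - H(Y|X), minus the divergence penalty.
    return h_marginal - alpha * h_cond - beta * kl_to_uniform
```

Confident, class-balanced predictions score higher than uniform (maximally uncertain) ones, which is the behaviour the Table 3 ablation measures.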

    Table 4  Jamming recognition results of different methods under the 1-way 5-shot setting

    Method        Session accuracy (%) (1 = base stage, 2–8 = incremental stages)            Avg.(%) Drop(%) Train(min) Test(s)
                  1      2      3      4      5      6      7      8
    Ft-CNN[8]     100.00 92.22  83.33  74.16  67.77  61.33  55.55  51.55                     73.23   48.45   50.59      15.73
    iCaRL[24]     100.00 91.14  85.72  85.83  83.51  82.66  80.75  78.47                     86.01   21.53   185.15     20.79
    TOPIC[16]     99.31  89.77  83.54  84.91  84.67  81.03  78.75  75.98                     84.74   23.33   141.96     19.86
    FACT[26]      97.55  97.77  93.80  92.57  91.29  88.74  85.45  83.12                     91.28   14.43   98.15      12.02
    CEC[23]       98.44  96.74  93.94  92.70  90.61  88.72  85.88  84.62                     91.45   13.82   169.64     16.95
    F2M[25]       100.00 95.71  93.44  87.70  86.41  84.72  82.45  80.74                     88.89   19.26   78.96      9.93
    Ours (ZMILN)  100.00 100.00 98.95  97.60  96.61  94.72  93.88  93.62                     96.92   6.38    62.45      36.95
  • [1] WEI Yiyin and YANG Wenhua. Study on typical jamming scenes in naval battle field and countermeasures of anti-ship missile[J]. Tactical Missile Technology, 2020(5): 1–8. doi: 10.16358/j.issn.1009-1300.2020.1.538.
    [2] ZHANG Xiang, LAN Lan, ZHU Shengqi, et al. Intelligent suppression of interferences based on reinforcement learning[J]. IEEE Transactions on Aerospace and Electronic Systems, 2024, 60(2): 1400–1415. doi: 10.1109/TAES.2023.3336643.
    [3] CUI Guolong, YU Xianxiang, WEI Wenqiang, et al. An overview of antijamming methods and future works on cognitive intelligent radar[J]. Journal of Radars, 2022, 11(6): 974–1002. doi: 10.12000/JR22191.
    [4] ZHOU Hongping, WANG Ziwei, and GUO Zhongyi. Overview on recognition algorithms of radar active jamming[J]. Journal of Data Acquisition and Processing, 2022, 37(1): 1–20. doi: 10.16337/j.1004-9037.2022.01.001.
    [5] QU Qizhe, WEI Shunjun, LIU Shan, et al. JRNet: Jamming recognition networks for radar compound suppression jamming signals[J]. IEEE Transactions on Vehicular Technology, 2020, 69(12): 15035–15045. doi: 10.1109/TVT.2020.3032197.
    [6] ZHANG Jiaxiang, LIANG Zhennan, ZHOU Chao, et al. Radar compound jamming cognition based on a deep object detection network[J]. IEEE Transactions on Aerospace and Electronic Systems, 2023, 59(3): 3251–3263. doi: 10.1109/TAES.2022.3224695.
    [7] LV Qinzhe, QUAN Yinghui, FENG Wei, et al. Radar deception jamming recognition based on weighted ensemble CNN with transfer learning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5107511. doi: 10.1109/TGRS.2021.3129645.
    [8] DU Jinbiao, FAN Weiwei, GONG Chen, et al. Aggregated-attention deformable convolutional network for few-shot SAR jamming recognition[J]. Pattern Recognition, 2024, 146: 109990. doi: 10.1016/j.patcog.2023.109990.
    [9] LUO Zhenyu, CAO Yunhe, YEO T S, et al. Few-shot radar jamming recognition network via time-frequency self-attention and global knowledge distillation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5105612. doi: 10.1109/TGRS.2023.3280322.
    [10] ZHOU Hongping, WANG Lei, GUO Zhongyi. Recognition of radar compound jamming based on convolutional neural network[J]. IEEE Transactions on Aerospace and Electronic Systems, 2023, 59(6): 7380–7394. doi: 10.1109/TAES.2023.3288080.
    [11] LI Boran, ZHANG Lei, DAI Jingwei, et al. FETTrans: Analysis of compound interference identification based on bidirectional dynamic feature adaptation of improved transformer[J]. IEEE Access, 2022, 10: 66321–66331. doi: 10.1109/ACCESS.2022.3182010.
    [12] MENG Yunyun, YU Lei, and WEI Yinsheng. Multi-label radar compound jamming signal recognition using complex-valued CNN with jamming class representation fusion[J]. Remote Sensing, 2023, 15(21): 5180. doi: 10.3390/rs15215180.
    [13] ZHOU Hongping, WANG Lei, MA Minghui, et al. Compound radar jamming recognition based on signal source separation[J]. Signal Processing, 2024, 214: 109246. doi: 10.1016/j.sigpro.2023.109246.
    [14] LV Qinzhe, FAN Hanxin, LIU Junliang, et al. Multilabel deep learning-based lightweight radar compound jamming recognition method[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 2521115. doi: 10.1109/TIM.2024.3400337.
    [15] KONG Yukai, XIA Senlin, DONG Luxin, et al. Compound jamming recognition via contrastive learning for distributed MIMO radars[J]. IEEE Transactions on Vehicular Technology, 2024, 73(6): 7892–7907. doi: 10.1109/TVT.2024.3358996.
    [16] TAO Xiaoyu, HONG Xiaopeng, CHANG Xinyuan, et al. Few-shot class-incremental learning[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 12183–12192. doi: 10.1109/CVPR42600.2020.01220.
    [17] LI Bin, CUI Zongyong, WANG Haohan, et al. SAR incremental automatic target recognition based on mutual information maximization[J]. IEEE Geoscience and Remote Sensing Letters, 2024, 21: 4005305. doi: 10.1109/LGRS.2024.3368063.
    [18] SERBES A. On the estimation of LFM signal parameters: Analytical formulation[J]. IEEE Transactions on Aerospace and Electronic Systems, 2018, 54(2): 848–860. doi: 10.1109/TAES.2017.2767978.
    [19] WANG J X. Meta-learning in natural and artificial intelligence[J]. Current Opinion in Behavioral Sciences, 2021, 38: 90–95. doi: 10.1016/j.cobeha.2021.01.002.
    [20] SANTORO A, BARTUNOV S, BOTVINICK M, et al. Meta-learning with memory-augmented neural networks[C]. The 33rd International Conference on International Conference on Machine Learning, New York, USA, 2016: 1842–1850.
    [21] KARUNARATNE G, SCHMUCK M, LE GALLO M, et al. Robust high-dimensional memory-augmented neural networks[J]. Nature Communications, 2021, 12(1): 2468. doi: 10.1038/s41467-021-22364-0.
    [22] VEILLEUX O, BOUDIAF M, PIANTANIDA P, et al. Realistic evaluation of transductive few-shot learning[C]. Proceedings of the 35th International Conference on Neural Information Processing Systems, 2021: 711.
    [23] ZHANG Chi, SONG Nan, LIN Guosheng, et al. Few-shot incremental learning with continually evolved classifiers[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 12455–12464. doi: 10.1109/CVPR46437.2021.01227.
    [24] REBUFFI S A, KOLESNIKOV A, SPERL G, et al. iCaRL: Incremental classifier and representation learning[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2001–2010. doi: 10.1109/cvpr.2017.587.
    [25] SHI Guangyuan, CHEN Jiaxin, ZHANG Wenlong, et al. Overcoming catastrophic forgetting in incremental few-shot learning by finding flat minima[C]. The 35th International Conference on Neural Information Processing Systems, 2021: 517.
    [26] ZHOU Dawei, WANG Fuyun, YE Hanjia, et al. Forward compatible few-shot class-incremental learning[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 9046–9056. doi: 10.1109/cvpr52688.2022.00884.
Publication history
  • Received: 2024-06-25
  • Revised: 2024-11-07
  • Published online: 2024-11-13
  • Issue date: 2025-01-31
