SAR目標(biāo)增量識(shí)別中基于最大化非重合體積的樣例挑選方法
doi: 10.11999/JEIT240217
-
電子科技大學(xué)信息與通信工程學(xué)院 成都 611731
Exemplar Selection Based on Maximizing Non-overlapping Volume in SAR Target Incremental Recognition
-
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
-
Abstract: To ensure that a Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) system can adapt quickly to new application environments, it must be able to learn new classes rapidly. Current SAR ATR systems retrain on all old-class samples whenever a new class is learned, which wastes storage resources and prevents the recognition model from being updated quickly. Retaining a small number of old-class exemplars for subsequent incremental training is therefore the key to incremental recognition. To address this issue, this paper proposes Exemplar Selection based on Maximizing Non-overlapping Volume (ESMNV), an exemplar selection algorithm that emphasizes the non-overlapping volume of the distribution. ESMNV transforms the exemplar selection problem for each known class into an asymptotic growth problem of the non-overlapping volume of the distribution, aiming to maximize the non-overlapping volume covered by the distribution of the selected exemplars, and it uses the similarity between distributions to represent differences in volume. First, ESMNV maps the distribution of the target class into a Reproducing Kernel Hilbert Space (RKHS) with a kernel function and represents the distribution by its higher-order moments. It then uses the Maximum Mean Discrepancy (MMD) to measure the difference between the distribution of the target class and that of the selected exemplars. Finally, combined with a greedy algorithm, ESMNV progressively selects the exemplar that minimizes the difference between the distribution of the selected exemplars and that of the target class, ensuring that the non-overlapping volume of the selected exemplars is maximized under a limited exemplar budget.
關(guān)鍵詞:
- SAR目標(biāo)增量識(shí)別 /
- 樣例挑選 /
- 非重合體積 /
- 最大均值差異 /
- 貪心算法
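A minimal sketch of the greedy loop described in the abstract follows, assuming pre-extracted feature vectors and a Gaussian RBF kernel; the names esmnv_select and rbf_kernel and the parameters gamma and budget are illustrative placeholders, not the paper's actual implementation.

import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise Gaussian RBF kernel matrix between rows of a and rows of b.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def esmnv_select(features, budget, gamma=1.0):
    # Greedily pick 'budget' exemplars whose empirical distribution has the
    # smallest squared MMD to the full class distribution (equivalently, the
    # largest coverage of its non-overlapping volume).
    n = features.shape[0]
    K = rbf_kernel(features, features, gamma)      # n x n Gram matrix
    class_term = K.mean()                          # (1/n^2) sum_ij k(x_i, x_j), constant
    selected, remaining = [], list(range(n))
    for _ in range(min(budget, n)):
        best_idx, best_mmd = None, np.inf
        for c in remaining:
            cand = selected + [c]
            cross = K[:, cand].mean()              # (1/(n*m)) sum_ij k(x_i, s_j)
            exem = K[np.ix_(cand, cand)].mean()    # (1/m^2) sum_ij k(s_i, s_j)
            mmd2 = class_term - 2.0 * cross + exem
            if mmd2 < best_mmd:
                best_idx, best_mmd = c, mmd2
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected

# Toy usage: select 10 exemplars from 200 synthetic 64-dim class features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))
print(esmnv_select(feats, budget=10))

Each greedy step keeps the exemplar whose addition yields the smallest class-versus-exemplar MMD, so the retained set's distribution grows toward the target class distribution step by step, which is the asymptotic-growth view described in the abstract.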
Table 1 Incremental recognition accuracy when 5 exemplars are retained (%); columns give the number of classes learned

Model    4      5      6      7      8      9      10
CBesIL   99.62  79.96  67.23  58.44  53.88  47.93  41.88
Random   99.62  83.32  69.98  61.56  56.73  49.96  47.21
Herding  99.62  83.92  71.77  61.95  58.67  52.16  51.04
DCBES    99.62  83.09  71.90  65.07  59.54  55.27  54.42
ESMNV    99.62  87.15  75.60  69.32  66.95  61.90  60.49
Table 2 Incremental recognition accuracy when 10 exemplars are retained (%); columns give the number of classes learned

Model    4      5      6      7      8      9      10
CBesIL   99.62  89.13  80.10  74.44  70.58  68.63  65.31
Random   99.62  90.09  80.46  78.05  73.25  70.38  68.68
Herding  99.62  89.68  83.64  79.08  76.20  74.33  71.10
DCBES    99.62  91.33  85.08  81.67  79.12  77.69  75.97
ESMNV    99.62  92.87  86.03  85.40  83.53  81.62  81.05
Table 3 Incremental recognition accuracy when 15 exemplars are retained (%); columns give the number of classes learned

Model    4      5      6      7      8      9      10
CBesIL   99.62  92.25  85.76  83.21  80.51  79.04  75.35
Random   99.62  93.00  87.06  86.28  84.19  80.64  79.70
Herding  99.62  93.48  88.24  86.37  85.70  83.01  81.10
DCBES    99.62  93.63  89.42  88.65  87.02  85.09  83.70
ESMNV    99.62  95.53  92.15  90.59  89.73  89.28  87.31