A Causal Interventional SAR ATR Method with Limited Data via Dual Consistency
doi: 10.11999/JEIT240140
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Abstract: Improving generalization under limited training samples is an important research direction in Synthetic Aperture Radar Automatic Target Recognition (SAR ATR). To address a fundamental theoretical problem in this direction, this paper establishes a causal model of SAR ATR and shows that interferences in SAR images, such as background clutter and speckle, can be neglected when training samples are sufficient. Under limited-sample conditions, however, these factors become confounders that introduce spurious correlations into the extracted SAR image features and degrade SAR ATR performance. To identify and eliminate these spurious effects in the features, this paper proposes a limited-sample SAR ATR method based on dual consistency, comprising an intra-class feature consistency mask and an effect-consistency loss. First, based on the principle that discriminative features should be consistent within a class and different across classes, the intra-class feature consistency mask captures the class-consistent discriminative features of the target, separates out the confounded part of the target features, and thereby accurately estimates the spurious effects introduced by the interferences. Second, following the idea of invariant risk minimization, the effect-consistency loss converts the data-volume requirement of empirical risk minimization into a requirement of measuring the similarity among the effects of different samples, reducing the amount of data needed to eliminate the spurious effects and removing them from the features. The proposed dual-consistency method thus achieves causally faithful feature extraction and accurate recognition. Experiments on two benchmark datasets validate the rationality and effectiveness of the method, which improves SAR target recognition performance under limited-sample conditions.

Keywords:
- Synthetic Aperture Radar (SAR)
- Automatic Target Recognition (ATR)
- Limited samples
- Causal inference
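No implementation details appear in this excerpt, so the following is only a minimal PyTorch sketch of how the two components named above could be realized. The function names (intra_class_consistency_mask, effect_consistency_loss), the variance-based notion of channel consistency, and the threshold tau are all illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn.functional as F


def intra_class_consistency_mask(feats, labels, num_classes, tau=0.5):
    """Build a per-class binary mask over feature channels.

    Channels whose activation is consistently strong and stable within a
    class are kept as the class-consistent discriminative part; the rest is
    treated as the confounded part. This criterion is an assumption made
    for illustration.

    feats: (N, D) backbone features; labels: (N,) integer class labels.
    Returns a (num_classes, D) float mask.
    """
    feats = F.normalize(feats, dim=1)
    mask = torch.zeros(num_classes, feats.size(1), device=feats.device)
    for c in range(num_classes):
        fc = feats[labels == c]
        if fc.size(0) == 0:
            continue
        mean_act = fc.mean(dim=0)                    # channel-wise mean response
        var_act = fc.var(dim=0, unbiased=False)      # channel-wise spread in the class
        consistent = var_act < tau * var_act.mean()  # stable within the class
        salient = mean_act > mean_act.median()       # strong on average
        mask[c] = (consistent & salient).float()
    return mask


def effect_consistency_loss(feats, labels, mask):
    """Penalize disagreement among the estimated spurious effects.

    In the spirit of invariant risk minimization, the residual (feature
    minus its masked, class-consistent part) is read as the
    interference-induced effect, and its within-class variance is penalized
    so that all samples of a class carry the same (ideally null) effect.
    """
    residual = feats - feats * mask[labels]          # estimated spurious part
    loss = feats.new_zeros(())
    for c in labels.unique():
        rc = residual[labels == c]
        loss = loss + rc.var(dim=0, unbiased=False).mean()
    return loss / labels.unique().numel()
```

A training step would then minimize a standard classification loss on the masked features plus a weighted effect-consistency term; the weighting is likewise an assumption (see the ablation sketch after Table 5).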
Table 1 Training and test sets of the MSTAR dataset (training images at 17° depression angle, test images at 15°)

Class        Training count   Test count
BMP2-9563    233              195
BRDM2-E71    298              274
BTR60-7532   256              195
BTR70-c71    233              196
D7-92        299              274
2S1-b01      299              274
T62-A51      299              273
T72-132      232              196
ZIL131-E12   299              274
ZSU234-d08   299              274
Table 2 Recognition performance (%) of the ten MSTAR target classes with different numbers of training samples per class

Class     5      10     20     25     30     40     60     80     100
BMP2      73.33  83.59  92.31  94.87  95.90  97.95  98.97  97.44  98.46
BRDM2     77.74  94.89  93.80  96.72  97.08  98.18  99.27  99.64  98.91
BTR60     83.59  86.67  91.79  94.36  94.87  95.90  98.97  96.92  97.44
BTR70     70.92  90.82  92.35  95.92  96.94  98.47  97.96  98.98  98.98
D7        68.98  83.58  92.70  95.26  95.99  97.45  98.54  99.64  99.64
2S1       81.75  80.29  93.80  95.99  97.08  98.54  98.91  99.64  98.91
T62       74.36  90.11  92.67  95.60  97.07  98.53  97.44  99.63  99.63
T72       89.80  86.22  91.84  95.41  96.94  98.47  98.98  98.98  98.98
ZIL131    68.25  85.77  93.07  96.72  97.81  98.54  98.91  99.64  99.64
ZSU234    74.45  80.29  93.80  96.72  97.08  98.18  98.54  99.64  99.64
Average   76.13  86.38  93.00  96.07  97.01  98.24  98.79  99.32  99.35
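The exact K-shot sampling protocol behind Table 2 is not given in this excerpt; a plausible reading, assumed here, is that K images per class are drawn at random from the 17° training split of Table 1 while the full 15° test split is kept. A minimal sketch (sample_k_per_class is a hypothetical helper):

```python
import random
from collections import defaultdict


def sample_k_per_class(dataset, k, seed=0):
    """Draw k training images per class with a fixed seed for repeatability.

    dataset: iterable of (image_path, class_label) pairs, e.g. the 17°
    MSTAR training split. Returns the reduced list of pairs.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, label in dataset:
        by_class[label].append(path)
    subset = []
    for label, paths in sorted(by_class.items()):
        rng.shuffle(paths)
        subset.extend((p, label) for p in paths[:k])
    return subset
```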
Table 3 Training and test sets of the OpenSARShip dataset

Imaging conditions (all classes): VH and VV polarization, C band; resolution 5–20 m; incident angle 20°–45°; elevation angle ±11°; range × azimuth cell 20 m × 22 m

Class           Training samples   Test samples   Total
Bulk Carrier    200                475            675
Container Ship  200                811            1011
Tanker          200                354            554
Table 4 Recognition performance (%) of the three OpenSARShip ship classes with different numbers of training samples per class

Class           10     20     30     40     50     60     70     80     100    200
Bulk Carrier    65.89  63.58  65.68  75.16  60.21  66.11  68.21  65.26  72.84  73.47
Container Ship  71.27  75.46  75.96  76.57  82.49  79.04  80.02  85.20  83.35  90.75
Tanker          79.10  82.77  81.92  74.01  84.46  86.16  84.46  81.92  78.81  82.77
Average         71.51  73.75  74.41  75.66  76.58  76.92  77.66  78.86  79.41  84.12
Table 5 Ablation study: recognition performance (%) of different configurations with 20 training samples per class (Mask = intra-class feature consistency mask; Effect = effect-consistency loss)

Method  Mask  Effect  BMP2   BRDM2  BTR60  BTR70  D7     2S1    T62    T72    ZIL131  ZSU234  Average
V1      ×     ×       80.51  75.91  74.87  80.10  73.72  76.28  78.75  82.14  84.31   86.86   79.59
V2      √     ×       88.21  89.05  81.54  88.27  85.77  96.35  81.68  84.18  87.23   94.16   88.16
V3      √     √       92.31  93.80  91.79  92.35  92.70  93.80  92.67  91.84  93.07   93.80   93.00
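Configurations V1–V3 toggle the two components. Under the same assumptions as the sketch following the abstract, the combined objective could be wired as follows; total_loss, model.backbone, model.classifier, and the weight lam are hypothetical names, and effect_consistency_loss is the function defined in that earlier sketch:

```python
import torch.nn.functional as F


def total_loss(model, images, labels, mask, use_mask, use_effect, lam=0.1):
    """Hypothetical objective matching Table 5:
    V1 (no mask, no effect loss), V2 (mask only), V3 (mask + effect loss)."""
    raw = model.backbone(images)                      # (N, D) features
    feats = raw * mask[labels] if use_mask else raw   # keep class-consistent channels
    logits = model.classifier(feats)
    loss = F.cross_entropy(logits, labels)
    if use_effect:
        # penalize spurious effects on the unmasked features
        loss = loss + lam * effect_consistency_loss(raw, labels, mask)
    return loss
```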
Table 6 Comparison of SAR target recognition performance (%) on the MSTAR dataset (columns: training samples per class)

Category           Method             10     20     40     55     80     110    165    220    All data
Traditional        PCA+SVM [14]       –      76.43  87.95  –      92.48  –      –      –      94.32
Traditional        AdaBoost [14]      –      75.68  86.45  –      91.45  –      –      –      93.51
Traditional        DGM [14]           –      81.11  88.14  –      92.85  –      –      –      96.07
Data augmentation  GAN-CNN [14]       –      81.80  88.35  –      93.88  –      –      –      97.03
Data augmentation  MGAN-CNN [14]      –      85.23  90.82  –      94.91  –      –      –      97.81
Novel models       Deep CNN [15]      –      77.86  86.98  –      93.04  –      –      –      95.54
Novel models       Simple CNN [16]    –      75.88  –      –      –      –      –      –      –
Novel models       Dens-CapsNet [17]  80.26  92.95  96.50  –      –      –      –      –      99.75
Novel models       ASC-MACN [18]      62.85  79.46  –      –      –      –      –      –      99.42
Proposed           This paper         86.38  93.00  98.24  –      99.32  –      –      –      –
[1] CURLANDER J C and MCDONOUGH R N. Synthetic Aperture Radar: Systems and Signal Processing[M]. New York: Wiley, 1991.
[2] KREUCHER C. SAR-ATR using EO-based deep networks[C]. 2023 IEEE Radar Conference (RadarConf23), San Antonio, USA, 2023: 1–5. doi: 10.1109/RadarConf2351548.2023.10149584.
[3] MAO Deqing, YANG Jianyu, ZHANG Yongchao, et al. Angular superresolution of real aperture radar with high-dimensional data: Normalized projection array model and adaptive reconstruction[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5117216. doi: 10.1109/TGRS.2022.3203131.
[4] MAO Deqing, YANG Jianyu, TUO Xingyu, et al. Angular superresolution of real aperture radar for target scale measurement using a generalized hybrid regularization approach[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5109314. doi: 10.1109/TGRS.2023.3315310.
[5] CHOI J H, LEE M J, JEONG N H, et al. Fusion of target and shadow regions for improved SAR ATR[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5226217. doi: 10.1109/TGRS.2022.3165849.
[6] HUANG Zhongling, YAO Xiwen, LIU Ying, et al. Physically explainable CNN for SAR image classification[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 190: 25–37. doi: 10.1016/j.isprsjprs.2022.05.008.
[7] WANG Chenwei, LUO Siyi, PEI Jifang, et al. Crucial feature capture and discrimination for limited training data SAR ATR[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2023, 204: 291–305. doi: 10.1016/j.isprsjprs.2023.09.014.
[8] FENG Sijia, JI Kefeng, WANG Fulai, et al. PAN: Part attention network integrating electromagnetic characteristics for interpretable SAR vehicle target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5204617. doi: 10.1109/TGRS.2023.3256399.
[9] WANG Chenwei, LUO Siyi, HUANG Yulin, et al. SAR ATR method with limited training data via an embedded feature augmenter and dynamic hierarchical-feature refiner[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5216215. doi: 10.1109/TGRS.2023.3314501.
[10] PEARL J. Causal inference in statistics: An overview[J]. Statistics Surveys, 2009, 3: 96–146. doi: 10.1214/09-SS057.
[11] SCHÖLKOPF B, JANZING D, PETERS J, et al. On causal and anticausal learning[C]. Proceedings of the 29th International Conference on Machine Learning, Edinburgh, UK, 2012.
[12] GREENLAND S, PEARL J, and ROBINS J M. Causal diagrams for epidemiologic research[J]. Epidemiology, 1999, 10(1): 37–48. doi: 10.1097/00001648-199901000-00008.
[13] ZHANG Tianwen, ZHANG Xiaoling, KE Xiao, et al. HOG-ShipCLSNet: A novel deep learning network with HOG feature fusion for SAR ship classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5210322. doi: 10.1109/TGRS.2021.3082759.
[14] ZHENG Ce, JIANG Xue, and LIU Xingzhao. Semi-supervised SAR ATR via multi-discriminator generative adversarial network[J]. IEEE Sensors Journal, 2019, 19(17): 7525–7533. doi: 10.1109/JSEN.2019.2915379.
[15] MORGAN D A E. Deep convolutional neural networks for ATR from SAR imagery[C]. SPIE 9475, Algorithms for Synthetic Aperture Radar Imagery XXII, Baltimore, USA, 2015: 94750F. doi: 10.1117/12.2176558.
[16] LI Yibing, LI Xiang, SUN Qian, et al. SAR image classification using CNN embeddings and metric learning[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4002305. doi: 10.1109/LGRS.2020.3022435.
[17] WANG Quan, XU Haixia, YUAN Liming, et al. Dense capsule network for SAR automatic target recognition with limited data[J]. Remote Sensing Letters, 2022, 13(6): 533–543. doi: 10.1080/2150704X.2022.2044089.
[18] FENG Sijia, JI Kefeng, WANG Fulai, et al. Electromagnetic Scattering Feature (ESF) module embedded network based on ASC model for robust and interpretable SAR ATR[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5235415. doi: 10.1109/TGRS.2022.3208333.