An Open Set Recognition Method for SAR Targets Combining Unknown Feature Generation and Classification Score Modification
doi: 10.11999/JEIT240138
National Key Laboratory of Radar Signal Processing, Xidian University, Xi'an 710071, China
Keywords:
- SAR target recognition
- Open set recognition
- Unknown-class feature generation
- Extreme value theory
- Classification score modification
Abstract: Existing Synthetic Aperture Radar (SAR) target recognition methods are mostly limited to the closed-set assumption, which considers that the training target categories in the training template library cover all categories to be tested, and is therefore unsuitable for open environments where both known and unknown classes are present. To address SAR target recognition with an incomplete set of target categories in the training template library, an open-set SAR target recognition method combining unknown feature generation with classification score modification is proposed in this paper. Firstly, a prototype network is exploited to obtain high recognition accuracy on known classes, and potential unknown features are then generated based on prior knowledge to enhance the discrimination between known and unknown classes. After the prototype network is updated, the boundary features of each known class are selected and the distance of each boundary feature to the corresponding class prototype, i.e., the maximum distance, is calculated. Subsequently, the maximum distribution region of each known class is determined by probability fitting of its maximum distances using extreme value theory. In the testing phase, on the basis of predicting closed-set classification scores by measuring the distance between the test sample features and each known-class prototype, the probability of each distance under the corresponding known class's maximum-distance distribution is calculated, and the closed-set classification scores are corrected to automatically determine the rejection probability. Experiments on the measured MSTAR dataset show that the proposed method can effectively represent the distribution of unknown-class features and enhance the discriminability of known- and unknown-class features in the feature space, thus achieving both accurate recognition of known-class targets and accurate rejection of unknown-class targets.
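The test-phase score correction summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the closed-set score is assumed to be a softmax over negative squared prototype distances, `weibull_params` holds the per-class fitted (scale, shape) pairs, and all function and variable names are hypothetical.

```python
import numpy as np
from scipy.stats import weibull_min

def open_set_scores(feature, prototypes, weibull_params):
    """Correct closed-set scores with per-class extreme-distance probabilities.

    feature        : (D,) test feature vector
    prototypes     : (N_k, D) known-class prototypes
    weibull_params : list of (scale lambda_i, shape k_i) fitted per class
    """
    # squared Euclidean distance to every known-class prototype
    d = np.sum((prototypes - feature) ** 2, axis=1)
    # closed-set scores: softmax over negative distances (assumed form)
    e = np.exp(-d - np.max(-d))
    closed = e / e.sum()
    # probability that each distance already lies in the extreme
    # (maximum-distance) region of its class, from the fitted Weibull CDF
    w = np.array([weibull_min.cdf(di, c=k, scale=lam)
                  for di, (lam, k) in zip(d, weibull_params)])
    known = closed * (1.0 - w)   # down-weight classes whose distance looks extreme
    reject = 1.0 - known.sum()   # probability mass reassigned to the unknown class
    return np.append(known, reject)
```

A test sample is rejected as unknown when the last entry of the returned vector dominates, so no manually tuned rejection threshold is needed.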
1 Unknown-class feature generation and model update procedure
Input: training data $\left\{ {{{\boldsymbol{X}}_i}} \right\}_{i = 1}^{{N_{\mathrm{k}}}}$; initialized class prototypes ${{\boldsymbol{M}}_0}$ in the feature space; initial parameters ${\theta _0}$ of the feature extraction network ${\boldsymbol{E}}$; trade-off hyperparameter $\lambda$; number of known classes ${N_{\mathrm{k}}}$; selection ratio $K$ of inner-boundary features per known class; learning rate $\mu$; Beta distribution parameters $\alpha$, $\beta$; iteration counter $t \leftarrow 0$.
Output: prototype parameters ${\boldsymbol{M}}$; updated network parameters $\theta$.
(1) while not converged do
(2) $t \leftarrow t + 1$;
(3) At ratio $K$, select the features with the smallest norms from each known class, denoted $\left\{ {\left( {{{\boldsymbol{f}}_{{\theta ^t}}}\left( {{{\boldsymbol{X}}_{1,1}}} \right), \cdots ,{{\boldsymbol{f}}_{{\theta ^t}}}\left( {{{\boldsymbol{X}}_{1,k}}} \right), \cdots } \right), \cdots ,\left( {{{\boldsymbol{f}}_{{\theta ^t}}}\left( {{{\boldsymbol{X}}_{i,1}}} \right), \cdots ,{{\boldsymbol{f}}_{{\theta ^t}}}\left( {{{\boldsymbol{X}}_{i,m}}} \right), \cdots } \right)} \right\}$;
(4) for $i = 1:{N_{\mathrm{k}}}$ do
(5) for $j = i + 1:{N_{\mathrm{k}}}$ do
(6) arbitrarily select indices $k$, $m$;
(7) generate an unknown-class feature as the convex combination of known-class features, ${{\boldsymbol{X}}_{{\mathrm{GEN}}}} = \gamma {{\boldsymbol{f}}_{{\theta ^t}}}\left( {{{\boldsymbol{X}}_{i,k}}} \right) + \left( {1 - \gamma } \right){{\boldsymbol{f}}_{{\theta ^t}}}\left( {{{\boldsymbol{X}}_{j,m}}} \right)$, where the mixing coefficient $\gamma \sim {\text{Beta}}\left( {\alpha ,\beta } \right)$;
(8) compute the known-class prototype loss $L_{{\text{pro}}}^t = {L_{{\text{pro}}}}\left( {{\boldsymbol{X}};\theta ,{\boldsymbol{M}}} \right)$;
(9) compute the unknown-class distance loss $L_{{\text{GEN}}}^t = {{H}}\left( {{{\mathrm{softmax}}} \left( {d\left( {{{\boldsymbol{X}}_{{\text{GEN}}}},{\boldsymbol{M}}} \right)} \right),{{{U}}_{{N_{\mathrm{k}}}}}} \right)$;
(10) compute the total loss ${L^t} = L_{{\text{pro}}}^t + \lambda L_{{\text{GEN}}}^t$;
(11) update the prototypes and network parameters $\left\{ {\theta ,{\boldsymbol{M}}} \right\}$: ${\theta ^{t + 1}} = {\theta ^t} - \mu \dfrac{{\partial {L^t}}}{{\partial {\theta ^t}}}$, ${{\boldsymbol{M}}^{t + 1}} = {{\boldsymbol{M}}^t} - \mu \dfrac{{\partial {L^t}}}{{\partial {{\boldsymbol{M}}^t}}}$;
(12) end for
(13) end for
(14) end while
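Steps (7) and (9) of the procedure above can be sketched in NumPy. This is an illustrative sketch under two assumptions: features are plain vectors, and the distance-based class posterior inside the cross-entropy is a softmax over negative squared prototype distances; the names are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_unknown(f_i, f_j, alpha=2.0, beta=2.0):
    # step (7): convex combination of two known-class features,
    # with the mixing coefficient drawn from a Beta distribution
    gamma = rng.beta(alpha, beta)
    return gamma * f_i + (1.0 - gamma) * f_j

def unknown_distance_loss(x_gen, prototypes):
    # step (9): cross-entropy between the distance-based class posterior of
    # the generated feature and the uniform distribution U_{N_k}; it is
    # minimized when the feature is equally far from every prototype
    d = np.sum((prototypes - x_gen) ** 2, axis=1)
    p = np.exp(-d) / np.sum(np.exp(-d))  # softmax over negative distances (assumed form)
    u = np.full(len(p), 1.0 / len(p))
    return float(-np.sum(u * np.log(p + 1e-12)))
```

The loss is bounded below by $\ln N_{\mathrm{k}}$ and reaches that bound only for features equidistant from all prototypes, which is what pushes generated features into the gaps between known classes.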
2 Extreme value fitting procedure
Input: training data $\left\{ {{{\boldsymbol{X}}_i}} \right\}_{i = 1}^{{N_{\mathrm{k}}}}$; known-class prototypes ${\boldsymbol{M}}$ in the feature space; feature extraction network parameters $\theta$; selection ratio $L$ of extreme values used for fitting the extreme value distribution.
Output: Weibull distribution parameters $ \left\{ {{F_{{\text{Weibull}}}}\left( {{\boldsymbol{X}};{\lambda _i},{k_i}} \right)} \right\}_{i = 1}^{{N_{\mathrm{k}}}} $ of the maximum distances from each known class's features to its class prototype.
(1) for $ i = 1:{N_{\mathrm{k}}} $ do
(2) pass the training data ${{\boldsymbol{X}}_i}$ through the feature extraction network to obtain the corresponding features ${{\boldsymbol{f}}_\theta }\left( {{{\boldsymbol{X}}_i}} \right)$;
(3) for each class, compute the distances between the sample features and the class prototype, $ {d_i} = \left\| {{{\boldsymbol{f}}_\theta }\left( {{{\boldsymbol{X}}_i}} \right) - {{\boldsymbol{m}}_i}} \right\|_2^2 $; sort each class's distances in ascending order and select the largest ones at ratio $L$ to form the distance vector $ {{\boldsymbol{D}}_i} = \left[ {{d_{i1}},{d_{i2}},\cdots} \right] $, where $ {d_{ik}} $ denotes the $k$-th extreme-value sample of the $i$-th known class;
(4) fit the extreme value distribution parameters ${\lambda _i}$, ${k_i}$ of each class by maximum likelihood estimation;
(5) end for
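The per-class fit can be sketched with SciPy's `weibull_min`. Assumptions in this sketch: squared Euclidean distances, the Weibull location parameter fixed at zero, and a hypothetical `tail_ratio` argument playing the role of the selection ratio $L$.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_tail_weibull(features, prototype, tail_ratio=0.1):
    """Fit a Weibull to the largest prototype distances of one known class.

    Returns the (scale lambda_i, shape k_i) pair of the fitted distribution.
    """
    # step (3): squared Euclidean distances to the class prototype, sorted
    d = np.sort(np.sum((features - prototype) ** 2, axis=1))
    n_tail = max(int(np.ceil(tail_ratio * len(d))), 2)
    tail = d[-n_tail:]  # keep only the largest distances (the per-class extremes)
    # step (4): maximum-likelihood fit with the location parameter fixed at zero
    k, _, lam = weibull_min.fit(tail, floc=0.0)
    return lam, k
```

Calling this once per known class yields the $\{(\lambda_i, k_i)\}_{i=1}^{N_{\mathrm{k}}}$ parameters consumed by the score-correction step at test time.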
Table 1 Training and test set division of the 10 classes (10 types) in the MSTAR dataset
Class | 2S1 | BMP2 | BRDM2 | BTR70 | BTR60 | D7 | T62 | T72 | ZIL131 | ZSU234
Training samples | 299 | 233 | 298 | 233 | 256 | 299 | 299 | 232 | 299 | 299
Test samples | 274 | 196 | 274 | 196 | 195 | 274 | 273 | 196 | 274 | 274
Table 2 Known-class and unknown-class divisions under different openness levels
Openness (%) | Trial | Known classes | Unknown classes
9.25 | Trial 1 | BMP2 BTR70 T72 2S1 BRDM2 BTR60 D7 | T62 ZIL131 ZSU234
9.25 | Trial 2 | BMP2 BTR60 2S1 D7 T62 ZIL131 ZSU234 | BTR70 T72 BRDM2
9.25 | Trial 3 | BTR70 T72 BRDM2 BTR60 D7 T62 ZIL131 | BMP2 2S1 ZSU234
9.25 | Trial 4 | 2S1 BRDM2 BTR60 D7 T62 ZIL131 ZSU234 | BMP2 BTR70 T72
9.25 | Trial 5 | BTR70 2S1 T72 BTR60 T62 ZIL131 ZSU234 | BMP2 BRDM2 D7
13.40 | Trial 1 | BMP2 BTR70 T72 2S1 BRDM2 BTR60 | D7 T62 ZIL131 ZSU234
13.40 | Trial 2 | BMP2 BTR60 2S1 D7 T62 ZIL131 | ZSU234 BTR70 T72 BRDM2
13.40 | Trial 3 | BTR70 T72 BRDM2 BTR60 D7 T62 | ZIL131 BMP2 2S1 ZSU234
13.40 | Trial 4 | 2S1 BRDM2 ZSU234 D7 T62 ZIL131 | BMP2 BTR70 T72 BTR60
13.40 | Trial 5 | BTR70 2S1 T72 BTR60 T62 ZSU234 | ZIL131 BMP2 BRDM2 D7
24.41 | Trial 1 | BMP2 BTR70 BTR60 D7 | BRDM2 T62 ZIL131 ZSU234 2S1 T72
24.41 | Trial 2 | BMP2 BTR60 2S1 D7 | T62 ZIL131 ZSU234 BTR70 T72 BRDM2
24.41 | Trial 3 | BTR70 T72 BRDM2 BTR60 | D7 T62 ZIL131 BMP2 2S1 ZSU234
24.41 | Trial 4 | 2S1 BRDM2 ZSU234 D7 | T62 ZIL131 BMP2 BTR70 T72 BTR60
24.41 | Trial 5 | BTR70 2S1 T72 ZSU234 | BTR60 T62 ZIL131 BMP2 BRDM2 D7
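The three openness values in Table 2 follow the definition of Scheirer et al. [22]; a quick check, assuming the standard formula with the number of known training classes and the total number of classes seen at test time:

```python
import math

def openness(n_known, n_total):
    # openness of Scheirer et al. [22]:
    # 1 - sqrt(2 * N_train / (N_train + N_test))
    return 1.0 - math.sqrt(2.0 * n_known / (n_known + n_total))

# the three splits of Table 2: 7, 6 and 4 known classes out of 10
for n_known in (7, 6, 4):
    print(f"{100 * openness(n_known, 10):.2f}")  # prints 9.25, 13.40, 24.41
```

This confirms the split sizes: 7, 6 and 4 known classes correspond to openness levels of 9.25%, 13.40% and 24.41%, respectively.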
Table 3 Open-set experimental results of the proposed and comparison methods (%)
Open-set recognition method | Openness (%) | Accuracy | Precision | Recall | F1-score
Softmax + probability threshold | 9.25 | 82.14 | 84.68 | 90.58 | 87.82
Softmax + probability threshold | 13.40 | 76.09 | 68.45 | 86.92 | 70.45
Softmax + probability threshold | 24.41 | 60.45 | 52.82 | 84.28 | 61.26
GCPL[8] | 9.25 | 88.41 | 94.48 | 93.88 | 93.69
GCPL[8] | 13.40 | 80.52 | 76.14 | 88.21 | 78.86
GCPL[8] | 24.41 | 64.85 | 60.74 | 86.52 | 68.43
RPL[23] | 9.25 | 88.83 | 94.82 | 93.97 | 94.02
RPL[23] | 13.40 | 82.76 | 77.29 | 89.65 | 79.74
RPL[23] | 24.41 | 65.25 | 63.48 | 85.02 | 68.48
DIAS[12] | 9.25 | 93.08 | 95.90 | 96.03 | 96.01
DIAS[12] | 13.40 | 84.25 | 82.36 | 89.42 | 82.14
DIAS[12] | 24.41 | 70.99 | 68.17 | 88.90 | 72.43
Proposed method | 9.25 | 94.38 | 96.03 | 96.58 | 96.86
Proposed method | 13.40 | 84.62 | 83.40 | 92.38 | 84.68
Proposed method | 24.41 | 73.99 | 70.25 | 89.24 | 74.28
Table 4 Comparison between the proposed method and existing SAR open-set recognition methods (%)
Table 5 Ablation experiment settings
Experiment No. | Unknown-class feature generation | Classification score modification | F1-score (%)
(1) | √ | × | 72.48
(2) | √ | √ | 74.28
[1] JIN Yaqiu. Multimode remote sensing intelligent information and target recognition: Physical intelligence of microwave vision[J]. Journal of Radars, 2019, 8(6): 710–716. doi: 10.12000/JR19083.
[2] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91–110. doi: 10.1023/B:VISI.0000029664.99615.94.
[3] LI Lu, DU Lan, HE Haonan, et al. Multi-level feature fusion SAR automatic target recognition based on deep forest[J]. Journal of Electronics & Information Technology, 2021, 43(3): 606–614. doi: 10.11999/JEIT200685.
[4] LIN Huiping, WANG Haipeng, XU Feng, et al. Target recognition for SAR images enhanced by polarimetric information[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5204516. doi: 10.1109/TGRS.2024.3361931.
[5] ZENG Zhiqiang, SUN Jinping, YAO Xianxun, et al. SAR target recognition via information dissemination networks[C]. 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, USA, 2023: 7019–7022. doi: 10.1109/IGARSS52108.2023.10282727.
[6] MENDES JÚNIOR P R, DE SOUZA R M, DE O. WERNECK R, et al. Nearest neighbors distance ratio open-set classifier[J]. Machine Learning, 2017, 106(3): 359–386. doi: 10.1007/s10994-016-5610-8.
[7] XIA Ziheng, WANG Penghui, DONG Ganggang, et al. Radar HRRP open set recognition based on extreme value distribution[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5102416. doi: 10.1109/TGRS.2023.3257879.
[8] YANG Hongming, ZHANG Xuyao, YIN Fei, et al. Convolutional prototype network for open set recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(5): 2358–2370. doi: 10.1109/TPAMI.2020.3045079.
[9] LU Jing, XU Yunlu, LI Hao, et al. PMAL: Open set recognition via robust prototype mining[C]. The 36th AAAI Conference on Artificial Intelligence, Vancouver, British Columbia, Canada, 2022: 1872–1880. doi: 10.1609/aaai.v36i2.20081.
[10] HUANG Hongzhi, WANG Yu, HU Qinghua, et al. Class-specific semantic reconstruction for open set recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(4): 4214–4228. doi: 10.1109/TPAMI.2022.3200384.
[11] XIA Ziheng, WANG Penghui, DONG Ganggang, et al. Adversarial kinetic prototype framework for open set recognition[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(7): 9238–9251. doi: 10.1109/TNNLS.2022.3231924.
[12] MOON W J, PARK J, SEONG H S, et al. Difficulty-aware simulator for open set recognition[C]. The 17th European Conference on Computer Vision, Tel Aviv, Israel, 2022: 365–381. doi: 10.1007/978-3-031-19806-9_21.
[13] ENGELBRECHT E R and DU PREEZ J A. On the link between generative semi-supervised learning and generative open-set recognition[J]. Scientific African, 2023, 22: e01903. doi: 10.1016/j.sciaf.2023.e01903.
[14] DANG Sihang, CAO Zongjie, CUI Zongyong, et al. Open set SAR target recognition using class boundary extracting[C]. The 6th Asia-Pacific Conference on Synthetic Aperture Radar, Xiamen, China, 2019: 1–4. doi: 10.1109/APSAR46974.2019.9048316.
[15] GENG Xiaojing, DONG Ganggang, XIA Ziheng, et al. SAR target recognition via random sampling combination in open-world environments[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023, 16: 331–343. doi: 10.1109/JSTARS.2022.3225882.
[16] LI Yue, REN Haohao, YU Xuelian, et al. Threshold-free open-set learning network for SAR automatic target recognition[J]. IEEE Sensors Journal, 2024, 24(5): 6700–6708. doi: 10.1109/JSEN.2024.3354966.
[17] XIA Ziheng, WANG Penghui, DONG Ganggang, et al. Spatial location constraint prototype loss for open set recognition[J]. Computer Vision and Image Understanding, 2023, 229: 103651. doi: 10.1016/j.cviu.2023.103651.
[18] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
[19] ZHANG Hongyi, CISSÉ M, DAUPHIN Y N, et al. mixup: Beyond empirical risk minimization[C]. The 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
[20] FISHER R A and TIPPETT L H C. Limiting forms of the frequency distribution of the largest or smallest member of a sample[J]. Mathematical Proceedings of the Cambridge Philosophical Society, 1928, 24(2): 180–190. doi: 10.1017/S0305004100015681.
[21] ROSS T D, WORRELL S W, VELTEN V J, et al. Standard SAR ATR evaluation experiments using the MSTAR public release data set[C]. SPIE 3370, Algorithms for Synthetic Aperture Radar Imagery V, Orlando, USA, 1998. doi: 10.1117/12.321859.
[22] SCHEIRER W J, DE REZENDE ROCHA A, SAPKOTA A, et al. Toward open set recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(7): 1757–1772. doi: 10.1109/TPAMI.2012.256.
[23] CHEN Guangyao, QIAO Limeng, SHI Yemin, et al. Learning open set network with discriminative reciprocal points[C]. Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, 2020: 507–522. doi: 10.1007/978-3-030-58580-8_30.