Robust Discriminative Feature Subspace Learning Based on Low Rank Representation
doi: 10.11999/JEIT190164
1. School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
2. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Abstract: Feature subspace learning is a critical technique in image recognition and classification tasks. Conventional feature subspace learning methods face two main problems. One is how to preserve the local structure and discrimination of the samples when they are projected into the learned subspace. The other is that conventional learning models usually fail when the data are corrupted by noise. To solve these two problems, a discriminative feature learning method based on Low Rank Representation (LRR) is proposed, with three main contributions. First, it explores the local structure among samples via low rank representation and uses the representation coefficients as a similarity measure to preserve the local neighborhood relations among the samples. Second, to improve the anti-noise performance, a discriminative learning term is constructed from the samples recovered by low rank representation, which enhances the discrimination and robustness simultaneously. Third, an iterative numerical scheme based on alternating optimization is developed, and its convergence is guaranteed. Extensive experiments on several visual datasets demonstrate that the proposed method outperforms conventional feature learning methods in terms of both accuracy and robustness.
Keywords: image classification / subspace learning / feature extraction / low rank representation
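As a concrete illustration of the ideas summarized in the abstract, the following generic formulation sketches how low rank representation coefficients can act as similarity weights for projection learning. It is an illustrative sketch only: the symbols ${{{x}}_i}$, $z_{ij}$ and $\lambda$ are generic notation, and the terms shown are not the paper's actual numbered objective. Low rank representation recovers the coefficient matrix ${{Z}}$ from

$\mathop {\min }\limits_{{{Z}},{{E}}} {\left\| {{Z}} \right\|_*} + \lambda {\left\| {{E}} \right\|_{2,1}}\;\;{\rm{s}}{\rm{.t}}{\rm{.}}\;\;{{X}} = {{X}}{{Z}} + {{E}}$

and the entries $z_{ij}$ can then serve as similarity weights in a locality-preserving term of the form

$\mathop {\min }\limits_{{P}} \sum\limits_{i,j} {\left\| {{{{P}}^{\rm{T}}}{{{x}}_i} - {{{P}}^{\rm{T}}}{{{x}}_j}} \right\|_2^2\left| {{z_{ij}}} \right|} $

so that samples that represent each other with large coefficients stay close after projection by ${{P}}$, while the low-rank recovery ${{X}}{{Z}}$ (rather than the noisy ${{X}}$) can be used to build the discriminative learning term.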
Algorithm 1: Numerical solution scheme for the overall objective function
Input: training set X, class labels Y, ${\lambda _1}$, ${\lambda _2}$, $\eta $; initialize ${{Z}} = {{G}} = {{R}} = 0$, ${{E}} = 0$, ${{{Y}}_{\rm{1}}} = {{{Y}}_{\rm{2}}} = {{{Y}}_{\rm{3}}} = 0$, $\mu = 0.6$, ${\mu _{\max }} = {10^{10}}$, $\rho = 1.1$.
Output: ${{P}}$
While not converged do
1. Update ${{{P}}^{k + 1}}$, ${{{G}}^{k + 1}}$, ${{{R}}^{k + 1}}$, ${{{Z}}^{k + 1}}$, ${{{E}}^{k + 1}}$ using Eqs. (5)–(9);
2. Update the Lagrange multipliers and the parameter $\mu $:
${{{Y}}_1}^{k + 1} = {{{Y}}_1}^k + \mu \left( {{{X}} - {{X}}{{{Z}}^{k + 1}} - {{{E}}^{k + 1}}} \right)$;
${{{Y}}_2}^{k + 1} = {{{Y}}_2}^k + \mu \left( {{{{Z}}^{k + 1}} - {{{G}}^{k + 1}}} \right)$;
${{{Y}}_3}^{k + 1} = {{{Y}}_3}^k + \mu \left( {{{{Z}}^{k + 1}} - {{{R}}^{k + 1}}} \right)$;
$\mu = \min \left( {{\mu _{\max }},\rho \mu } \right)$;
end while
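For readers who want to prototype the update loop, a minimal numerical sketch is given below in Python/NumPy. Only the multiplier and penalty updates are taken directly from Algorithm 1; the primal updates of ${{P}}$, ${{G}}$, ${{R}}$, ${{Z}}$, ${{E}}$ depend on Eqs. (5)–(9), which are not reproduced in this excerpt, so they are delegated to a caller-supplied function (update_primal), through which ${\lambda _1}$, ${\lambda _2}$ and $\eta $ would also enter. The name update_primal and the identity initialization of P are assumptions for illustration.

import numpy as np

def algorithm1_sketch(X, update_primal, max_iter=200, tol=1e-6):
    # Minimal sketch of the iterative scheme in Algorithm 1.
    # update_primal stands in for Eqs. (5)-(9): given the current variables
    # and multipliers, it must return the updated (P, G, R, Z, E).
    d, n = X.shape                     # samples are the columns of X
    Z = np.zeros((n, n)); G = np.zeros((n, n)); R = np.zeros((n, n))
    E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n)); Y3 = np.zeros((n, n))
    mu, mu_max, rho = 0.6, 1e10, 1.1
    P = np.eye(d)                      # initialization of P is not specified in Algorithm 1
    for _ in range(max_iter):
        # Step 1: primal updates via Eqs. (5)-(9), supplied by the caller.
        P, G, R, Z, E = update_primal(X, P, G, R, Z, E, Y1, Y2, Y3, mu)
        # Step 2: Lagrange multiplier and penalty updates, as in Algorithm 1.
        Y1 = Y1 + mu * (X - X @ Z - E)
        Y2 = Y2 + mu * (Z - G)
        Y3 = Y3 + mu * (Z - R)
        mu = min(mu_max, rho * mu)
        # Stop once all three constraint residuals are small.
        residual = max(np.abs(X - X @ Z - E).max(),
                       np.abs(Z - G).max(),
                       np.abs(Z - R).max())
        if residual < tol:
            break
    return P

The geometric growth of $\mu$ by the factor $\rho$ up to ${\mu _{\max }}$ follows the usual inexact augmented Lagrange multiplier convention adopted in Algorithm 1.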