Fast Object Tracking Based on L2-norm Minimization and Compressed Haar-like Features Matching
doi: 10.11999/JEIT160122
①(The Key Laboratory of Fiber Optic Sensing Technology and Information Processing, Ministry of Education, Wuhan University of Technology, Wuhan 430070, China) ②(College of Computer and Information Technology, China Three Gorges University, Yichang 443002, China)
The National Natural Science Foundation of China (51479159)
Abstract: Under the Bayesian inference framework, object tracking algorithms based on a PCA subspace and L2-norm minimization handle many kinds of complex appearance change in video scenes well, but they tend to drift when the target undergoes rotation or pose variation. To address this problem, this paper proposes a fast visual tracking method that fuses L2-norm minimization with compressed Haar-like feature matching. The method lowers computational complexity by removing the large set of square templates and simplifying the observation likelihood function, while compressed Haar-like feature matching strengthens robustness to pose variation and rotation of the target. Experimental results show that, compared with currently popular trackers, the proposed method is robust to heavy occlusion, abrupt illumination change, fast motion, pose variation, and rotation, and it runs at about 29 frames/s on several test sequences, which meets the requirements of fast video tracking.
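To make the appearance-model side of this description concrete, the following is a minimal Python sketch of an L2 (ridge) regularized least-squares fit of a candidate patch to a PCA subspace, together with a simplified Gaussian observation likelihood of the kind the abstract refers to. The function name, the regularization weight `lam`, and the noise scale `sigma` are illustrative assumptions, not code or parameter values taken from the paper.

```python
import numpy as np

def l2_subspace_likelihood(patch, mean, basis, lam=0.05, sigma=0.1):
    """Score a candidate patch against a PCA subspace appearance model
    using an L2 (ridge) regularized least-squares fit.

    patch : (d,) vectorized, intensity-normalized candidate patch
    mean  : (d,) mean appearance of the learned subspace
    basis : (d, k) PCA basis with orthonormal columns
    """
    y = patch - mean
    # Ridge-regularized fit: min_z ||y - U z||^2 + lam * ||z||^2.
    # With orthonormal columns (U^T U = I) the closed form is U^T y / (1 + lam).
    z = basis.T @ y / (1.0 + lam)
    residual = y - basis @ z
    # Simplified Gaussian observation likelihood on the reconstruction error,
    # with no large set of square (trivial) templates to solve for.
    return float(np.exp(-np.sum(residual ** 2) / (2.0 * sigma ** 2)))
```

In a particle-filter loop, the candidate with the highest likelihood would be taken as the current tracking result and the subspace would be updated incrementally from recent frames.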
Keywords:
- Object tracking
- PCA subspace
- L2-norm minimization
- Compressed Haar-like features
- Observation likelihood
Abstract: Under the framework of Bayesian inference, tracking methods based on a PCA subspace and L2-norm minimization can successfully handle many complex appearance changes in video scenes. However, they are prone to drifting or failure when the target object undergoes pose variation or rotation. To deal with this problem, a fast visual tracking method is proposed based on L2-norm minimization and compressed Haar-like feature matching. The proposed method not only removes the large set of square templates but also adopts a simple yet effective observation likelihood, and its robustness to pose variation and rotation is strengthened by compressed Haar-like feature matching. Compared with other popular methods, the proposed method is more robust to abnormal changes (e.g., heavy occlusion, drastic illumination change, abrupt motion, pose variation, and rotation). Furthermore, it runs fast, at about 29 frames/s.
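As a companion to the abstract, the sketch below illustrates one common construction of compressed Haar-like features: a very sparse random measurement matrix projects high-dimensional Haar-like responses into a low-dimensional space, and a candidate is matched to the stored target template by normalized correlation. The matrix construction, the sparsity factor `s`, and the correlation score are assumptions made for illustration and do not reproduce the paper's exact feature design or matching rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_measurement_matrix(n_feat, dim, s=None):
    """Very sparse random matrix with entries in {-1, 0, +1}, of the kind
    commonly used to compress high-dimensional Haar-like feature vectors."""
    s = s or max(2, int(np.sqrt(dim)))  # sparsity factor (assumed heuristic)
    p = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    return rng.choice([-1.0, 0.0, 1.0], size=(n_feat, dim), p=p) * np.sqrt(s)

def compressed_haar_match(candidate_feats, template_feats, proj):
    """Project the Haar-like responses of a candidate patch and of the stored
    target template into the compressed domain, then score their similarity
    by normalized correlation (higher means a better match)."""
    u, v = proj @ candidate_feats, proj @ template_feats
    u = (u - u.mean()) / (u.std() + 1e-12)
    v = (v - v.mean()) / (v.std() + 1e-12)
    return float(u @ v) / u.size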