doi: 10.11999/JEIT180921
Adaptive Regularized Correlation Filters for Visual Tracking Based on Sample Quality Estimation
-
1.
School of Computer Science & Technology, Xi’an University of Posts & Telecommunications, Xi’an 710121, China
-
2.
Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
-
3.
Institute of Information and Navigation, Air Force Engineering University, Xi’an 710077, China
-
-
Keywords:
- Visual tracking /
- Correlation filter /
- Adaptive regularization /
- Sample quality estimation
Abstract: Correlation Filter (CF) based methods are efficient in visual tracking, but their performance is limited by boundary effects. To address this problem, an adaptive spatially regularized correlation filter tracking algorithm based on sample quality estimation is proposed. First, the algorithm adds a spatial regularization term to the training of the filters, constructs color and gray-level histogram templates of the target and background, and computes a sample quality coefficient from them. The regularization term then adapts to the sample quality coefficient, so that samples of different quality are penalized to different degrees, reducing the influence of boundary effects. Second, by thresholding the sample quality coefficient, the tracking result and the model update strategy are optimized, improving the reliability and accuracy of tracking. Experiments on OTB2013 and OTB2015 show that, compared with recent state-of-the-art trackers, the proposed algorithm achieves the highest success rate on both benchmarks, exceeding the Spatially Regularized Discriminative Correlation Filters (SRDCF) algorithm by 9.3% and 9.9%, respectively.
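The sample quality coefficient described above is computed from color and gray-level histograms of the target and background (the paper's Eq. (8), not reproduced in this excerpt). A minimal sketch of one plausible histogram-contrast score, with all function names and the Bhattacharyya-style similarity being illustrative assumptions rather than the paper's exact formula:

```python
import numpy as np

def histogram(patch, bins=16):
    """Normalized gray-level histogram of an image patch (values in [0, 255])."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def quality_coefficient(sample, target_hist, background_hist, bins=16):
    """Hypothetical sample-quality score: how much more the sample resembles
    the stored target histogram than the background histogram."""
    s = histogram(sample, bins)
    sim_t = np.sum(np.sqrt(s * target_hist))      # Bhattacharyya similarity to target
    sim_b = np.sum(np.sqrt(s * background_hist))  # Bhattacharyya similarity to background
    return sim_t / (sim_t + sim_b + 1e-12)        # in (0, 1); higher = cleaner sample
```

A clean sample scores near 1 and an occluded or drifted one near 0, which is the property the adaptive regularization and gated model update rely on.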
Table 1  Adaptive regularized correlation filter tracking algorithm
Input: image sequence ${{{I}}_1},{{{I}}_2}, \cdots ,{{{I}}_n}$, initial target position ${{{p}}_0} = ({x_0},{y_0})$, initial target scale ${{{s}}_0} = ({w_0},{h_0})$.
Output: tracking result for each frame, i.e., target position ${{{p}}_t} = ({x_t},{y_t})$ and target scale estimate ${{{s}}_t} = ({w_t},{h_t})$.
For $t = 1,2, \cdots ,n$, do:
(1) Target localization and scale estimation
  (a) Determine the ROI in frame $t$ from the previous position ${{{p}}_{t - 1}}$ and scale ${{{s}}_{t - 1}}$;
  (b) Extract multi-scale samples ${{{I}}_s} = \{ {{{I}}_{{s_1}}},{{{I}}_{{s_2}}}, \cdots, {{{I}}_{{s_S}}}\} $;
  (c) Determine the target center position ${{{p}}_t}$ and scale ${{{s}}_t}$ in frame $t$ from the response map;
(2) Sample quality estimation and regularization adaptation
  (a) Extract target and background statistical histograms according to the target center position and scale;
  (b) Compute the sample quality coefficient $Q$ with Eq. (8); then compute the spatial regularization term from the sample quality coefficient;
(3) Model update
  (a) Update the tracking filter model ${{{ω}}_t}$ with Eq. (19);
  (b) Update the statistics model ${{{h}}_t}$ with Eqs. (17) and (18);
End
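The per-frame control flow of Table 1 can be sketched as below. The `tracker` object and its methods are hypothetical stand-ins for the paper's correlation filter, histogram models, and Eqs. (8) and (17)–(19), which are not reproduced in this excerpt:

```python
def track(frames, p0, s0, tracker, tau=3000.0):
    """Skeleton of the loop in Table 1. `tracker` bundles the correlation
    filter and the target/background histogram models (hypothetical API)."""
    p, s = p0, s0
    results = []
    for frame in frames:
        # (1) Localization: search multi-scale samples around the previous
        #     state and take the peak of the filter response map.
        p, s = tracker.detect(frame, p, s)
        # (2) Quality estimation: score the new sample against the stored
        #     target/background histograms, then adapt the spatial
        #     regularization term to that score.
        q = tracker.sample_quality(frame, p, s)
        tracker.update_regularizer(q)
        # (3) Update: refresh the filter and histogram models only when the
        #     sample quality clears a threshold (cf. Table 2).
        if q > tau:
            tracker.update_models(frame, p, s)
        results.append((p, s))
    return results
```

Gating step (3) on the quality coefficient is what keeps occluded or corrupted frames from polluting the filter and histogram models.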
Table 2  OTB2015 tracking success rate for different choices of the threshold $\tau$

| Threshold $\tau$     | 2500  | 2750  | 3000  | 3250  | 3500  | 3750  |
| OTB2015 success rate | 0.820 | 0.779 | 0.871 | 0.855 | 0.817 | 0.795 |
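The sweep in Table 2 amounts to picking the threshold with the best OTB2015 success rate; encoded directly from the table's values:

```python
# OTB2015 success rate for each candidate threshold tau (values from Table 2)
success_by_tau = {2500: 0.820, 2750: 0.779, 3000: 0.871,
                  3250: 0.855, 3500: 0.817, 3750: 0.795}

# Select the threshold that maximizes the success rate
best_tau = max(success_by_tau, key=success_by_tau.get)
print(best_tau)  # -> 3000
```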
Table 3  Center location error (pixel) and success rate (%) on 8 test sequences

| Sequence   | Proposed    | CNN-SVM     | STRCF       | TGPR         | HCF          | KCF          | STCT         | DSST         | C-COT      | SMCF       |
| Girl2      | 7.6 (98.0)  | 11.3 (89.0) | 30.9 (87.0) | 110.0 (56.0) | 118.8 (8.0)  | 264.6 (7.0)  | 319.1 (8.0)  | 46.4 (54.0)  | 8.4 (96.0) | 7.9 (97.0) |
| Soccer     | 17.5 (81.0) | 260 (24.0)  | 19.6 (62.0) | 60.7 (14.0)  | 13.5 (53.0)  | 15.6 (46.0)  | 46.9 (18.0)  | 14.3 (43.0)  | 12.1 (83.0)| 14.5 (84.0)|
| Bolt2      | 6.4 (90.0)  | 151.4 (48.0)| 7.8 (71.0)  | 304.0 (1.0)  | 8.3 (88.0)   | 329.8 (1.0)  | 6.3 (95.0)   | 115.5 (1.0)  | 7.0 (92.0) | 6.8 (90.0) |
| KiteSurf   | 2.3 (99.0)  | 25.2 (51.0) | 66.7 (45.0) | 61.7 (38.0)  | 59.8 (45.0)  | 40.6 (31.0)  | 7.8 (70.0)   | 56.7 (43.0)  | 2.1 (99.0) | 2.3 (99.0) |
| Sylvester  | 5.5 (96.0)  | 5.0 (98.0)  | 5.5 (96.0)  | 5.7 (91.0)   | 12.9 (83.0)  | 13.3 (81.0)  | 14.8 (82.0)  | 14.8 (70.0)  | 4.5 (99.0) | 7.5 (99.0) |
| Basketball | 3.8 (99.0)  | 21.4 (48.0) | 14.1 (11.0) | 9.4 (90.0)   | 3.7 (100.0)  | 8.1 (90.0)   | 3.9 (98.0)   | 111.6 (14.0) | 5.0 (97.0) | 4.1 (98.0) |
| Dog1       | 3.0 (100.0) | 7.2 (58.0)  | 3.6 (100.0) | 5.9 (69.0)   | 4.4 (67.0)   | 4.1 (64.0)   | 4.7 (97.0)   | 4.6 (66.0)   | 4.0 (98.0) | 4.8 (96.0) |
| CarScale   | 7.4 (77.0)  | 19.8 (53.0) | 8.7 (72.0)  | 21.4 (46.0)  | 29.3 (73.0)  | 16.1 (55.0)  | 15.2 (77.0)  | 18.8 (51.0)  | 5.3 (87.0) | 8.7 (77.0) |
| Average    | 5.8 (94.0)  | 51.7 (58.0) | 16.2 (74.2) | 70.6 (62.0)  | 26.6 (62.0)  | 71.3 (50.8)  | 42.5 (57.0)  | 38.9 (47.0)  | 5.4 (93.9) | 6.1 (93.5) |
Table 4  Tracking success rate of each algorithm under different attributes (number of sequences in parentheses)

| Algorithm | IV (40) | OPR (64) | SV (66) | OCC (50) | DEF (44) | MB (31) | FM (41) | IPR (31) | OV (14) | BC (33) | LR (10) |
| Proposed  | 0.659   | 0.644    | 0.640   | 0.641    | 0.624    | 0.672   | 0.646   | 0.622    | 0.600   | 0.655   | 0.570   |
| CNN-SVM   | 0.532   | 0.546    | 0.492   | 0.513    | 0.547    | 0.568   | 0.530   | 0.545    | 0.488   | 0.543   | 0.419   |
| STRCF     | 0.646   | 0.628    | 0.637   | 0.618    | 0.607    | 0.666   | 0.634   | 0.604    | 0.585   | 0.639   | 0.561   |
| TGPR      | 0.449   | 0.454    | 0.400   | 0.429    | 0.412    | 0.409   | 0.398   | 0.461    | 0.373   | 0.426   | 0.378   |
| HCF       | 0.535   | 0.532    | 0.487   | 0.523    | 0.530    | 0.573   | 0.555   | 0.557    | 0.474   | 0.575   | 0.424   |
| KCF       | 0.469   | 0.449    | 0.399   | 0.438    | 0.436    | 0.456   | 0.452   | 0.464    | 0.393   | 0.489   | 0.306   |
| STCT      | 0.636   | 0.584    | 0.596   | 0.592    | 0.603    | 0.625   | 0.616   | 0.570    | 0.530   | 0.625   | 0.527   |
| DSST      | 0.476   | 0.448    | 0.414   | 0.426    | 0.412    | 0.465   | 0.442   | 0.484    | 0.374   | 0.463   | 0.311   |
| C-COT     | 0.641   | 0.637    | 0.654   | 0.639    | 0.637    | 0.688   | 0.610   | 0.635    | 0.613   | 0.666   | 0.583   |
| SMCF      | 0.672   | 0.653    | 0.632   | 0.653    | 0.612    | 0.665   | 0.632   | 0.610    | 0.608   | 0.663   | 0.579   |
-
SMEULDERS A W M, CHU D M, CUCCHIARA R, et al. Visual tracking: An experimental survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(7): 1442–1468. doi: 10.1109/TPAMI.2013.230
WANG Naiyan, SHI Jianping, YEUNG D Y, et al. Understanding and diagnosing visual tracking systems[C]. Proceedings of IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3101–3109.
HUANG Liqin and ZHU Piao. Improved kernel correlation filtering tracking for vehicle video[J]. Journal of Electronics & Information Technology, 2018, 40(8): 1887–1894. doi: 10.11999/JEIT171109
BOLME D S, BEVERIDGE J R, DRAPER B A, et al. Visual object tracking using adaptive correlation filters[C]. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 2544–2550.
HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583–596. doi: 10.1109/TPAMI.2014.2345390
FELZENSZWALB P F, GIRSHICK R B, MCALLESTER D, et al. Object detection with discriminatively trained part-based models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(9): 1627–1645. doi: 10.1109/TPAMI.2009.167
DANELLJAN M, KHAN F S, FELSBERG M, et al. Adaptive color attributes for real-time visual tracking[C]. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 1090–1097.
ZHANG Kaihua, ZHANG Lei, LIU Qingshan, et al. Fast visual tracking via dense spatio-temporal context learning[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 127–141.
MA Chao, HUANG Jiabin, YANG Xiaokang, et al. Hierarchical convolutional features for visual tracking[C]. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3074–3082.
NAM H and HAN B. Learning multi-domain convolutional neural networks for visual tracking[C]. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 4293–4302.
BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional siamese networks for object tracking[C]. Computer Vision – ECCV 2016 Workshops, Amsterdam, The Netherlands, 2016: 850–865.
DANELLJAN M, ROBINSON A, KHAN F S, et al. Beyond correlation filters: Learning continuous convolution operators for visual tracking[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 472–488.
MA Chao, YANG Xiaokang, ZHANG Chongyang, et al. Long-term correlation tracking[C]. IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 5388–5396.
DANELLJAN M, HÄGER G, KHAN F S, et al. Discriminative scale space tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(8): 1561–1575. doi: 10.1109/TPAMI.2016.2609928
LI Feng, YAO Yingjie, LI Peihua, et al. Integrating boundary and center correlation filters for visual tracking with aspect ratio variation[C]. IEEE International Conference on Computer Vision Workshops, Venice, Italy, 2017: 2001–2009.
WANG Xin, HOU Zhiqiang, YU Wangsheng, et al. Online scale adaptive visual tracking based on multilayer convolutional features[J]. IEEE Transactions on Cybernetics, 2019, 49(1): 146–158. doi: 10.1109/TCYB.2017.2768570
DANELLJAN M, HÄGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]. IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 4310–4318.
LI Feng, TIAN Cheng, ZUO Wangmeng, et al. Learning spatial-temporal regularized correlation filters for visual tracking[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 4904–4913.
BI Duyan, KU Tao, ZHA Yufei, et al. Scale-adaptive object tracking based on color names histogram[J]. Journal of Electronics & Information Technology, 2016, 38(5): 1099–1106. doi: 10.11999/JEIT150921
WU Yi, LIM J, and YANG M H. Online object tracking: A benchmark[C]. IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 2411–2418.
WU Yi, LIM J, and YANG M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834–1848. doi: 10.1109/TPAMI.2014.2388226
MA Chao, HUANG Jiabin, YANG Xiaokang, et al. Hierarchical convolutional features for visual tracking[C]. Proceedings of IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3074–3082.
WANG Lijun, OUYANG Wanli, WANG Xiaogang, et al. STCT: Sequentially training convolutional networks for visual tracking[C]. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 1373–1381.
HONG S, YOU T, KWAK S, et al. Online tracking by learning discriminative saliency map with convolutional neural network[C]. The 32nd International Conference on Machine Learning, Lille, France, 2015: 597–606.
GAO Jin, LING Haibin, HU Weiming, et al. Transfer learning based visual tracking with Gaussian processes regression[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 188–203.