
Robust Visual Tracking Based on Spatial Reliability Constraint

Lei PU, Xinxi FENG, Zhiqiang HOU, Wangsheng YU

Citation: Lei PU, Xinxi FENG, Zhiqiang HOU, Wangsheng YU. Robust Visual Tracking Based on Spatial Reliability Constraint[J]. Journal of Electronics & Information Technology, 2019, 41(7): 1650-1657. doi: 10.11999/JEIT180780


doi: 10.11999/JEIT180780
Funds: The National Natural Science Foundation of China (61571458, 61473309, 41601436)
Details
    About the authors:

    PU Lei: Male, born in 1991, Ph.D. candidate. Research interests: computer vision, object tracking

    FENG Xinxi: Male, born in 1964, professor. Research interests: information fusion, pattern recognition

    HOU Zhiqiang: Male, born in 1973, professor. Research interests: image processing, computer vision

    YU Wangsheng: Male, born in 1985, lecturer. Research interests: image processing, pattern recognition

    Corresponding author: PU Lei, warmstoner@163.com

  • CLC number: TP391.4

  • Abstract: To address target drift in cluttered backgrounds, this paper proposes a tracking algorithm based on a spatial reliability constraint. First, multi-layer deep features of the target are extracted with a pre-trained Convolutional Neural Network (CNN) model, a correlation filter is trained on each layer, and the resulting response maps are fused by weighted summation. A reliability-region map of the target is then extracted from the high-level feature map, yielding a binary attention matrix. Finally, this binary matrix constrains the search range of the fused response map, and the maximum response within that range gives the target center. To handle long-term occlusion, a random-selection update strategy based on the first-frame template is proposed. Experimental results show that the algorithm performs well under similar-background clutter, occlusion, out-of-view and other challenging scenarios.
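The localization step described in the abstract (weighted fusion of per-layer response maps, then a peak search restricted to the binary reliability mask) can be sketched as follows. This is a minimal NumPy illustration; the function names and the choice of weights are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def fuse_responses(responses, weights):
    """Weighted sum of the per-layer correlation response maps."""
    fused = np.zeros_like(responses[0])
    for resp, w in zip(responses, weights):
        fused += w * resp
    return fused

def locate_target(fused, mask):
    """Search only inside the binary reliability region and return the peak."""
    constrained = np.where(mask, fused, -np.inf)  # suppress unreliable positions
    row, col = np.unravel_index(np.argmax(constrained), constrained.shape)
    return row, col  # target center within the ROI

# A strong distractor peak that lies outside the reliable region is ignored:
response = np.zeros((5, 5))
response[1, 1] = 10.0   # distractor (e.g. similar background object)
response[3, 3] = 5.0    # true target
mask = np.zeros((5, 5), dtype=bool)
mask[2:, 2:] = True     # reliable region around the previous target location
fused = fuse_responses([response, response], [0.5, 0.5])
center = locate_target(fused, mask)  # (3, 3), not the distractor at (1, 1)
```

Ignoring the larger out-of-region peak is precisely the drift-suppression effect that motivates the spatial reliability constraint.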
  • Fig. 1  Visualization of convolutional deep features

    Fig. 2  Flowchart of the proposed algorithm

    Fig. 3  Precision and success plots on the OTB100 benchmark

    Fig. 4  Precision and success plots on the TempleColor128 benchmark
    Table 1  Robust visual tracking algorithm based on spatial reliability constraint

     Input: image sequence I1, I2, ···, In; initial target position p0=(x0, y0); initial target scale s0=(w0, h0).
     Output: tracking result of every frame, pt=(xt, yt), st=(wt, ht).
    For t=1, 2, ···, n, do:
     (1) Locate the target center
       (a) Determine the ROI of frame t from the previous target position pt–1 and extract its hierarchical convolutional features;
       (b) For the convolutional features of each layer, compute the correlation response map with Eq. (4) and Eq. (5);
       (c) Fuse the multiple response maps with Eq. (6) to obtain the final correlation response map;
       (d) Extract the spatial reliability region map with Eq. (7) and Eq. (8), and use it to constrain the search range of the response map;
       (e) Determine the target center pt of frame t with Eq. (9).
     (2) Determine the best target scale
       (a) Sample at multiple scales around pt using the previous scale st–1, yielding the sample set Is={Is1, Is2, ···, Ism};
       (b) Determine the best scale st of frame t with the scale estimation method of Ref. [14].
     (3) Model update
       (a) Compute the maximum value of the obtained response map;
       (b) Update the filters according to the response value and Eq. (10)–Eq. (12).
     End
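Step (3) leaves Eqs. (10)–(12) to the paper body; the sketch below only illustrates the idea of a random-selection update keyed to the peak response, with the first-frame filter as the trusted template. The threshold `tau`, learning rate `eta`, selection probability `p` and the occlusion test are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def update_filter(h, h_first, h_cur, peak, tau=0.3, eta=0.01, p=0.5):
    """Illustrative random-selection model update.

    h       -- current filter model
    h_first -- filter learned on the first frame (trusted template)
    h_cur   -- filter learned on the current frame
    peak    -- maximum value of the fused response map
    """
    if peak >= tau:                       # confident detection: standard update
        return (1 - eta) * h + eta * h_cur
    if rng.random() < p:                  # likely occlusion: sometimes re-anchor
        return (1 - eta) * h + eta * h_first
    return h                              # otherwise freeze the model

# Confident frame: blend in the current-frame filter as usual.
h_new = update_filter(np.array([1.0]), np.array([0.0]), np.array([2.0]), peak=0.9)
```

Blending back toward the first-frame filter when the response collapses keeps the model from absorbing the occluder, which is how the abstract motivates the strategy for long-term occlusion.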

    Table 2  Precision comparison under different attributes

    Algorithm  SV(60)  OCC(45)  IV(34)  BC(27)  DEF(42)  MB(29)  FM(37)  IPR(46)  OPR(57)  OV(13)  LR(8)
    Proposed   0.827   0.799    0.855   0.872   0.801    0.813   0.800   0.879    0.844    0.756   0.870
    HDT        0.811   0.753    0.803   0.855   0.817    0.764   0.800   0.851    0.804    0.663   0.749
    HCF        0.800   0.748    0.805   0.857   0.788    0.772   0.788   0.863    0.807    0.680   0.778

    Table 3  Success rate comparison under different attributes

    Algorithm  SV(60)  OCC(45)  IV(34)  BC(27)  DEF(42)  MB(29)  FM(37)  IPR(46)  OPR(57)  OV(13)  LR(8)
    Proposed   0.580   0.594    0.635   0.627   0.570    0.624   0.609   0.605    0.597    0.556   0.510
    HDT        0.491   0.528    0.540   0.593   0.546    0.545   0.549   0.557    0.533    0.541   0.376
    HCF        0.490   0.526    0.547   0.602   0.532    0.557   0.550   0.599    0.534    0.542   0.383

    Table 4  Contribution of each component to tracking performance

                  SRCT   SRCT-S  SRCT-R  SRCT-S-R
    Success rate  0.624  0.618   0.610   0.603
    Precision     0.864  0.856   0.841   0.838
  • [1] SMEULDERS A W M, CHU D M, CUCCHIARA R, et al. Visual tracking: An experimental survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(7): 1442–1468. doi: 10.1109/TPAMI.2013.230.
    [2] WANG Naiyan, SHI Jianping, YEUNG D Y, et al. Understanding and diagnosing visual tracking systems[C]. Proceedings of 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3101–3109. doi: 10.1109/ICCV.2015.355.
    [3] RAWAT W and WANG Zenghui. Deep convolutional neural networks for image classification: A comprehensive review[J]. Neural Computation, 2017, 29(9): 2352–2449. doi: 10.1162/neco_a_00990.
    [4] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 580–587.
    [5] SHELHAMER E, LONG J, and DARRELL T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640–651. doi: 10.1109/TPAMI.2016.2572683.
    [6] WANG Naiyan and YEUNG D Y. Learning a deep compact image representation for visual tracking[C]. Proceedings of the 26th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2013: 809–817.
    [7] HONG S, YOU T, KWAK S, et al. Online tracking by learning discriminative saliency map with convolutional neural network[C]. Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015: 597–606.
    [8] NAM H and HAN B. Learning multi-domain convolutional neural networks for visual tracking[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 4293–4302.
    [9] LI Huanyu, BI Duyan, YANG Yuan, et al. Research on visual tracking algorithm based on deep feature expression and learning[J]. Journal of Electronics & Information Technology, 2015, 37(9): 2033–2039. doi: 10.11999/JEIT150031.
    [10] HOU Zhiqiang, DAI Bo, HU Dan, et al. Robust visual tracking via perceptive deep neural network[J]. Journal of Electronics & Information Technology, 2016, 38(7): 1616–1623. doi: 10.11999/JEIT151449.
    [11] HENRIQUES J F, CASEIRO R, MARTINS P, et al. Exploiting the circulant structure of tracking-by-detection with kernels[C]. Proceedings of the 12th European Conference on Computer Vision, Florence, Italy, 2012: 702–715. doi: 10.1007/978-3-642-33765-9_50.
    [12] DANELLJAN M, KHAN F S, FELSBERG M, et al. Adaptive color attributes for real-time visual tracking[C]. Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 1090–1097. doi: 10.1109/CVPR.2014.143.
    [13] HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583–596. doi: 10.1109/TPAMI.2014.2345390.
    [14] DANELLJAN M, HÄGER G, KHAN F S, et al. Accurate scale estimation for robust visual tracking[C]. Proceedings of the British Machine Vision Conference, Nottingham, UK, 2014: 65.1–65.11. doi: 10.5244/C.28.65.
    [15] DANELLJAN M, HÄGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]. Proceedings of 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 4310–4318. doi: 10.1109/ICCV.2015.490.
    [16] DANELLJAN M, ROBINSON A, KHAN F S, et al. Beyond correlation filters: Learning continuous convolution operators for visual tracking[C]. Proceedings of the 14th European Conference on Computer Vision, Amsterdam, the Netherlands, 2016: 472–488. doi: 10.1007/978-3-319-46454-1_29.
    [17] RUSSAKOVSKY O, DENG Jia, SU Hao, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211–252. doi: 10.1007/s11263-015-0816-y.
    [18] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1097–1105. doi: 10.1145/3065386.
    [19] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. International Conference on Learning Representations, San Diego, USA, 2015.
    [20] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
    [21] VEDALDI A and LENC K. MatConvNet: Convolutional neural networks for MATLAB[C]. Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 2015: 689–692. doi: 10.1145/2733373.2807412.
    [22] WU Yi, LIM J, and YANG M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834–1848. doi: 10.1109/TPAMI.2014.2388226.
    [23] DANELLJAN M, HÄGER G, KHAN F S, et al. Convolutional features for correlation filter based visual tracking[C]. Proceedings of 2015 IEEE International Conference on Computer Vision Workshop, Santiago, Chile, 2015: 58–66. doi: 10.1109/ICCVW.2015.84.
    [24] QI Yuankai, ZHANG Shengping, QIN Lei, et al. Hedged deep tracking[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 4303–4311. doi: 10.1109/CVPR.2016.466.
    [25] MA Chao, HUANG Jiabin, YANG Xiaokang, et al. Hierarchical convolutional features for visual tracking[C]. Proceedings of 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 3074–3082. doi: 10.1109/ICCV.2015.352.
    [26] ZHANG Jianming, MA Shugao, and SCLAROFF S. MEEM: Robust tracking via multiple experts using entropy minimization[C]. Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 188–203.
    [27] LIANG Pengpeng, BLASCH E, and LING Haibin. Encoding color information for visual tracking: Algorithms and benchmark[J]. IEEE Transactions on Image Processing, 2015, 24(12): 5630–5644. doi: 10.1109/TIP.2015.2482905.
    [28] TAO Ran, GAVVES E, and SMEULDERS A W M. Siamese instance search for tracking[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 1420–1429. doi: 10.1109/CVPR.2016.158.
    [29] BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional siamese networks for object tracking[C]. European Conference on Computer Vision, Amsterdam, the Netherlands, 2016: 850–865.
    [30] HOU Zhiqiang, ZHANG Lang, YU Wangsheng, et al. Local patch tracking algorithm based on fast Fourier transform[J]. Journal of Electronics & Information Technology, 2015, 37(10): 2397–2404. doi: 10.11999/JEIT150183.
Publication history
  • Received: 2018-08-07
  • Revised: 2019-01-21
  • Published online: 2019-02-15
  • Issue published: 2019-07-01
