Visual Tracking Method Based on Reverse Sparse Representation under Illumination Variation

Hongyan WANG, Helei QIU, Jia ZHENG, Bingnan PEI

Citation: Hongyan WANG, Helei QIU, Jia ZHENG, Bingnan PEI. Visual Tracking Method Based on Reverse Sparse Representation under Illumination Variation[J]. Journal of Electronics & Information Technology, 2019, 41(3): 632-639. doi: 10.11999/JEIT180442

doi: 10.11999/JEIT180442

Funds: The National Natural Science Foundation of China (61301258, 61271379), China Postdoctoral Science Foundation (2016M590218)
Author information:

    Hongyan WANG: male, born in 1979, Ph.D., associate professor; research interests include MIMO radar signal processing, millimeter-wave communication, and machine vision

    Helei QIU: male, born in 1991, M.S. candidate; research interests include image processing and machine vision

    Jia ZHENG: male, born in 1990, M.S. candidate; research interests include machine vision and fault-tolerant control of unmanned aerial vehicles

    Bingnan PEI: male, born in 1956, Ph.D., professor, doctoral supervisor; research interests include radar signal processing and millimeter-wave communication

    Corresponding author: Hongyan WANG, gglongs@163.com

  • CLC number: TP391

  • Abstract:

    To address the significant degradation of target tracking performance caused by illumination variation, this paper proposes a visual tracking method that jointly optimizes illumination compensation and multi-task reverse sparse representation. First, the templates are illumination-compensated according to the average brightness difference between the templates and the candidate targets, and the compensated templates are then represented reversely and sparsely by the candidate targets. The resulting set of single-template optimization problems is then recast as one multi-task optimization problem over all templates, which is solved by an alternating iteration method to obtain the optimal illumination compensation coefficient matrix and sparse coding matrix. Finally, the obtained sparse coding matrix is used to quickly discard irrelevant candidate targets, and a local structured evaluation method is applied to achieve accurate tracking. Simulation results show that, compared with existing mainstream algorithms, the proposed method significantly improves tracking accuracy and robustness under drastic illumination variation.
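    The first step described above, vectorizing image patches (cf. Fig. 1) and compensating the templates for the brightness gap to the candidates, can be pictured with a minimal NumPy sketch. The additive mean-brightness offset and the helper names (vectorize_patches, compensate_templates) are illustrative assumptions rather than the paper's exact compensation model; the joint solve for the sparse codes is sketched after Table 1 below.

```python
import numpy as np

def vectorize_patches(patches):
    """Stack grayscale patches of shape (h, w) as columns of a d x k matrix (cf. Fig. 1)."""
    return np.stack([p.astype(np.float64).ravel() for p in patches], axis=1)

def compensate_templates(T, Y):
    """Shift every template column by the mean-brightness gap between the
    candidate set Y and that template (a simple additive stand-in for the
    paper's illumination compensation coefficients)."""
    gap = Y.mean() - T.mean(axis=0, keepdims=True)  # one scalar offset per template
    return T + gap

# Reverse representation: the candidates Y (d x n) act as the dictionary and each
# compensated template column of T (d x m) is the signal to be coded sparsely
# over Y; the joint solve for the codes is sketched after Table 1.
```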

  • Fig. 1  Image vectorization used for illumination compensation

    Fig. 2  Tracking results

    Table 1  Joint optimization algorithm for illumination compensation and multi-task reverse sparse representation

     Input: ${T}$, ${Y}$, $\beta$, and $\tilde \lambda$
     (1) Initialize the sparse coding matrix ${C}$ according to Eq. (8);
     (2) Obtain ${K}$ from Eqs. (12), (2), (4), and (6);
     (3) Solve problem (13) with the APG method to obtain ${C}$;
     (4) Repeat steps (2) and (3) until the convergence condition is satisfied.
     Output: ${K}$ and ${C}$
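    Table 1 refers to numbered equations that are not reproduced on this page, so the sketch below only mirrors its alternating structure: a FISTA-style accelerated proximal gradient (APG) solver applied to a plain l1-regularized least-squares stand-in for problem (13), and a simple mean-brightness gap in place of the ${K}$ update of Eqs. (12), (2), (4) and (6). It illustrates the loop, not the paper's implementation.

```python
import numpy as np

def solve_c_apg(T_comp, Y, lam, n_iter=100):
    """FISTA-style APG for min_C 0.5*||T_comp - Y C||_F^2 + lam*||C||_1,
    a simplified stand-in for problem (13)."""
    L = np.linalg.norm(Y, 2) ** 2                               # Lipschitz constant of the gradient
    C = np.zeros((Y.shape[1], T_comp.shape[1]))
    Z, t = C.copy(), 1.0
    for _ in range(n_iter):
        G = Z - Y.T @ (Y @ Z - T_comp) / L                      # gradient step at the extrapolated point
        C_new = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0) # soft-thresholding (l1 proximal map)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = C_new + ((t - 1.0) / t_new) * (C_new - C)           # Nesterov extrapolation
        C, t = C_new, t_new
    return C

def joint_optimize(T, Y, lam=0.01, outer_iter=10, tol=1e-4):
    """Alternating iteration mirroring Table 1: refresh the compensation K,
    re-solve for C, and stop once C stabilizes."""
    C = solve_c_apg(T, Y, lam)                                  # step (1): initial sparse codes
    K = np.zeros((1, T.shape[1]))
    for _ in range(outer_iter):
        K = Y.mean() - T.mean(axis=0, keepdims=True)            # step (2): placeholder compensation update
        C_new = solve_c_apg(T + K, Y, lam)                      # step (3): APG on the compensated templates
        if np.linalg.norm(C_new - C) <= tol * max(np.linalg.norm(C), 1.0):
            C = C_new
            break                                               # step (4): convergence reached
        C = C_new
    return K, C
```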

    Table 2  Video sequences and their main challenges

    Test sequence    Challenge factors
    Car4             illumination variation, scale variation
    Singer1          illumination variation, scale variation, occlusion, etc.
    Trellis          illumination variation, background clutter, scale variation, etc.
    Car1             illumination variation, motion blur, scale variation, etc.

    Table 3  Average center location error and average tracking overlap rate of different tracking methods

                     Average center location error (pixels)           Average tracking overlap rate
    Test sequence    Proposed   TLD      Struck   L1APG    MTT        Proposed  TLD    Struck  L1APG  MTT
    Car4               3.47    12.84      8.69    77.00    22.34        0.84    0.63    0.49    0.25   0.45
    Singer1            2.88     7.99     14.51    53.35    36.17        0.86    0.73    0.36    0.28   0.34
    Trellis            6.82    31.06      6.92    62.20    68.80        0.65    0.48    0.61    0.20   0.21
    Car1               1.18    85.15     51.73    93.93   101.81        0.83    0.26    0.11    0.17   0.15
    Average            3.59    24.26     20.46    71.62    57.28        0.80    0.53    0.40    0.23   0.29
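    For reference, the two metrics reported in Table 3 are the standard per-frame definitions averaged over each sequence: the Euclidean distance between the predicted and ground-truth box centers (in pixels), and the bounding-box intersection over union. The sketch below is this conventional computation, not code taken from the paper.

```python
import numpy as np

def center_location_error(pred_box, gt_box):
    """Euclidean distance (pixels) between box centers; boxes are (x, y, w, h)."""
    cp = np.array([pred_box[0] + pred_box[2] / 2.0, pred_box[1] + pred_box[3] / 2.0])
    cg = np.array([gt_box[0] + gt_box[2] / 2.0, gt_box[1] + gt_box[3] / 2.0])
    return float(np.linalg.norm(cp - cg))

def overlap_rate(pred_box, gt_box):
    """Intersection over union of two boxes given as (x, y, w, h)."""
    x1 = max(pred_box[0], gt_box[0])
    y1 = max(pred_box[1], gt_box[1])
    x2 = min(pred_box[0] + pred_box[2], gt_box[0] + gt_box[2])
    y2 = min(pred_box[1] + pred_box[3], gt_box[1] + gt_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = pred_box[2] * pred_box[3] + gt_box[2] * gt_box[3] - inter
    return inter / union if union > 0 else 0.0
```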

    Table 4  Effect of the fast candidate screening scheme on running speed (FPS)

    Test sequence                               Car4    Singer1   Trellis   Car1
    Running speed without screening (FPS)        4.1      4.6       3.1      5.5
    Running speed with screening (FPS)          10.5      8.7      10.4      8.4
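    The speed-up in Table 4 comes from using the sparse coding matrix to discard irrelevant candidates before the more expensive local structured evaluation. The exact screening rule is not given on this page; the sketch below uses a plausible stand-in criterion (total absolute coefficient mass per candidate, keeping a fixed fraction) purely for illustration.

```python
import numpy as np

def screen_candidates(C, keep_ratio=0.25):
    """Fast candidate screening: rank candidates by the total absolute coefficient
    mass they receive across all templates in the sparse coding matrix C
    (shape: n_candidates x n_templates) and keep only the top fraction for the
    slower local structured evaluation."""
    mass = np.abs(C).sum(axis=1)                           # coefficient mass per candidate
    n_keep = max(1, int(np.ceil(keep_ratio * C.shape[0])))
    return np.argsort(mass)[::-1][:n_keep]                 # indices of retained candidates
```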
  • References

    FRADI H, LUVISON B, and PHAM Q C. Crowd behavior analysis using local mid-level visual descriptors[J]. IEEE Transactions on Circuits & Systems for Video Technology, 2017, 27(3): 589–602. doi: 10.1109/TCSVT.2016.2615443
    YU Gang, LI Chao, and SHANG Zeyuan. Video monitoring method, video monitoring system and computer program product[P]. USA Patent, 9792505, 2017.
    UENG S K and CHEN Guanzhi. Vision based multi-user human computer interaction[J]. Multimedia Tools & Applications, 2016, 75(16): 10059–10076. doi: 10.1007/s11042-015-3061-z
    WU Yi, LIM J, and YANG Minghsuan. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2015, 37(9): 1834–1848. doi: 10.1109/TPAMI.2014.2388226
    PAN Zheng, LIU Shuai, and FU Weina. A review of visual moving target tracking[J]. Multimedia Tools & Applications, 2017, 76(16): 16989–17018. doi: 10.1007/s11042-016-3647-0
    XUE Mogen, LIU Wenzhuo, YUAN Guanglin, et al. Fast robust visual tracking based on coding transfer[J]. Journal of Electronics & Information Technology, 2017, 39(7): 1571–1577. doi: 10.11999/JEIT160966
    YANG Feng and ZHANG Wanying. Multiple model Bernoulli particle filter for maneuvering target tracking[J]. Journal of Electronics & Information Technology, 2017, 39(3): 634–639. doi: 10.11999/JEIT160467
    BAIG M Z and GOKHALE A V. Object tracking using mean shift algorithm with illumination invariance[C]. Fifth International Conference on Communication Systems and Network Technologies, Gwalior, India, 2015: 550–553.
    NAYAK A and CHAUDHURI S. Automatic illumination correction for scene enhancement and object tracking[J]. Image & Vision Computing, 2006, 24(9): 949–959. doi: 10.1016/j.imavis.2006.02.017
    SILVEIRA G and MALIS E. Real-time visual tracking under arbitrary illumination changes[C]. IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, USA, 2007: 1–6.
    WANG Yuru, TANG Xianglong, CUI Qing, et al. Dynamic appearance model for particle filter based visual tracking[J]. Pattern Recognition, 2012, 45(12): 4510–4523. doi: 10.1016/j.patcog.2012.05.010
    BAO Chenglong, WU Yi, LING Haibin, et al. Real time robust L1 tracker using accelerated proximal gradient approach[C]. IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 1830–1837.
    MA Bo, SHEN Jianbing, LIU Yangbiao, et al. Visual tracking using strong classifier and structural local sparse descriptors[J]. IEEE Transactions on Multimedia, 2015, 17(10): 1818–1828. doi: 10.1109/TMM.2015.2463221
    ZHUANG Bohan, LU Huchuan, XIAO Ziyang, et al. Visual tracking via discriminative sparse similarity map[J]. IEEE Transactions on Image Processing, 2014, 23(4): 1872–1881. doi: 10.1109/TIP.2014.2308414
    JIA Xu, LU Huchuan, and YANG Minghsuan. Visual tracking via coarse and fine structural local sparse appearance models[J]. IEEE Transactions on Image Processing, 2016, 25(10): 4555–4564. doi: 10.1109/TIP.2016.2592701
    SUI Yao and ZHANG Li. Robust tracking via locally structured representation[J]. International Journal of Computer Vision, 2016, 119(2): 110–144. doi: 10.1007/s11263-016-0881-x
    ZHANG Tianzhu, GHANEM B, LIU Si, et al. Robust visual tracking via multi-task sparse learning[C]. IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 2042–2049.
    MA Bo, HUANG Lianghua, SHEN Jianbing, et al. Visual tracking under motion blur[J]. IEEE Transactions on Image Processing, 2016, 25(12): 5867–5876. doi: 10.1109/TIP.2016.2615812
    ROSS D A, LIM J, LIN R S, et al. Incremental learning for robust visual tracking[J]. International Journal of Computer Vision, 2008, 77(1): 125–141. doi: 10.1007/s11263-007-0075-7
    POLSON N and SOKOLOV V. Bayesian particle tracking of traffic flows[J]. IEEE Transactions on Intelligent Transportation Systems, 2018, 19(2): 345–356. doi: 10.1109/TITS.2017.2650947
    HE Zhenyu, YI Shuangyan, CHEUNG Y M, et al. Robust object tracking via key patch sparse representation[J]. IEEE Transactions on Cybernetics, 2017, 47(2): 354–364. doi: 10.1109/TCYB.2016.2514714
    ZHANG Kaihua, ZHANG Lei, and YANG Minghsuan. Real-time compressive tracking[C]. European Conference on Computer Vision, Florence, Italy, 2012: 864–877.
    KALAL Z, MATAS J, and MIKOLAJCZYK K. P-N learning: Bootstrapping binary classifiers by structural constraints[C]. IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 49–56.
    HARE S, SAFFARI A, and TORR P H S. Struck: Structured output tracking with kernels[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2016, 38(10): 2096–2109. doi: 10.1109/TPAMI.2015.2509974
Publication history
  • Received: 2018-05-10
  • Revised: 2018-11-08
  • Available online: 2018-11-19
  • Published: 2019-03-01
