
Adaptive Strategy Fusion Target Tracking Based on Multi-layer Convolutional Features

Yanjing SUN, Yunkai SHI, Xiao YUN, Xuran ZHU, Sainan WANG

Citation: Yanjing SUN, Yunkai SHI, Xiao YUN, Xuran ZHU, Sainan WANG. Adaptive Strategy Fusion Target Tracking Based on Multi-layer Convolutional Features[J]. Journal of Electronics & Information Technology, 2019, 41(10): 2464-2470. doi: 10.11999/JEIT180971


doi: 10.11999/JEIT180971
基金項(xiàng)目: 江蘇省自然科學(xué)基金青年基金(BK20180640, BK20150204),江蘇省重點(diǎn)研發(fā)計(jì)劃(BE2015040),國(guó)家重點(diǎn)研發(fā)計(jì)劃(2016YFC0801403),國(guó)家自然科學(xué)基金(51504214, 51504255, 51734009, 61771417)
詳細(xì)信息
    作者簡(jiǎn)介:

    孫彥景:男,1977年生,教授,博士生導(dǎo)師,研究方向?yàn)闊o線傳感器網(wǎng)絡(luò)、視頻目標(biāo)跟蹤、人工智能、信息物理系統(tǒng)

    石韞開:男,1993年生,碩士生,研究方向?yàn)橐曨l目標(biāo)跟蹤和人工智能

    云霄:女,1986年生,講師,研究方向?yàn)橐曨l目標(biāo)跟蹤和人工智能

    朱緒冉:女,1993年生,碩士生,研究方向?yàn)槟繕?biāo)檢測(cè)與識(shí)別

    王賽楠:女,1992年生,碩士生,研究方向?yàn)橐曨l目標(biāo)跟蹤

    通訊作者:

    云霄 yxztong@163.com

  • CLC number: TP391.4


Funds: The Natural Science Foundation of Jiangsu Province (BK20180640, BK20150204), The Key Research and Development Program of Jiangsu Province (BE2015040), The National Key Research and Development Program of China (2016YFC0801403), The National Natural Science Foundation of China (51504214, 51504255, 51734009, 61771417)
  • Abstract: To address the poor robustness and low precision of target tracking in complex video scenes with fast motion, occlusion, and similar challenges, this paper proposes an Adaptive Strategy Fusion Target Tracking algorithm based on multi-layer convolutional features (ASFTT). First, multi-layer convolutional features of each frame are extracted from a Convolutional Neural Network (CNN), avoiding the drawback that a single layer represents the target incompletely and strengthening the algorithm's generalization ability. These multi-layer features are then used to compute correlation responses between frames, improving tracking precision. Finally, an adaptive strategy fusion algorithm dynamically fuses the target-position decisions from all responses to locate the target; the fusion jointly considers the historical and current decision information of every tracker that produces a response, which ensures robustness. Simulations on the standard OTB2013 benchmark against six state-of-the-art trackers show that the proposed algorithm delivers superior tracking performance.
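The per-layer correlation responses mentioned in the abstract can be illustrated as follows. This is a minimal sketch, not the paper's exact formulation (its Eqs. (6)-(8) are not reproduced here): it computes a KCF-style circular cross-correlation between template and search-region feature maps in the Fourier domain and reads the response peak as the predicted displacement. The function names, array shapes, and the random feature maps are assumptions made purely for illustration.

```python
import numpy as np

def correlation_response(template_feat, search_feat):
    """Circular cross-correlation of multi-channel feature maps via FFT,
    summed over channels -- a simplified stand-in for one tracker's
    per-layer response map."""
    t_hat = np.fft.fft2(template_feat, axes=(0, 1))
    s_hat = np.fft.fft2(search_feat, axes=(0, 1))
    # conj(T) * S in the Fourier domain equals circular cross-correlation
    per_channel = np.fft.ifft2(np.conj(t_hat) * s_hat, axes=(0, 1)).real
    return per_channel.sum(axis=2)

def peak_location(response):
    """Row/column of the response maximum = predicted target displacement."""
    return np.unravel_index(np.argmax(response), response.shape)

# A target shifted by (2, 3) pixels should produce a response peak at (2, 3).
rng = np.random.default_rng(0)
template = rng.standard_normal((16, 16, 3))     # H x W x channels feature map
search = np.roll(template, shift=(2, 3), axis=(0, 1))
peak = tuple(map(int, peak_location(correlation_response(template, search))))  # -> (2, 3)
```

In a real tracker the feature maps would come from several CNN layers, giving one response map per layer; those per-layer responses are what the fusion step below combines.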
  • Fig. 1  Block diagram of the ASFTT algorithm

    Fig. 2  Overall precision and success-rate plots

    Fig. 3  Precision and success-rate plots for each attribute

    Fig. 4  Comparison of tracking results

    Table 1  Adaptive strategy fusion target tracking algorithm based on multi-layer convolutional features

     Input: target position in frame 1 of the video sequence; initial decision weights $w_1^1, w_1^2, \cdots, w_1^m$; $R_1^m = 0$, $l_1^m = 0$.
     Output: target position $({a_t},{b_t})$ in each frame.
     (1) // Weight initialization. Compute the initial weights of the $k$ trackers with Eq. (4);
     (2) for t = 2 to T (T is the total number of frames):
     (3) // Multi-layer feature extraction. Extract the features $x_t^k$ of the $k$ network layers for the image to be detected and the last-layer feature ${u'_1}$ of the template branch;
     (4) // Response computation. Compute the $k$ correlation-filter responses $R_t^k$ and the similarity response ${R'_t}$ with Eqs. (6) and (8);
     (5) // Adaptive decision fusion. First compute each decision-maker's predicted target position $(a_t^m, b_t^m)$ from the responses of step (4) with Eqs. (7) and (9), then compute the final target position $({a_t},{b_t})$ with Eq. (10);
     (6) // Weight update for the next frame. First compute each decision-maker's loss $L_t^m$ and current cost $p_t^m$ with Eqs. (11) and (12); next, update the stability model and compute each decision-maker's stability measure $r_t^m$ with Eqs. (13) and (14); compute the proportion $\alpha_t^m$ of each decision-maker's current cost $p_t^m$ with Eq. (15b) and the cumulative cost $S_t^m$ with Eq. (15a); update each decision-maker's weight $w_{t + 1}^m$ with Eq. (16); finally, update the weights of the $k$ trackers with Eq. (5);
     (7) end for;
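The fusion and weight-update loop of Table 1 can be sketched as below. The paper's stability-aware update (Eqs. (11)-(16)) is not reproduced; instead this uses the classic Hedge-style exponential weight update as a clearly labelled simplification, with `eta` a hypothetical learning-rate parameter and the example positions invented for illustration.

```python
import numpy as np

def fuse_decisions(positions, weights):
    """Weighted fusion of the per-tracker position decisions (cf. Eq. (10))."""
    w = np.asarray(weights, dtype=float)
    return w @ np.asarray(positions, dtype=float) / w.sum()

def update_weights(weights, losses, eta=0.5):
    """Exponential (Hedge-style) weight update: decision-makers with a larger
    loss on the current frame lose influence on the next frame."""
    w = np.asarray(weights, dtype=float) * np.exp(-eta * np.asarray(losses, dtype=float))
    return w / w.sum()

# Three decision-makers; the third has drifted away from the target.
positions = np.array([[50.0, 60.0], [51.0, 61.0], [60.0, 70.0]])
weights = np.ones(3) / 3

fused = fuse_decisions(positions, weights)          # fused position for frame t
losses = np.linalg.norm(positions - fused, axis=1)  # loss = distance from the fused decision
weights = update_weights(weights, losses)           # the drifting tracker is down-weighted
```

Using distance from the fused decision as the loss makes outlier trackers penalize themselves; the paper additionally folds in each decision-maker's historical stability before renormalizing.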
    下載: 導(dǎo)出CSV

    Table 2  Attributes of the test video sequences

    Sequence     Frames   Attributes
    basketball   725      deformation, occlusion, illumination variation, background clutter, etc.
    jumping      313      motion blur, fast motion
    shaking      365      illumination variation, background clutter, scale variation, etc.
    couple       140      out-of-plane rotation, scale variation, deformation, etc.
    下載: 導(dǎo)出CSV
Figures (4) / Tables (2)
Metrics
  • Article views:  3520
  • Full-text HTML views:  1512
  • PDF downloads:  174
  • Citations: 0
Publication history
  • Received: 2018-10-17
  • Revised: 2019-02-26
  • Available online: 2019-03-16
  • Issue published: 2019-10-01
