

Correlation Filter Algorithm Based on Adaptive Context Selection and Multiple Detection Areas

Lei PU, Xinxi FENG, Zhiqiang HOU, Wangsheng YU

Citation: Lei PU, Xinxi FENG, Zhiqiang HOU, Wangsheng YU. Correlation Filter Algorithm Based on Adaptive Context Selection and Multiple Detection Areas[J]. Journal of Electronics & Information Technology, 2020, 42(12): 3061-3067. doi: 10.11999/JEIT190931


doi: 10.11999/JEIT190931
Funding: The National Natural Science Foundation of China (61571458, 61703423)
Details
    Author biographies:

    PU Lei: Male, born in 1991, Ph.D. candidate. Research interests: computer vision, object tracking

    FENG Xinxi: Male, born in 1964, professor. Research interests: information fusion, pattern recognition

    HOU Zhiqiang: Male, born in 1973, professor. Research interests: image processing, computer vision

    YU Wangsheng: Male, born in 1985, lecturer. Research interests: image processing, pattern recognition

    Corresponding author:

    PU Lei, warmstoner@163.com

  • CLC classification number: TN911.73; TP391.4

  • Abstract: To further improve the discriminative power of correlation filter algorithms and their robustness in complex scenarios such as fast motion and occlusion, this paper proposes a tracking framework based on adaptive context selection and multiple detection areas. First, a peak analysis is performed on the response map obtained after detection. When the response is unimodal, four regions above, below, left, and right of the target are extracted as negative samples to train the model; when the response is multimodal, a peak extraction technique with threshold selection is used to extract the regions of the several largest peaks as negative samples. To further improve robustness to occlusion, a search strategy over multiple detection areas is also proposed. The framework is combined with a conventional correlation filter algorithm; experimental results show that, compared with the baseline algorithm, the proposed algorithm improves precision by 6.9% and success rate by 6.3%.
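The adaptive context selection described above (unimodal response: four fixed blocks around the target; multimodal response: blocks at the distractor peaks) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the 0.5 relative threshold, the 5-pixel neighbourhood, and the `offset` parameter are assumed values standing in for the paper's threshold selection method.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_response_peaks(response, rel_thresh=0.5, size=5):
    """Return (row, col) local maxima of a response map whose height
    exceeds rel_thresh times the global maximum (assumed criterion)."""
    local_max = maximum_filter(response, size=size) == response
    strong = response >= rel_thresh * response.max()
    ys, xs = np.nonzero(local_max & strong)
    return list(zip(ys.tolist(), xs.tolist()))

def select_context_regions(response, center, offset):
    """Adaptive context selection: one peak -> four blocks around the
    target; several peaks -> blocks at the distractor peak positions."""
    peaks = find_response_peaks(response)
    cy, cx = center
    if len(peaks) <= 1:  # unimodal: up / down / left / right blocks
        return [(cy - offset, cx), (cy + offset, cx),
                (cy, cx - offset), (cy, cx + offset)]
    # multimodal: distractor peaks become negative-sample centres
    return [p for p in peaks if p != (cy, cx)]
```

The regions returned here would be cropped from the frame and used as negative samples when training the filter.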
  • Fig. 1 Adaptive context selection strategy based on peak extraction from the response map

    Fig. 2 Search strategy over multiple detection areas

    Fig. 3 Precision and success-rate plots on the OTB100 benchmark

    Fig. 4 Qualitative analysis

    Table 1 Correlation filter algorithm based on adaptive context selection and multiple detection areas

     Input: image sequence I1, I2, ···, In; initial target position p0=(x0, y0).
     Output: tracking result pt=(xt, yt) for each frame.
     For t=1, 2, ···, n, do
      (1) Locate the target centre
      (a) Use the previous target position pt-1 to determine the ROI in
        frame t and extract HOG features;
      (b) Evaluate Eq. (3) over the multiple detection areas to obtain
        several response maps;
      (c) Take the maximum over the response maps as the target centre
        position pt.
      (2) Model update
      (a) Count the peaks of the resulting response map;
      (b) If unimodal, extract the four context blocks (up, down, left,
        right) for the model update;
      (c) If multimodal, take the context blocks at the peak positions as
        negative samples to train the model;
      (d) Update the model with Eq. (7).
     End
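The loop in Table 1 can be sketched with a single-channel MOSSE-style correlation filter standing in for the paper's HOG-based model. This is a structural illustration only: Eqs. (3) and (7) are not reproduced, and the learning rate `eta = 0.02` and regulariser `lam = 1e-4` are assumed values.

```python
import numpy as np

def gaussian_label(shape, sigma=2.0):
    """Desired response: a Gaussian peak centred on the target."""
    h, w = shape
    ys, xs = np.mgrid[:h, :w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

class SimpleCFTracker:
    """Sketch of Table 1 with a single-channel Fourier-domain filter."""
    def __init__(self, eta=0.02, lam=1e-4):
        self.eta, self.lam = eta, lam
        self.A = self.B = None  # filter numerator / denominator

    def train(self, patch):
        """Step (2): (re)train on a patch; negative-sample terms omitted."""
        G = np.fft.fft2(gaussian_label(patch.shape))
        F = np.fft.fft2(patch)
        A, B = G * np.conj(F), F * np.conj(F) + self.lam
        if self.A is None:
            self.A, self.B = A, B
        else:  # linear interpolation update, in the spirit of Eq. (7)
            self.A = (1 - self.eta) * self.A + self.eta * A
            self.B = (1 - self.eta) * self.B + self.eta * B

    def detect(self, patches):
        """Step (1): evaluate several detection areas, keep the best peak."""
        H = self.A / self.B
        responses = [np.real(np.fft.ifft2(H * np.fft.fft2(p))) for p in patches]
        best = int(np.argmax([r.max() for r in responses]))
        dy, dx = np.unravel_index(responses[best].argmax(), responses[best].shape)
        return best, (dy, dx), responses[best]
```

In the paper's framework, `patches` would be the multiple detection areas cropped around the previous position, and the chosen response map would then feed the peak analysis that drives the context selection.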

    Table 2  Comparison of algorithm tracking speed

                     Proposed  DCF_CA  DCF    DSST   TLD    MOSSE_CA
    Success rate     0.586     0.566   0.523  0.552  0.448  0.488
    Precision        0.808     0.776   0.739  0.731  0.633  0.642
    Speed (FPS)      53.5      82.3    333.0  28.3   33.4   115.0
Publication history
  • Received: 2019-11-20
  • Revised: 2020-05-26
  • Available online: 2020-06-01
  • Published: 2020-12-08
