Moving Object Detection Method via Superpixels Based on Spatiotemporal Multi-cue Fusion
doi: 10.11999/JEIT150950
Funds:
National Science and Technology Major Project of China (2014ZX03006003)
Abstract: Moving object detection is a challenging problem in computer vision. This paper proposes a superpixel-based moving object detection method built on spatiotemporal multi-cue fusion. First, the current frame is segmented into a set of superpixels with the simple linear iterative clustering (SLIC) algorithm, and the foreground superpixel sub-blocks that contain motion information are identified from pixel-level time-varying cues between frames. Next, a foreground object model of the previous frame is built according to the consistency principle of the moving object and is combined with the spatial cues of the object to further determine the detection window that contains the moving object. Finally, the detection problem is converted into a segmentation problem, and the object is segmented out of the window using dense corner detection. Comparative experiments with several state-of-the-art detection algorithms on challenging public video sequences demonstrate the effectiveness and superiority of the proposed method.

Keywords:
- Moving object detection
- Superpixel segmentation
- Spatiotemporal multi-cues
- Foreground object model
- Object segmentation
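To make the pipeline summarized in the abstract concrete, the following is a minimal Python sketch of its three stages (SLIC segmentation with inter-frame motion cues, detection-window estimation, corner-density segmentation), assuming OpenCV and scikit-image are available. The thresholds DIFF_THR and MOTION_RATIO, the function names, and the simple bounding-box window are illustrative placeholders, not the authors' exact formulation.

```python
# Minimal sketch of the superpixel-based detection pipeline described above.
# Assumptions: OpenCV (cv2), NumPy, scikit-image; all thresholds are hypothetical.
import cv2
import numpy as np
from skimage.segmentation import slic

DIFF_THR = 25        # hypothetical threshold on inter-frame gray-level difference
MOTION_RATIO = 0.3   # hypothetical fraction of "moving" pixels marking a foreground superpixel

def foreground_superpixels(prev_bgr, cur_bgr, n_segments=300):
    """Segment the current frame with SLIC and keep superpixels whose
    pixel-level time-varying cue (frame difference) suggests motion."""
    labels = slic(cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2RGB),
                  n_segments=n_segments, compactness=10, start_label=0)
    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY))
    moving = diff > DIFF_THR
    fg_ids = [sp for sp in np.unique(labels)
              if moving[labels == sp].mean() > MOTION_RATIO]
    return labels, fg_ids

def detection_window(labels, fg_ids):
    """Bounding box of the foreground superpixels: a simple stand-in for the
    spatial-cue refinement step of the paper."""
    mask = np.isin(labels, fg_ids)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

def segment_by_corner_density(cur_bgr, window):
    """Rough object mask from dense corner detection inside the window."""
    x0, y0, x1, y1 = window
    roi = cv2.cvtColor(cur_bgr[y0:y1 + 1, x0:x1 + 1], cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(roi, maxCorners=500,
                                      qualityLevel=0.01, minDistance=3)
    mask = np.zeros(roi.shape, np.uint8)
    if corners is not None:
        for x, y in corners.reshape(-1, 2).astype(int):
            cv2.circle(mask, (int(x), int(y)), 5, 255, -1)  # grow a disk around each corner
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                                np.ones((9, 9), np.uint8))   # merge dense corner clusters
    return mask

# Typical use on two consecutive frames read with cv2.VideoCapture:
# labels, fg_ids = foreground_superpixels(prev, cur)
# win = detection_window(labels, fg_ids)
# obj_mask = segment_by_corner_density(cur, win) if win else None
```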
Metrics
- Article views: 1524
- Full-text HTML views: 118
- PDF downloads: 774
- Citations: 0