Night-vision Image Fusion Based on Intensity Transformation and Two-scale Decomposition
doi: 10.11999/JEIT180407
1. School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
2. Photoelectric Engineering College, Changchun University of Science and Technology, Changchun 130022, China
3. Academy of Opto-electronics, Chinese Academy of Sciences, Beijing 100094, China
-
Abstract: To obtain night-vision fusion images that are better suited to human perception, a night-vision image fusion algorithm based on intensity transformation and two-scale decomposition is proposed. First, the pixel values of the infrared image are used as exponential factors to apply an intensity transformation to the visible image, which enhances the visible image and at the same time converts the infrared-visible fusion task into the fusion of homogeneous images. Second, the enhanced result and the original visible image are each decomposed into base and detail layers with a simple average filter. Third, the detail layers are fused using visual weight maps. Finally, the fused image is reconstructed by combining these results. Because the proposed method presents its result in the visible spectral band, the fused image is better suited to visual perception. Experimental results show that the proposed method outperforms the other five comparison methods in both visual quality and objective evaluation, and its fusion time is less than 0.2 s, which meets real-time requirements. In the fused result, background details are clear and thermal targets are highlighted, while processing time is reduced.
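The abstract only outlines the four processing steps, so the Python/NumPy sketch below shows one plausible reading of the pipeline. The exact transfer function, the filter sizes, the Laplacian-based visual weight, and the way the base layers are combined are illustrative assumptions, not the paper's published formulation.

```python
# Illustrative sketch of the fusion pipeline described in the abstract.
# All specific mappings and parameters here are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def intensity_transform(vis, ir):
    """Step 1: use IR pixel values as exponential factors to enhance the visible image.
    A gamma-like mapping vis**gamma(ir) is assumed here."""
    vis = vis.astype(np.float64) / 255.0
    ir = ir.astype(np.float64) / 255.0
    gamma = 1.0 - 0.5 * ir           # assumed mapping: hotter IR pixels -> stronger boost
    return np.power(vis, gamma)      # enhanced image, still in the visible band

def two_scale_decompose(img, size=31):
    """Step 2: mean (average) filtering splits the image into base + detail layers."""
    base = uniform_filter(img, size=size)
    return base, img - base

def visual_weight(detail, size=7):
    """Step 3 helper: a simple visual-weight map; local Laplacian energy is used
    here as a stand-in for the paper's visual weight definition."""
    return uniform_filter(np.abs(laplace(detail)), size=size) + 1e-12

def fuse(vis, ir):
    enhanced = intensity_transform(vis, ir)
    vis_n = vis.astype(np.float64) / 255.0

    base_e, detail_e = two_scale_decompose(enhanced)
    base_v, detail_v = two_scale_decompose(vis_n)

    # Step 3: weighted combination of the two detail layers.
    w_e, w_v = visual_weight(detail_e), visual_weight(detail_v)
    detail_f = (w_e * detail_e + w_v * detail_v) / (w_e + w_v)

    # Step 4: reconstruction; the enhanced base layer is assumed to carry the IR cue.
    fused = base_e + detail_f
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```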
-
Table 1 Objective performance metrics of different fusion methods

Image      Metric        LAP        ROLP       CVT        DTCWT      ADF        Proposed
Quad       $\hat{\mu}$   52.5067    55.5025    51.9005    51.8983    51.7756    70.1690
           $\sigma$      31.5616    28.2624    25.1804    25.2682    21.9894    34.3756
           $E_f$         6.4729     6.1093     6.1692     6.1586     6.0398     6.7689
UNcamp     $\hat{\mu}$   90.8149    96.3052    91.0868    91.0788    91.1387    124.2739
           $\sigma$      29.1292    27.7301    26.9391    26.2760    23.2265    38.3262
           $E_f$         6.6550     6.5508     6.5310     6.4847     6.2865     7.2638
Kaptein    $\hat{\mu}$   82.1788    86.1979    82.1010    82.0766    82.0353    122.6444
           $\sigma$      36.2649    35.7918    34.1582    33.6152    31.6902    51.6181
           $E_f$         6.7763     6.7911     6.7779     6.7054     6.6047     7.4176
Steamboat  $\hat{\mu}$   110.9204   113.3709   110.9161   110.9148   110.9183   163.6281
           $\sigma$      14.0743    13.8319    12.4700    12.3160    11.0786    26.4028
           $E_f$         5.3071     5.3595     5.2087     5.1377     5.0049     5.9645
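Table 1 does not define its symbols on this page, but $\hat{\mu}$, $\sigma$, and $E_f$ conventionally denote the fused image's mean gray level, standard deviation, and information entropy. The short sketch below computes these standard quantities; the 256-bin histogram and base-2 logarithm are assumptions about the exact definitions used.

```python
# Minimal sketch of the three objective metrics assumed in Table 1.
import numpy as np

def objective_metrics(fused):
    f = fused.astype(np.float64)
    mu = f.mean()                        # \hat{\mu}: average gray level
    sigma = f.std()                      # \sigma: contrast (standard deviation)
    hist, _ = np.histogram(fused, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))    # E_f: information entropy in bits
    return mu, sigma, entropy
```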
Table 2 Comparison of processing times (s)

Image      Size      LAP      ROLP     CVT      DTCWT    ADF      Proposed
Quad       496×632   0.0193   0.1931   1.9994   0.5288   0.9267   0.1681
UNcamp     270×360   0.0094   0.1076   1.2281   0.2480   0.3225   0.1021
Kaptein    450×620   0.0203   0.1919   1.8308   0.4891   0.8570   0.1341
Steamboat  510×505   0.0127   0.1771   1.7049   0.4434   0.8472   0.1192
Average    –         0.0247   0.1674   1.6908   0.4273   0.7384   0.1309