doi: 10.11999/JEIT240836
Noise Reduction and Temperature Field Reconstruction of Flame Light Field Images Based on Improved U-network

1. Key Laboratory of Electromagnetic Wave Information Technology and Metrology of Zhejiang Province, College of Information Engineering, China Jiliang University, Hangzhou 310018, China
2. College of Metrology and Measurement Engineering, China Jiliang University, Hangzhou 310018, China
Abstract: Radiation noise and imaging noise introduced during the formation of flame light field images degrade the accuracy of 3D flame temperature field reconstruction. This paper proposes a denoising model based on an improved U-network (UNet), which adds a background purification module and an edge information optimization module designed for the characteristics of radiation and imaging noise and for the texture of complex flame images. Dense convolution operations extract features from the image background layer, purifying the radiation noise embedded in it. The symmetric encoder-decoder structure and skip connections of the UNet module suppress inter-channel radiation noise and surface imaging noise. Finally, the edge optimization module extracts image detail, yielding higher-quality flame light field images. In numerical simulation, mixed radiation and imaging noise at a signal-to-noise ratio of 10 dB is added to flame light field images; after denoising with the proposed model, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) reach 47 dB and 0.9931, a clear advantage over other denoising models. When flame light field images are denoised with the proposed model before temperature field reconstruction, the measured average relative reconstruction error is about 37% to 57% lower than without denoising, markedly improving the accuracy of 3D flame temperature field reconstruction. In the experiments, light field images of real candle and butane flames are acquired; after denoising with the proposed model, the SSIM reaches 0.9870 for the candle flame image and 0.9808 for the butane flame image.
Keywords:
- Image processing /
- Image denoising /
- Deep learning /
- Flame light field image /
- 3D temperature field reconstruction
Abstract: Objective  This study establishes the nonlinear relationship between flame light field images and the 3D temperature field using deep learning techniques, enabling rapid 3D reconstruction of the flame temperature field. However, light field images are prone to radiation and imaging noise during transmission and imaging, which significantly degrades image quality and reduces the accuracy of temperature field reconstruction. Therefore, denoising of flame light field images, with maximum preservation of texture and edge details, is critical for high-precision 3D reconstruction. Deep learning-based denoising algorithms are capable of accommodating a broad range of noise distributions and are particularly effective in enhancing texture and contour information without requiring extensive prior knowledge. Given the complexity of noise in flame light field images, deep learning methods present an optimal solution for denoising.

Methods  This paper presents a denoising model based on an improved UNet network, designed to address radiation and imaging noise, as well as the texture information in complex flame images. The model reduces noise and optimizes the flame light field image through three modules: the background purification module, the UNet denoising module, and the edge optimization module. Feature extraction is performed on the image background layer using dense convolution operations, with a focus on purifying the radiated noise embedded in the background. The symmetrical encoder-decoder network structure and skip connections in the UNet module help to reduce both radiation noise between channels and imaging noise on the surface. The edge optimization module is tailored to extract detailed information from the image, aiming to enhance the quality of the flame light field images. Comparative and ablation experiments confirm the superior noise reduction performance and effectiveness of the proposed modules.
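The symmetric encoder-decoder structure with skip connections described above can be illustrated at the level of tensor shapes. The following numpy sketch is only an analogy: mean pooling, nearest-neighbour upsampling, and channel concatenation stand in for the learned convolutional blocks, and all sizes are illustrative, not the paper's actual configuration.

```python
import numpy as np

def down(x):
    """2x2 mean pooling: halves the spatial resolution (encoder step)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up(x):
    """Nearest-neighbour upsampling: doubles the spatial resolution (decoder step)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def skip_concat(decoder_feat, encoder_feat):
    """Skip connection: concatenate encoder features onto the decoder path."""
    return np.concatenate([decoder_feat, encoder_feat], axis=0)

x = np.random.default_rng(0).standard_normal((16, 64, 64))  # (channels, H, W)
e1 = down(x)              # (16, 32, 32): encoder level 1
e2 = down(e1)             # (16, 16, 16): bottleneck
d1 = up(e2)               # (16, 32, 32): decoder level 1
d1 = skip_concat(d1, e1)  # (32, 32, 32): skip connection restores encoder detail
out = up(d1)              # (32, 64, 64): back at input resolution
```

The skip connection is what lets the decoder recover fine texture lost in pooling, which is why the abstract stresses preservation of texture and edge details.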
Results and Discussions  In the numerical simulation, radiation noise and imaging noise are added to the flame light field image, generating three types of datasets: single radiation noise, single imaging noise, and mixed noise. In the denoising experiment, the BUE denoising model is compared with UNet, CBDNet, DnCNN, and BRDNet. The denoising results (Fig. 4) show that the PSNR and SSIM values of our BUE model exceed those of the other models, reaching 47 dB and 0.9931, respectively. Analysis of the four denoised texture images (Fig. 5) demonstrates that the BUE model effectively removes background noise while preserving internal details, such as texture and contour features. Ablation experiments are also conducted by adding the BPM and EIEM modules to the UNet benchmark model. The experimental results (Fig. 5, Fig. 6) confirm the effectiveness of the BPM and EIEM modules. Subsequently, the flame light field image is denoised using the proposed model, followed by reconstruction of the temperature field (Fig. 8). The average relative error of the reconstruction is reduced by approximately 37% to 57% compared to the non-denoised case, significantly improving the accuracy of the 3D flame temperature field reconstruction. In the real-world experiment, light field images of a candle flame and a butane flame are obtained. The SSIM values after denoising using the BUE model are 0.9870 and 0.9808, respectively.

Conclusions  This paper presents a BUE denoising method based on the UNet model, incorporating a background purification module and an edge information enhancement module. This approach effectively extracts the background, reduces noise, and enhances contour and texture details in noisy flame light field images. The noise reduction performance of the model is evaluated through numerical simulations, and the results demonstrate the following: (1) Compared to UNet, CBDNet, DnCNN, and BRDNet, the proposed BUE denoising model shows significant advantages. Under mixed noise conditions with a signal-to-noise ratio of 10 dB, the model achieves a PSNR of 47 dB and an SSIM of 0.9931. Specifically, the PSNR improves by approximately 23.68% compared to UNet and 4.44% compared to DnCNN. (2) By integrating BUE as a denoising preprocessing module into the temperature field reconstruction model, the results show that incorporating denoising reduces the average relative error by approximately 37% to 57% compared to reconstruction without denoising. (3) Real candle flame and butane flame light field images are acquired, and the proposed noise reduction model achieves SSIM values of 0.9870 for the candle flame image and 0.9808 for the butane flame image after denoising.
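The PSNR figures quoted above follow the standard definition PSNR = 10·log10(MAX²/MSE). A minimal numpy sketch of that metric (the choice of MAX = 255 for 8-bit images is the usual convention, not something stated in the paper; SSIM is more involved and is typically taken from a library such as scikit-image):

```python
import numpy as np

def psnr(clean: np.ndarray, denoised: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Two flat 8-bit images differing by 10 grey levels: MSE = 100,
# so PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
a = np.zeros((8, 8))
b = np.full((8, 8), 10.0)
print(round(psnr(a, b), 2))  # 28.13
```

Higher PSNR means smaller residual error; the 47 dB reported for the BUE model corresponds to a very small mean squared difference from the clean image.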
Table 1  Synthetic datasets

| Dataset type | Noise description (dB) | Number |
| --- | --- | --- |
| Noise-free images | None | 800 |
| Single-radiation-noise images | Radiation noise ($ \eta_{\text{rad}} $ = 10, 15, 20) | 800 |
| Single-imaging-noise images | Imaging noise ($ \eta_{\text{img}} $ = 10, 15, 20) | 800 |
| Mixed-noise images | Equal radiation and imaging noise ($ \eta_{\text{rad}} $ = $ \eta_{\text{img}} $ = 10, 15, 20) | 800 |
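The noise levels η in Table 1 are given in dB and can be read as target signal-to-noise ratios. A minimal numpy sketch of injecting noise at a prescribed SNR follows; the additive zero-mean Gaussian model here is a stand-in assumption, not necessarily the paper's exact radiation or imaging noise model, and the image values are illustrative.

```python
import numpy as np

def add_noise_at_snr(image: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add zero-mean Gaussian noise scaled so the result has the target SNR (dB).

    SNR(dB) = 10 * log10(P_signal / P_noise), so the required noise power is
    P_noise = P_signal / 10**(snr_db / 10).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    p_signal = np.mean(image.astype(np.float64) ** 2)   # mean signal power
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))      # target noise power
    noise = rng.normal(0.0, np.sqrt(p_noise), image.shape)
    return image + noise

# "Mixed noise": two independent noise fields (standing in for radiation and
# imaging noise) injected one after the other, each at 10 dB.
clean = np.full((64, 64), 100.0)
noisy = add_noise_at_snr(add_noise_at_snr(clean, 10.0),
                         10.0, rng=np.random.default_rng(1))
```

With this construction, the realized SNR of a single injection matches the requested value up to sampling fluctuation, which is what makes the 10/15/20 dB dataset levels reproducible.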
Table 2  MRE and SSIM results of flame temperature fields reconstructed by MobileNet and BUE-MobileNet

| Noise | $ \eta $ (dB) | MobileNet (MRE(%)/SSIM) | BUE-MobileNet (MRE(%)/SSIM) |
| --- | --- | --- | --- |
| Radiation noise | 15 | 0.35/0.9990 | 0.16/0.9997 |
| Imaging noise | 15 | 0.33/0.9990 | 0.14/0.9998 |
| Mixed noise | 15 | 0.45/0.9943 | 0.28/0.9985 |
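The "about 37% to 57%" error reduction quoted in the abstract follows directly from Table 2 as the relative improvement (MRE_before − MRE_after) / MRE_before. A minimal check, with the MRE definition written out (the element-wise formula is the standard mean relative error, assumed rather than quoted from the paper):

```python
import numpy as np

def mre(t_true: np.ndarray, t_rec: np.ndarray) -> float:
    """Mean relative error (%) between true and reconstructed temperature fields."""
    return float(np.mean(np.abs(t_rec - t_true) / t_true) * 100.0)

# Relative reduction of MRE after denoising, per noise type (MRE values from Table 2).
pairs = {"radiation": (0.35, 0.16), "imaging": (0.33, 0.14), "mixed": (0.45, 0.28)}
reduction = {name: (before - after) / before * 100.0
             for name, (before, after) in pairs.items()}
# radiation ≈ 54.3%, imaging ≈ 57.6%, mixed ≈ 37.8% — consistent with "37% to 57%".
```

The smallest improvement (mixed noise, ≈ 37.8%) and the largest (imaging noise, ≈ 57.6%) bracket the range reported in the abstract.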