Image Enhancement under Transformer Oil Based on Multi-Scale Weighted Retinex
doi: 10.11999/JEIT240645
1. College of Electrical Engineering, Sichuan University, Chengdu 610065, China
2. State Key Laboratory of Intelligent Construction and Healthy Operation and Maintenance of Deep Underground Engineering, Sichuan University, Chengdu 610065, China
Abstract: To address the color distortion, low brightness, and detail loss found in images captured under transformer oil, this paper proposes a multi-scale weighted Retinex enhancement algorithm for such images. First, to mitigate color distortion, a hybrid dynamic color channel compensation algorithm is proposed, which dynamically compensates the attenuated channels according to the attenuation state of each channel in the captured image. Second, to address detail loss, a sharpening-based weighting strategy is proposed. Finally, a pyramid multi-scale fusion strategy is used, in a novel way, to fuse the Retinex reflectance components at different scales with their corresponding weight maps, yielding a clear image under transformer oil. Experimental results show that the proposed algorithm effectively resolves the complex degradation of images captured under transformer oil.
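The sharpening-based weighting idea mentioned in the abstract can be illustrated with a minimal NumPy sketch: a per-pixel detail weight derived from an unsharp-mask residual. The helper names and the 3×3 mean filter below are assumptions for illustration, not the paper's notation or exact filter.

```python
import numpy as np

def mean3(channel):
    # 3x3 mean filter with edge padding (NumPy only)
    h, w = channel.shape
    p = np.pad(channel, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def sharpening_weight(channel):
    # Unsharp-mask magnitude as a detail weight: pixels with strong
    # high-frequency content receive larger weights in the fusion.
    detail = np.abs(channel - mean3(channel))
    return detail / (detail.max() + 1e-6)  # normalised to [0, 1]
```

In a fusion scheme of this kind, each reflectance component would be multiplied by its normalized weight map before blending, so that well-defined edges dominate the fused result.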
Keywords:
- Image enhancement under transformer oil /
- Retinex /
- Channel compensation /
- Multi-scale weighting
Abstract:
Objective: Large oil-immersed transformers are critical in power systems, and their operational status is essential for maintaining grid stability and reliability. Periodic inspections are necessary to identify and resolve transformer faults and ensure normal operation. However, manual inspections require significant human and material resources, and conventional inspection methods often fail to promptly detect or accurately locate internal faults, which may ultimately shorten transformer lifespan. Robots equipped with visual systems can replace manual inspection for fault identification inside oil-immersed transformers, enabling timely fault detection and a wider inspection range. However, high-definition visual imaging is crucial for effective robotic fault detection. Transformer oil degrades and discolors under high-temperature, high-pressure conditions, and these effects vary over time: the oil color typically shifts from pale yellow to reddish-brown, and the types and forms of suspended particles evolve dynamically. These factors cause complex light attenuation and scattering, leading to color distortion and detail loss in captured images. Additionally, the sealed metallic structure of oil-immersed transformers forces robots to rely on onboard artificial light sources during inspection; the limited illumination from these sources further reduces image brightness, hindering clarity and degrading fault detection accuracy. To address color distortion, low brightness, and detail loss in images captured under transformer oil, this paper proposes a multi-scale weighted Retinex algorithm for image enhancement.
Methods: To mitigate color distortion, a hybrid dynamic color channel compensation algorithm is proposed, which dynamically adjusts compensation based on the attenuation of each channel in the captured image. To address detail loss, a sharpening weight strategy is applied. Finally, a pyramid multi-scale fusion strategy integrates Retinex reflection components from multiple scales with their corresponding weight maps, producing clearer images under transformer oil.
Results and Discussions: Qualitative experimental results (Fig. 5, Fig. 6, Fig. 7) indicate that the UCM algorithm, based on non-physical models, achieves color correction by assuming minimal attenuation in the blue channel. However, because of the dynamic changes in transformer oil, the least-attenuated channel varies, reducing the algorithm's generalization capability. Enhancement results from physical-model algorithms, including UDCP, IBLA, and ULAP, exhibit low brightness, often leading to the loss of critical image details; moreover, these methods not only fail to resolve color distortion but frequently intensify it. Deep learning-based algorithms, such as Water-Net, Shallow-UWnet, and UDnet, mitigate mild color distortion, but their results still suffer from low brightness and blurred details. In contrast, the algorithm proposed in this paper fully accounts for the dynamic characteristics of transformer oil, effectively addressing color distortion, blurring, and detail loss in images captured under transformer oil. Quantitative experiments (Table 1) show that, compared with the original images, the UIQM value of images enhanced by the proposed algorithm increased by an average of 121.206%, the FDUM value increased by an average of 105.978%, and the NIQE value decreased by an average of 6.772%. Both qualitative and quantitative results demonstrate that the proposed algorithm effectively resolves image degradation under transformer oil and outperforms the comparison methods. Additionally, applicability tests reveal that the algorithm not only performs well on transformer oil images but also shows strong enhancement capability on underwater images.
Conclusions: Experimental results demonstrate that the proposed algorithm effectively addresses the complex degradation of images captured under transformer oil. Although it achieves superior enhancement performance, processing a 1280×720 image takes 2.16 s on average, which does not meet the demands of embedded real-time applications such as robotic inspection. Future research will focus on optimizing the algorithm to improve its real-time performance.
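The Retinex decomposition underlying the method can be sketched in a few lines of NumPy. This is a minimal sketch of standard multi-scale Retinex with equal scale weights; the paper's method instead fuses the per-scale reflectance components with sharpening-based weight maps through an image pyramid, which is not reproduced here, and the scale values are conventional defaults rather than the paper's settings.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur with edge padding (NumPy only).
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.pad(img, ((r, r), (0, 0)), mode="edge")
    out = np.apply_along_axis(np.convolve, 0, out, k, mode="valid")
    out = np.pad(out, ((0, 0), (r, r)), mode="edge")
    out = np.apply_along_axis(np.convolve, 1, out, k, mode="valid")
    return out

def multiscale_retinex(channel, sigmas=(15, 80, 250)):
    # log R = log I - log L: illumination L is estimated by Gaussian
    # smoothing at several scales; equal weights stand in for the
    # paper's pyramid-fused sharpening weights.
    eps = 1e-6
    logs = [np.log(channel + eps) - np.log(gaussian_blur(channel, s) + eps)
            for s in sigmas]
    return np.mean(logs, axis=0)
```

On a constant image the estimated illumination equals the input, so the log-reflectance is zero everywhere, which is a quick sanity check for any Retinex implementation.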
Algorithm 1 Hybrid dynamic color channel compensation
Input: captured image ${I_{{\text{in}}}}$, gain coefficient $\omega$
Output: compensated image ${I_{{\text{out}}}}$
(1) $B,G,R \leftarrow {\text{split}}({I_{{\text{in}}}})$
(2) ${I_{{\text{Max}}}} \leftarrow \max (\bar R,\bar G,\bar B)$
(3) ${I_{{\text{Min}}}} \leftarrow \min (\bar R,\bar G,\bar B)$
(4) if ${I_{{\text{Max}}}} = \bar R$ then
(5)  if ${I_{{\text{Min}}}} = \bar G$ then
(6)   compute ${V_{{\text{com\_min}}}},{V_{{\text{com\_med}}}}$ with ${V_{\min }} = G$, ${V_{{\text{med}}}} = B$, ${V_{\max }} = R$
(7)  end if
(8)  if ${I_{{\text{Min}}}} = \bar B$ then
(9)   compute ${V_{{\text{com\_min}}}},{V_{{\text{com\_med}}}}$ with ${V_{\min }} = B$, ${V_{{\text{med}}}} = G$, ${V_{\max }} = R$
(10)  end if
(11) end if
(12) if ${I_{{\text{Max}}}} = \bar B$ then
(13)  if ${I_{{\text{Min}}}} = \bar R$ then
(14)   compute ${V_{{\text{com\_min}}}},{V_{{\text{com\_med}}}}$ with ${V_{\min }} = R$, ${V_{{\text{med}}}} = G$, ${V_{\max }} = B$
(15)  end if
(16)  if ${I_{{\text{Min}}}} = \bar G$ then
(17)   compute ${V_{{\text{com\_min}}}},{V_{{\text{com\_med}}}}$ with ${V_{\min }} = G$, ${V_{{\text{med}}}} = R$, ${V_{\max }} = B$
(18)  end if
(19) end if
(20) if ${I_{{\text{Max}}}} = \bar G$ then
(21)  if ${I_{{\text{Min}}}} = \bar R$ then
(22)   compute ${V_{{\text{com\_min}}}},{V_{{\text{com\_med}}}}$ with ${V_{\min }} = R$, ${V_{{\text{med}}}} = B$, ${V_{\max }} = G$
(23)  end if
(24)  if ${I_{{\text{Min}}}} = \bar B$ then
(25)   compute ${V_{{\text{com\_min}}}},{V_{{\text{com\_med}}}}$ with ${V_{\min }} = B$, ${V_{{\text{med}}}} = R$, ${V_{\max }} = G$
(26)  end if
(27) end if
(28) ${I_{{\text{out}}}} \leftarrow {\text{merge}}(\bar B,\bar G,\bar R)$
(29) return ${I_{{\text{out}}}}$
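Algorithm 1 above only fixes the channel ordering; the compensation formulas for ${V_{{\text{com\_min}}}}$ and ${V_{{\text{com\_med}}}}$ come from the paper body, which is not included here. As a hedged illustration, the sketch below ranks the channels by mean intensity and applies an Ancuti-style compensation toward the dominant channel; the compensation formula is a stand-in, not the paper's.

```python
import numpy as np

def compensate_channels(img, omega=1.0):
    # img: float image in [0, 1], shape (H, W, 3), channel order B, G, R.
    chans = [img[..., i] for i in range(3)]
    means = [c.mean() for c in chans]
    order = np.argsort(means)          # channel indices, weakest first
    i_max = int(order[-1])
    v_max, m_max = chans[i_max], means[i_max]
    out = img.copy()
    for i in order[:2]:                # compensate the two weaker channels
        # Transfer energy from the dominant channel, strongest where
        # the attenuated channel is dark (Ancuti-style compensation).
        out[..., i] = chans[i] + omega * (m_max - means[i]) * (1 - chans[i]) * v_max
    return np.clip(out, 0.0, 1.0)
```

With an input whose blue channel is heavily attenuated, the blue mean is pulled toward the dominant channel while the strongest channel is left untouched, matching the dynamic-ordering intent of Algorithm 1.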
Table 1 No-reference image quality assessment results (UIQM, FDUM, and NIQE)
Metric  Original  UCM    UDCP   IBLA   ULAP   Water-Net  Shallow-UWnet  UDnet  Proposed
UIQM    1.476     1.943  1.417  2.144  1.379  2.272      1.273          1.880  3.265
FDUM    0.184     0.224  0.229  0.298  0.294  0.249      0.187          0.183  0.379
NIQE    5.021     4.864  5.315  5.714  4.815  4.859      5.240          4.754  4.681
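The average gains quoted in the abstract follow directly from the mean scores in Table 1 (original column vs. proposed column):

```python
orig = {"UIQM": 1.476, "FDUM": 0.184, "NIQE": 5.021}
ours = {"UIQM": 3.265, "FDUM": 0.379, "NIQE": 4.681}

uiqm_gain = (ours["UIQM"] - orig["UIQM"]) / orig["UIQM"] * 100  # higher is better
fdum_gain = (ours["FDUM"] - orig["FDUM"]) / orig["FDUM"] * 100  # higher is better
niqe_drop = (orig["NIQE"] - ours["NIQE"]) / orig["NIQE"] * 100  # lower is better

print(round(uiqm_gain, 3), round(fdum_gain, 3), round(niqe_drop, 3))
# → 121.206 105.978 6.772
```

These reproduce the 121.206% UIQM increase, 105.978% FDUM increase, and 6.772% NIQE decrease reported in the abstract.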
Table 2 Ablation study of different modules
HCC  DW  PF   UIQM   FDUM   NIQE
–    –   –    1.476  0.184  5.021
√    –   –    1.508  0.206  4.982
–    √   –    2.965  0.283  4.630
√    √   –    3.112  0.343  4.501
√    √   √    3.265  0.379  4.681
[1] JHA M and BHANDARI A K. CBLA: Color balanced locally adjustable underwater image enhancement[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 5020911. doi: 10.1109/TIM.2024.3396850.
[2] ZHANG Dehuan, WU Chenyu, ZHOU Jingchun, et al. Robust underwater image enhancement with cascaded multi-level sub-networks and triple attention mechanism[J]. Neural Networks, 2024, 169: 685–697. doi: 10.1016/j.neunet.2023.11.008.
[3] YANG H Y, CHEN Peiyin, HUANG C C, et al. Low complexity underwater image enhancement based on dark channel prior[C]. 2011 Second International Conference on Innovations in Bio-inspired Computing and Applications, Shenzhen, China, 2011: 17–20. doi: 10.1109/IBICA.2011.9.
[4] QIANG Hu, ZHONG Yuzhong, ZHU Yuqi, et al. Underwater image enhancement based on multichannel adaptive compensation[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 5014810. doi: 10.1109/TIM.2024.3378290.
[5] DREWS JR P, DO NASCIMENTO E, MORAES F, et al. Transmission estimation in underwater single images[C]. 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2013: 825–830. doi: 10.1109/ICCVW.2013.113.
[6] SONG Wei, WANG Yan, HUANG Dongmei, et al. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration[C]. Proceedings of the 19th Pacific-Rim Conference on Multimedia on Advances in Multimedia Information Processing – PCM 2018, Hefei, China, 2018: 678–688. doi: 10.1007/978-3-030-00776-8_62.
[7] ZHANG Song, ZHAO Shili, AN Dong, et al. LiteEnhanceNet: A lightweight network for real-time single underwater image enhancement[J]. Expert Systems with Applications, 2024, 240: 122546. doi: 10.1016/j.eswa.2023.122546.
[8] WANG Zhengyong, SHEN Liquan, XU Mai, et al. Domain adaptation for underwater image enhancement[J]. IEEE Transactions on Image Processing, 2023, 32: 1442–1457. doi: 10.1109/TIP.2023.3244647.
[9] MI Zetian, JIN Jie, LI Yuanyuan, et al. Underwater image enhancement method based on multi-scale cascade network[J]. Journal of Electronics & Information Technology, 2022, 44(10): 3353–3362. doi: 10.11999/JEIT220375. (in Chinese)
[10] LI Chongyi, ANWAR S, and PORIKLI F. Underwater scene prior inspired deep underwater image and video enhancement[J]. Pattern Recognition, 2020, 98: 107038. doi: 10.1016/j.patcog.2019.107038.
[11] RAO Yuan, LIU Wenjie, LI Kunqian, et al. Deep color compensation for generalized underwater image enhancement[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(4): 2577–2590. doi: 10.1109/TCSVT.2023.3305777.
[12] WANG Keyan, HU Yan, CHEN Jun, et al. Underwater image restoration based on a parallel convolutional neural network[J]. Remote Sensing, 2019, 11(13): 1591. doi: 10.3390/rs11131591.
[13] WU Shengcong, LUO Ting, JIANG Gangyi, et al. A two-stage underwater enhancement network based on structure decomposition and characteristics of underwater imaging[J]. IEEE Journal of Oceanic Engineering, 2021, 46(4): 1213–1227. doi: 10.1109/JOE.2021.3064093.
[14] BUCHSBAUM G. A spatial processor model for object colour perception[J]. Journal of the Franklin Institute, 1980, 310(1): 1–26. doi: 10.1016/0016-0032(80)90058-7.
[15] LAND E H and MCCANN J J. Lightness and retinex theory[J]. Journal of the Optical Society of America, 1971, 61(1): 1–11. doi: 10.1364/JOSA.61.000001.
[16] JOBSON D J, RAHMAN Z, and WOODELL G A. Properties and performance of a center/surround retinex[J]. IEEE Transactions on Image Processing, 1997, 6(3): 451–462. doi: 10.1109/83.557356.
[17] RAHMAN Z, JOBSON D J, and WOODELL G A. Multi-scale retinex for color image enhancement[C]. The 3rd IEEE International Conference on Image Processing, Lausanne, Switzerland, 1996: 1003–1006. doi: 10.1109/ICIP.1996.560995.
[18] PANETTA K, GAO Chen, and AGAIAN S. Human-visual-system-inspired underwater image quality measures[J]. IEEE Journal of Oceanic Engineering, 2016, 41(3): 541–551. doi: 10.1109/JOE.2015.2469915.
[19] YANG Ning, ZHONG Qihang, LI Kun, et al. A reference-free underwater image quality assessment metric in frequency domain[J]. Signal Processing: Image Communication, 2021, 94: 116218. doi: 10.1016/j.image.2021.116218.
[20] MITTAL A, SOUNDARARAJAN R, and BOVIK A C. Making a “completely blind” image quality analyzer[J]. IEEE Signal Processing Letters, 2013, 20(3): 209–212. doi: 10.1109/LSP.2012.2227726.
[21] IQBAL K, ODETAYO M, JAMES A, et al. Enhancing the low quality images using unsupervised colour correction method[C]. 2010 IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, 2010: 1703–1709. doi: 10.1109/ICSMC.2010.5642311.
[22] PENG Y T and COSMAN P C. Underwater image restoration based on image blurriness and light absorption[J]. IEEE Transactions on Image Processing, 2017, 26(4): 1579–1594. doi: 10.1109/TIP.2017.2663846.
[23] LI Chongyi, GUO Chunle, REN Wenqi, et al. An underwater image enhancement benchmark dataset and beyond[J]. IEEE Transactions on Image Processing, 2020, 29: 4376–4389. doi: 10.1109/TIP.2019.2955241.
[24] NAIK A, SWARNAKAR A, and MITTAL K. Shallow-UWnet: Compressed model for underwater image enhancement (student abstract)[C]. The Thirty-Fifth AAAI Conference on Artificial Intelligence, Palo Alto, USA, 2021: 15853–15854. doi: 10.1609/aaai.v35i18.17923.
[25] SALEH A, SHEAVES M, JERRY D, et al. Adaptive uncertainty distribution in deep learning for unsupervised underwater image enhancement[J]. arXiv preprint arXiv:2212.08983, 2022.
[26] LIU Risheng, FAN Xin, ZHU Ming, et al. Real-world underwater enhancement: Challenges, benchmarks, and solutions under natural light[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(12): 4861–4875. doi: 10.1109/TCSVT.2019.2963772.