
Deep Network for Joint Multi-exposure Fusion and Image Deblur

ZHANG Mei, ZHAO Kangwei, ZHU Jinhui

Citation: ZHANG Mei, ZHAO Kangwei, ZHU Jinhui. Deep Network for Joint Multi-exposure Fusion and Image Deblur[J]. Journal of Electronics & Information Technology, 2024, 46(11): 4219-4228. doi: 10.11999/JEIT240113


doi: 10.11999/JEIT240113
Details
    Author biographies:

    ZHANG Mei: Female, Associate Professor. Research interests: optimization and scheduling, intelligent algorithms and simulation, graphics processing.

    ZHAO Kangwei: Male, Master's student. Research interests: image fusion, deep learning.

    ZHU Jinhui: Male, Associate Professor. Research interests: computer application technology.

    Corresponding author: ZHU Jinhui, csjhzhu@scut.edu.cn

  • CLC number: TN911.73; TP391

Deep Network for Joint Multi-exposure Fusion and Image Deblur

Funds: The National Natural Science Foundation of China (62071184)
  • Abstract: Multi-exposure image fusion extends the dynamic range of images and thereby yields high-quality results. For the blurred long-exposure images captured in fast-motion scenes such as autonomous driving, directly fusing them with low-exposure images using generic fusion methods produces images of low quality, and no end-to-end method currently exists for fusing long- and short-exposure images affected by motion blur. This paper therefore proposes a deep network for joint multi-exposure fusion and image deblurring (DF-Net) that solves the fusion of motion-blurred long- and short-exposure images end to end. The method introduces a residual module combined with the wavelet transform to build the encoder and decoder: a single encoder extracts features from the short-exposure image, a multi-level encoder-decoder structure extracts features from the blurred long-exposure image, a residual mean excitation fusion module fuses the long- and short-exposure features, and the decoder finally reconstructs the image. Because no benchmark exists, a multi-exposure fusion dataset with motion blur was created from the SICE dataset for training and testing. The proposed model is compared qualitatively and quantitatively with step-wise combinations of state-of-the-art deblurring and multi-exposure fusion methods, verifying its superiority for motion-blurred multi-exposure image fusion. Further validation on multi-exposure data captured from a moving vehicle demonstrates the method's effectiveness on real-world problems.
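The abstract's wavelet residual module builds the encoder and decoder on a wavelet decomposition of the feature maps. As a rough illustration only (not the paper's code), a single-level 2-D Haar transform, which splits an image into a low-frequency LL band and three high-frequency detail bands, can be sketched in numpy; `haar_dwt2` is a hypothetical helper name and this uses the unnormalized averaging/difference variant:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar wavelet transform of a 2-D array with even
    dimensions; returns the (LL, LH, HL, HH) subbands."""
    a, b = x[0::2, :], x[1::2, :]          # pair up rows
    lo, hi = (a + b) / 2.0, (a - b) / 2.0  # row-wise average / difference

    def split_cols(y):
        c, d = y[:, 0::2], y[:, 1::2]      # pair up columns
        return (c + d) / 2.0, (c - d) / 2.0

    LL, LH = split_cols(lo)                # low-pass rows -> LL, LH
    HL, HH = split_cols(hi)                # high-pass rows -> HL, HH
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)            # each subband is 2x2

# A constant image carries all its energy in the LL band:
flat = np.ones((4, 4))
LLc, LHc, HLc, HHc = haar_dwt2(flat)
```

Decomposing features this way lets a network process the blur-sensitive high-frequency bands separately from the low-frequency content, which is one plausible motivation for combining residual blocks with the wavelet transform.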
  • Figure 1  DF-Net network architecture

    Figure 2  Structure of the encoder and decoder

    Figure 3  Structure of the wavelet residual module

    Figure 4  Structure of the residual mean excitation fusion module

    Figure 5  Long- and short-exposure images under fast motion and their frequency-domain maps

    Figure 6  A sharp image with its spectrum, and the motion-blurred image with its spectrum

    Figure 7  Multi-exposure image groups with blur

    Figure 8  Examples from the test set

    Figure 9  Comparison of DF-Net with methods under the "Deblur+MEF" strategy on the "Tower" image of the blurred multi-exposure dataset

    Figure 10  Comparison of DF-Net with methods under the "MEF+Deblur" strategy on the "Forest" image of the blurred multi-exposure dataset

    Figure 11  Real captured blurred multi-exposure data and fusion results

    Figure 12  Image comparison in the module ablation study

    Table 1  PSNR and SSIM of DF-Net versus the best methods under the Deblur+MEF strategy

    | Deblur \ MEF  | DPE-MEF[15]     | IFCNN[16]       | MEFNet[17]      | U2Fusion[18]    |
    |               | PSNR    SSIM    | PSNR    SSIM    | PSNR    SSIM    | PSNR    SSIM    |
    | DMPHN[12]     | 18.0120 0.8226  | 19.4700 0.8135  | 16.6300 0.7460  | 18.0759 0.7009  |
    | MIMO-UNet[13] | 18.1389 0.8357  | 19.8032 0.8355  | 17.0268 0.7748  | 18.2692 0.7161  |
    | DeepRFT[14]   | 19.0529 0.9128  | 20.5174 0.9060  | 18.1546 0.8708  | 18.7607 0.7529  |
    | DF-Net        | PSNR = 21.7126, SSIM = 0.9246                                         |

    Table 2  PSNR and SSIM of DF-Net versus the best methods under the MEF+Deblur strategy

    | MEF \ Deblur  | DPE-MEF[15]     | IFCNN[16]       | MEFNet[17]      | U2Fusion[18]    |
    |               | PSNR    SSIM    | PSNR    SSIM    | PSNR    SSIM    | PSNR    SSIM    |
    | DMPHN[12]     | 18.2734 0.7998  | 19.7014 0.8564  | 18.4155 0.7781  | 17.4492 0.6050  |
    | MIMO-UNet[13] | 20.0896 0.8731  | 20.1879 0.8761  | 18.6014 0.7971  | 19.5630 0.8150  |
    | DeepRFT[14]   | 19.9133 0.8716  | 19.7040 0.8859  | 18.7793 0.8191  | 19.9182 0.8096  |
    | DF-Net        | PSNR = 21.7126, SSIM = 0.9246                                         |
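Tables 1 and 2 score all methods by PSNR and SSIM against the ground-truth fused image. For reference, PSNR has a closed-form definition; the sketch below is the generic textbook formula, not the evaluation script used in the paper:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the test image
    is closer to the reference. `peak` is the maximum pixel value."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 16.0        # uniform error of 16 gray levels
val = psnr(ref, noisy)    # about 24.05 dB on the 8-bit range
```

On this scale the roughly 1 to 3 dB gap between DF-Net (21.7126 dB) and the two-stage pipelines corresponds to a noticeably lower mean squared error.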

    Table 3  FLOPs and Params of DF-Net versus other methods at 256p

    | Deblur \ MEF  | DPE-MEF[15]    | IFCNN[16]     | MEFNet[17]    | U2Fusion[18]  |
    |               | FLOPs  Params  | FLOPs  Params | FLOPs  Params | FLOPs  Params |
    | DMPHN[12]     | 106.61 19.86   | 84.32  6.97   | 77.69  6.92   | 118.14 7.52   |
    | MIMO-UNet[13] | 180.95 28.33   | 158.66 15.44  | 152.03 15.39  | 192.48 15.99  |
    | DeepRFT[14]   | 34.82  13.13   | 12.53  0.24   | 5.90   0.19   | 46.35  0.79   |
    | DF-Net        | FLOPs = 2.01, Params = 0.28                                    |

    Table 4  Module ablation comparison

    | Setting | Wavelet residual module | RMEFB | PSNR    | SSIM   |
    | Exp. 1  | ×                       | ×     | 21.2161 | 0.9124 |
    | Exp. 2  | ✓                       | ×     | 21.3521 | 0.9172 |
    | Exp. 3  | ×                       | ✓     | 21.6024 | 0.9196 |
    | DF-Net  | ✓                       | ✓     | 21.7126 | 0.9246 |
  • [1] LI Shutao and KANG Xudong. Fast multi-exposure image fusion with median filter and recursive filter[J]. IEEE Transactions on Consumer Electronics, 2012, 58(2): 626–632. doi: 10.1109/TCE.2012.6227469.
    [2] MERTENS T, KAUTZ J, and VAN REETH F. Exposure fusion[C]. The 15th Pacific Conference on Computer Graphics and Applications, Maui, USA, 2007: 382–390. doi: 10.1109/PG.2007.17.
    [3] ZHANG Hao and MA Jiayi. IID-MEF: A multi-exposure fusion network based on intrinsic image decomposition[J]. Information Fusion, 2023, 95: 326–340. doi: 10.1016/j.inffus.2023.02.031.
    [4] LI Jiawei, LIU Jinyuan, ZHOU Shihua, et al. Learning a coordinated network for detail-refinement multiexposure image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(2): 713–727. doi: 10.1109/TCSVT.2022.3202692.
    [5] KIM T H, AHN B, and LEE K M. Dynamic scene deblurring[C]. 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 2013: 3160–3167. doi: 10.1109/ICCV.2013.392.
    [6] YANG Aiping, LI Leilei, ZHANG Bing, et al. Fast image deblurring based on the lightweight progressive residual network[J]. Journal of Electronics & Information Technology, 2022, 44(5): 1674–1682. doi: 10.11999/JEIT210298.
    [7] TSAI F J, PENG Y T, LIN Y Y, et al. Stripformer: Strip transformer for fast image deblurring[C]. The 17th European Conference on Computer Vision, Tel Aviv, Israel, 2022: 146–162. doi: 10.1007/978-3-031-19800-7_9.
    [8] CHEN Liangyu, CHU Xiaojie, ZHANG Xiangyu, et al. Simple baselines for image restoration[C]. The 17th European Conference on Computer Vision, Tel Aviv, Israel, 2022: 17–33. doi: 10.1007/978-3-031-20071-7_2.
    [9] ZAMIR S W, ARORA A, KHAN S, et al. Multi-stage progressive image restoration[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 14821–14831. doi: 10.1109/CVPR46437.2021.01458.
    [10] HU Jie, SHEN Li, and SUN Gang. Squeeze-and-excitation networks[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7132–7141. doi: 10.1109/CVPR.2018.00745.
    [11] SODANO M, MAGISTRI F, GUADAGNINO T, et al. Robust double-encoder network for RGB-D panoptic segmentation[C]. 2023 IEEE International Conference on Robotics and Automation, London, UK, 2023: 4953–4959. doi: 10.1109/ICRA48891.2023.10160315.
    [12] ZHANG Hongguang, DAI Yuchao, LI Hongdong, et al. Deep stacked hierarchical multi-patch network for image deblurring[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 5978–5986. doi: 10.1109/CVPR.2019.00613.
    [13] CHO S J, JI S W, HONG J P, et al. Rethinking coarse-to-fine approach in single image deblurring[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 4641–4650. doi: 10.1109/ICCV48922.2021.00460.
    [14] MAO Xintian, LIU Yiming, LIU Fengze, et al. Intriguing findings of frequency selection for image deblurring[C]. Proceedings of the 37th AAAI Conference on Artificial Intelligence, Washington, USA, 2023: 1905–1913. doi: 10.1609/aaai.v37i2.25281.
    [15] HAN Dong, LI Liang, GUO Xiaojie, et al. Multi-exposure image fusion via deep perceptual enhancement[J]. Information Fusion, 2022, 79: 248–262. doi: 10.1016/j.inffus.2021.10.006.
    [16] ZHANG Yu, LIU Yu, SUN Peng, et al. IFCNN: A general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99–118. doi: 10.1016/j.inffus.2019.07.011.
    [17] MA Kede, DUANMU Zhengfang, ZHU Hanwei, et al. Deep guided learning for fast multi-exposure image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 2808–2819. doi: 10.1109/TIP.2019.2952716.
    [18] XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502–518. doi: 10.1109/TPAMI.2020.3012548.
Figures (12) / Tables (4)
Publication history
  • Received: 2024-02-28
  • Revised: 2024-10-08
  • Published online: 2024-10-12
  • Issue date: 2024-11-10
