Research on Blind Super-resolution Reconstruction with Double Discriminator

盧迪 于國(guó)梁

盧迪, 于國(guó)梁. 雙鑒別器盲超分重建方法研究[J]. 電子與信息學(xué)報(bào), 2024, 46(1): 277-286. doi: 10.11999/JEIT221502
引用本文: 盧迪, 于國(guó)梁. 雙鑒別器盲超分重建方法研究[J]. 電子與信息學(xué)報(bào), 2024, 46(1): 277-286. doi: 10.11999/JEIT221502
LU Di, YU Guoliang. Research on Blind Super-resolution Reconstruction with Double Discriminator[J]. Journal of Electronics & Information Technology, 2024, 46(1): 277-286. doi: 10.11999/JEIT221502
Citation: LU Di, YU Guoliang. Research on Blind Super-resolution Reconstruction with Double Discriminator[J]. Journal of Electronics & Information Technology, 2024, 46(1): 277-286. doi: 10.11999/JEIT221502


doi: 10.11999/JEIT221502
詳細(xì)信息
    作者簡(jiǎn)介:

    盧迪:女,教授,博士,研究方向?yàn)閿?shù)據(jù)融合、圖像處理

    于國(guó)梁:男,碩士生,研究方向?yàn)閳D像處理、超分辨率重建

    通訊作者:

    盧迪 ludizeng@hrbust.edu.cn

  • 中圖分類號(hào): TN911.73; TP391


  • Abstract: Image super-resolution reconstruction plays an important role in public security inspection, satellite imaging, medicine, and photo restoration. This paper studies super-resolution reconstruction methods based on Generative Adversarial Networks (GANs) and proposes DU3-Real-ESRGAN (Double UNet3+ Real-ESRGAN), a UNet3+ double-discriminator extension of Real-ESRGAN, the real-world blind super-resolution algorithm trained purely on synthetic data. First, the UNet3+ structure is introduced into the discriminator to capture fine-grained details and coarse-grained semantics at full scale. Second, a double-discriminator structure is adopted: one discriminator learns image texture details while the other focuses on image edges, so that the two provide complementary image information. Compared with several GAN-based super-resolution methods on the Set5, Set14, BSD100, and Urban100 datasets, DU3-Real-ESRGAN achieves better Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM), and no-reference Natural Image Quality Evaluator (NIQE) scores than the other methods on all datasets except Set5, and produces more visually realistic high-resolution images.
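The page gives no implementation details, but the double-discriminator idea in the abstract can be illustrated with a short sketch. The PyTorch code below is an assumption-laden illustration, not the authors' released code: `d_texture` and `d_edge` stand in for the two UNet3+-style discriminators, the Sobel operator is used here as one plausible edge extractor, and a plain non-saturating GAN loss is assumed.

```python
# Illustrative sketch only (assumed PyTorch setup): `d_texture` and `d_edge` are
# placeholders for the two UNet3+-style discriminators described in the abstract,
# and the Sobel operator below is just one plausible choice of edge extractor.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Single-channel edge map for a batch of RGB images shaped (B, 3, H, W)."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(d_texture, d_edge, sr, hr):
    """Both discriminators classify real HR images against generated SR images."""
    loss = 0.0
    for disc, real, fake in ((d_texture, hr, sr.detach()),
                             (d_edge, sobel_edges(hr), sobel_edges(sr.detach()))):
        pred_real, pred_fake = disc(real), disc(fake)
        loss = loss + bce(pred_real, torch.ones_like(pred_real)) \
                    + bce(pred_fake, torch.zeros_like(pred_fake))
    return loss

def generator_adversarial_loss(d_texture, d_edge, sr):
    """Adversarial term for the generator: fool the texture and the edge discriminator."""
    p_tex, p_edge = d_texture(sr), d_edge(sobel_edges(sr))
    return bce(p_tex, torch.ones_like(p_tex)) + bce(p_edge, torch.ones_like(p_edge))
```

In a full training loop this adversarial term would be combined with the pixel (L1) and perceptual losses that Real-ESRGAN already uses; the relative weighting of the two discriminators is not stated on this page and would need tuning.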
  • Figure 1  Real-ESRGAN generator network structure

    Figure 2  Real-ESRGAN discriminator network structure

    Figure 3  UNet++ and UNet3+ network structures

    Figure 4  Structure of the UNet3+ decoder (a code sketch of its full-scale fusion follows this list)

    Figure 5  DU3-Real-ESRGAN network structure

    Figure 6  Comparison of HR and LR images from the DIV2K dataset

    Figure 7  Comparison on the Set5 dataset

    Figure 8  Comparison on the BSD100 dataset

    Figure 9  Comparison of different algorithms on the Set14 dataset

    Figure 10  Comparison of different algorithms on the Urban100 dataset
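Figures 3 and 4 refer to the UNet3+ structure used inside the discriminators. As a rough, assumption-based illustration of the full-scale skip connections that let one decoder stage see every encoder scale at once, the sketch below shows a single decoder stage; the layer widths, activation, and resizing scheme are chosen for illustration and are not the paper's exact configuration.

```python
# Rough sketch of a UNet3+-style decoder stage with full-scale skip connections
# (assumed PyTorch; layer widths and the resizing scheme are illustrative, not the
# paper's exact configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullScaleDecoderStage(nn.Module):
    """Fuse feature maps coming from every encoder/decoder scale at one decoder level."""

    def __init__(self, in_channels, branch_channels=64):
        super().__init__()
        # One 3x3 conv per incoming scale, mapping it to a common channel width.
        self.branches = nn.ModuleList(
            nn.Conv2d(c, branch_channels, kernel_size=3, padding=1) for c in in_channels
        )
        fused = branch_channels * len(in_channels)
        self.fuse = nn.Sequential(
            nn.Conv2d(fused, fused, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, features, out_size):
        # Resize every incoming map to this stage's resolution (UNet3+ uses max-pooling
        # for downscaling and bilinear upsampling; a single interpolate is used here for
        # brevity), project to a common width, then concatenate and fuse.
        resized = [conv(F.interpolate(f, size=out_size, mode="bilinear", align_corners=False))
                   for conv, f in zip(self.branches, features)]
        return self.fuse(torch.cat(resized, dim=1))
```

For example, with four encoder maps of 64, 128, 256, and 512 channels, `FullScaleDecoderStage([64, 128, 256, 512])` produces a 256-channel map at the requested resolution, mixing fine-grained detail from shallow scales with coarse-grained semantics from deep ones.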

    Table 1  PSNR/SSIM comparison (PSNR in dB / SSIM; higher is better)

    Dataset  | SRGAN       | EDSR         | ESRGAN       | Real-ESRGAN | U3-Real-ESRGAN | DU3-Real-ESRGAN
    Set5     | 28.99/0.791 | 28.80/0.787  | 28.81/0.7868 | 30.52/0.878 | 30.01/0.868    | 30.24/0.870
    Set14    | 27.03/0.815 | 26.64/0.803  | 27.13/0.741  | 28.71/0.830 | 28.55/0.845    | 29.57/0.847
    BSD100   | 27.85/0.745 | 28.34/0.827  | 27.33/0.808  | 29.14/0.855 | 29.25/0.851    | 30.19/0.859
    Urban100 | 27.45/0.825 | 27.71/0.7420 | 27.29/0.836  | 28.82/0.850 | 29.15/0.795    | 30.05/0.857
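For reference, PSNR/SSIM values like those in Table 1 can be computed as below. This is a generic sketch assuming scikit-image; the paper's exact evaluation protocol (color space, border cropping, etc.) is not stated on this page and may differ, so absolute numbers will not necessarily match the table.

```python
# Generic PSNR/SSIM evaluation sketch (assumes scikit-image); the paper's exact
# protocol -- color space, border cropping, etc. -- may differ from this.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(sr: np.ndarray, hr: np.ndarray):
    """PSNR (dB) and SSIM between an SR result and its HR ground truth (uint8, H x W x 3)."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, data_range=255, channel_axis=-1)
    return psnr, ssim

def evaluate_dataset(pairs):
    """Average PSNR/SSIM over an iterable of (sr, hr) image pairs, as reported per dataset."""
    scores = np.array([evaluate_pair(sr, hr) for sr, hr in pairs])
    return scores.mean(axis=0)  # (mean PSNR, mean SSIM)
```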

    Table 2  NIQE comparison (lower is better)

    Dataset  | SRGAN  | EDSR   | ESRGAN | Real-ESRGAN | U3-Real-ESRGAN | DU3-Real-ESRGAN
    Set5     | 5.6712 | 5.1372 | 4.5806 | 3.5064      | 3.6021         | 3.8400
    Set14    | 7.5593 | 5.1588 | 4.4096 | 3.5413      | 3.5332         | 3.5168
    BSD100   | 7.3413 | 6.2715 | 3.8172 | 3.6916      | 3.2675         | 3.2474
    Urban100 | 7.1089 | 6.5632 | 4.1996 | 3.9290      | 3.4543         | 3.3993
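NIQE is a no-reference metric, so it needs no ground-truth image; lower scores indicate a more natural-looking result. One way to compute it in Python is the third-party pyiqa package, sketched below; this is an assumption about tooling, since the implementation the authors used (for example, the original MATLAB code) is not stated here, and different implementations can give slightly different absolute values.

```python
# Assumed tooling: the third-party `pyiqa` package (IQA-PyTorch). The paper's own
# NIQE implementation is not stated, and different implementations may give
# slightly different absolute scores.
import torch
import pyiqa

niqe_metric = pyiqa.create_metric('niqe')  # no-reference metric; lower is better

def niqe_score(img: torch.Tensor) -> float:
    """img: (1, 3, H, W) float tensor with values in [0, 1]."""
    with torch.no_grad():
        return niqe_metric(img).item()
```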
Publication history
  • Received: 2022-12-02
  • Revised: 2023-09-13
  • Available online: 2023-09-15
  • Published: 2024-01-17
