Meteorological Radar Noise Image Semantic Segmentation Method Based on Deep Convolutional Neural Network

doi: 10.11999/JEIT190098

College of Computer Science and Technology, Civil Aviation University of China, Tianjin 300300, China
Abstract: The scattering echo images produced by new-generation Doppler meteorological radar are contaminated by non-rainfall and other noise echoes, which degrades the accuracy of refined short-term weather forecasting. To address this problem, a semantic segmentation method for meteorological radar noise images based on a Deep Convolutional Neural Network (DCNN) is proposed. First, a Deep Convolutional Neural Network Model (DCNNM) is designed and trained on the training set of the MJDATA dataset; features are extracted in the forward pass, and high-dimensional global semantic information of the image is fused with local feature details. Then, the training error is back-propagated to iteratively update the network parameters and drive the model toward its best convergence. Finally, the trained model is used to segment meteorological radar image data. Experimental results show that the proposed method denoises meteorological radar images effectively: compared with the optical flow method, Fully Convolutional Networks (FCN), and related methods, it recognizes real echoes and noise echoes in meteorological radar images more accurately and achieves higher pixel accuracy.

Keywords:
- meteorological radar
- deep learning
- image semantic segmentation
- image denoising
- convolutional neural network
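The paper's code is not reproduced on this page. As a loose illustration of the pipeline the abstract describes (features extracted in a forward pass, high-level global semantics fused with local detail, then per-pixel classification), a minimal PyTorch sketch is given below; the layer sizes, the two-class output (rainfall vs. noise echo), and all identifiers are illustrative assumptions, not the authors' DCNNM.

```python
# Minimal PyTorch sketch of an encoder-decoder segmentation network that fuses
# deep (global, semantic) features with shallow (local, detailed) features.
# Layer sizes and the 2-class output (rainfall vs. noise echo) are illustrative
# assumptions, not the DCNNM architecture from the paper.
import torch
import torch.nn as nn


class ToySegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shallow block: keeps local detail at the input resolution.
        self.enc1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Deep block: downsampled, larger receptive field -> global semantics.
        self.enc2 = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Upsample deep features back to the shallow resolution.
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        # Fuse (concatenate) global and local features, then classify per pixel.
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.enc1(x)                     # local feature detail
        global_feat = self.up(self.enc2(local_feat))  # global semantics, upsampled
        fused = torch.cat([local_feat, global_feat], dim=1)
        return self.head(fused)                       # per-pixel class scores


if __name__ == "__main__":
    scores = ToySegNet()(torch.randn(1, 3, 256, 256))
    print(scores.shape)  # torch.Size([1, 2, 256, 256])
```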
Table 1  Characteristics of the four types of noise echo

| Noise echo type | Shape | Height (km) | Intensity (dBZ) |
| --- | --- | --- | --- |
| Temperature-inversion-layer echo | Fairly uniformly distributed block-shaped echo; large extent; clear edges | 5–6 | 10–30 |
| Trickle echo | Fairly uniformly distributed semicircular echo; large extent; clear edges | 6–7 | 5–15 |
| Low-altitude insect echo | Unevenly distributed point-like echoes; small extent; scattered | 2–3 | 0–10 |
| Morphological noise echo | Unevenly distributed point-like or patchy echoes; relatively small extent; scattered | 3–4 | 5–20 |
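For readers who want to reuse Table 1 programmatically, for example when preparing manual annotations, the characteristics can be captured in a small lookup structure; the English class and field names below are illustrative renderings of the table entries, not identifiers used by the paper.

```python
# Table 1 encoded as a lookup: shape description, height range (km) and
# reflectivity range (dBZ) for each noise-echo class. Class and field names are
# illustrative English renderings, not identifiers defined by the paper.
NOISE_ECHO_CLASSES = {
    "temperature_inversion_echo": {
        "shape": "fairly uniform block-shaped echo, large extent, clear edges",
        "height_km": (5, 6),
        "intensity_dbz": (10, 30),
    },
    "trickle_echo": {
        "shape": "fairly uniform semicircular echo, large extent, clear edges",
        "height_km": (6, 7),
        "intensity_dbz": (5, 15),
    },
    "low_altitude_insect_echo": {
        "shape": "unevenly distributed point echoes, small extent, scattered",
        "height_km": (2, 3),
        "intensity_dbz": (0, 10),
    },
    "morphological_noise_echo": {
        "shape": "unevenly distributed point or patch echoes, small extent, scattered",
        "height_km": (3, 4),
        "intensity_dbz": (5, 20),
    },
}
```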
Table 2  Model training parameter settings

| Training parameter | Value |
| --- | --- |
| Network learning rate | 10⁻⁸ |
| Weight decay coefficient | 0.001 |
| Momentum coefficient | 0.91 |
| Perception masking (dropout) ratio | 0.5 |
| Batch size | 4 |
| Maximum number of iterations | 10000 |
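As a hedged sketch, the settings in Table 2 map onto a standard SGD configuration as follows. The optimizer choice and the reading of the 0.5 "perception masking" entry as a dropout probability are assumptions; only the numeric values come from the table, and the tiny model is a stand-in.

```python
# One way to wire up the Table 2 hyper-parameters in PyTorch. The optimizer
# choice (plain SGD with momentum) and the use of nn.Dropout for the 0.5
# "perception masking" entry are assumptions; only the numeric values come
# from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(              # stand-in for the segmentation network
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),              # "perception masking" ratio 0.5
    nn.Conv2d(16, 2, 1),
)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-8,                        # network learning rate
    momentum=0.91,                  # momentum coefficient
    weight_decay=0.001,             # weight decay coefficient
)

BATCH_SIZE = 4                      # batch size
MAX_ITERS = 10000                   # maximum number of training iterations
```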
Table 3  Cross-validation counts for meteorological radar image denoising

| Machine-denoised label \ Manual label | 255 (rainfall) | 128 (noise) |
| --- | --- | --- |
| 255 (rainfall) | A: pixels manually labeled as rainfall and also labeled as rainfall after machine denoising | B: pixels manually labeled as noise but labeled as rainfall after machine denoising |
| 128 (noise) | C: pixels manually labeled as rainfall but labeled as noise after machine denoising | D: pixels manually labeled as noise and also labeled as noise after machine denoising |
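The TERACC, NERACC, and PA values reported in Tables 4 and 5 are not given as formulas on this page. A plausible reading, sketched below, is that they are ratios of the Table 3 counts: true-echo recognition accuracy, noise-echo recognition accuracy, and overall pixel accuracy. These formulas are an interpretation, not equations quoted from the paper.

```python
# Hedged sketch: plausible definitions of TERACC, NERACC and PA in terms of the
# Table 3 counts. The formulas are an interpretation (true-echo accuracy,
# noise-echo accuracy, overall pixel accuracy), not quoted from the paper.
def denoising_metrics(A: int, B: int, C: int, D: int) -> dict:
    """A: manual rain & machine rain, B: manual noise but machine rain,
    C: manual rain but machine noise, D: manual noise & machine noise."""
    teracc = A / (A + C)            # share of true (rain) echo pixels kept
    neracc = D / (B + D)            # share of noise echo pixels removed
    pa = (A + D) / (A + B + C + D)  # overall pixel accuracy
    return {"TERACC": teracc, "NERACC": neracc, "PA": pa}


# Example with made-up counts:
print(denoising_metrics(A=900, B=50, C=100, D=950))
```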
Table 4  Comparison of test results for the four models (%)

| Dataset | Method | TERACC | NERACC | PA |
| --- | --- | --- | --- | --- |
| MJDATA (5000) | Optical flow | 88.21 | 59.03 | 73.39 |
| | FCN | 91.68 | 79.61 | 85.43 |
| | Optical flow + FCN | 92.60 | 73.91 | 78.17 |
| | Model1 | 93.65 | 81.65 | 96.75 |
Table 5  Comparison of test results for the four models (%)

| Dataset | Method | TERACC | NERACC | PA |
| --- | --- | --- | --- | --- |
| MJDATA (7473) | DeepLab v3 | 88.57 | 81.65 | 91.75 |
| | ShelfNet | 86.92 | 84.34 | 90.51 |
| | Mask R-CNN | 89.66 | 85.20 | 93.63 |
| | Model2 | 90.40 | 84.36 | 92.79 |
References

[1] YANG Zhizong. Doppler effect and Doppler radar[J]. Physics Bulletin, 2003(2): 47–48. doi: 10.3969/j.issn.0509-4038.2003.02.027
[2] NAGAYAMA S, MURAMATSU S, YAMADA H, et al. Millimeter wave radar image denoising with complex nonseparable oversampled lapped transform[C]. 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia, 2017: 1824–1829.
[3] WU Peng, XU Hongling, and XIE Pengcheng. Research on ground penetrating radar image denoising using nonsubsampled contourlet transform and adaptive threshold algorithm[J]. International Journal of Signal Processing, Image Processing and Pattern Recognition, 2016, 9(5): 219–228. doi: 10.14257/ijsip.2016.9.5.19
[4] MASTRIANI M. Denoising based on wavelets and deblurring via self-organizing map for Synthetic Aperture Radar images[J]. International Scholarly and Scientific Research & Innovation, 2008, 2(9): 2073–2082.
[5] WANG Jun and YANG Chenglong. Radar image denoising model based on wavelet analysis and variation principle[J]. Command Control & Simulation, 2017, 39(5): 41–44. doi: 10.3969/j.issn.1673-3819.2017.05.009
[6] CHEN Chong and XU Zengbo. Aerial-image denoising based on convolutional neural network with multi-scale residual learning approach[J]. Information, 2018, 9(7): 169. doi: 10.3390/info9070169
[7] DONG Xiaoya, ZHAO Xiaoli, and ZHANG Jiaqi. An improved semantic segmentation method for noisy image[J]. Journal of Optoelectronics·Laser, 2017, 28(12): 1372–1377. doi: 10.16136/j.joel.2017.12.0103
[8] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834–848. doi: 10.1109/TPAMI.2017.2699184
[9] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs[C]. International Conference on Learning Representations, San Diego, USA, 2015.
[10] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. The 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1097–1105.
[11] ACHILLE A and SOATTO S. Information dropout: Learning optimal representations through noisy computation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(12): 2897–2905. doi: 10.1109/TPAMI.2017.2784440
[12] GUO Zhenghong, ZHANG Junhua, GUO Xiaopeng, et al. Seam Carving image scaling method with visual significant graph[J]. Journal of Yunnan University: Natural Sciences Edition, 2018, 40(2): 222–227.
[13] YUE Xin and XIAO Chen. Improvement of image scaling algorithm based on singular value decomposition and bicubic interpolation[J]. Journal of Xi'an University of Posts and Telecommunications, 2018, 23(4): 72–77. doi: 10.13682/j.issn.2095-6533.2018.04.012
[14] KOMAR M, YAKOBCHUK P, GOLOVKO V, et al. Deep neural network for image recognition based on the Caffe framework[C]. The Second IEEE International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 2018: 102–106.
[15] DOSOVITSKIY A, FISCHER P, ILG E, et al. FlowNet: Learning optical flow with convolutional networks[C]. IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015: 2758–2766.
[16] LONG J, SHELHAMER E, and DARRELL T. Fully convolutional networks for semantic segmentation[C]. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, USA, 2015: 3431–3440.
[17] CHEN L C, PAPANDREOU G, SCHROFF F, et al. Rethinking atrous convolution for semantic image segmentation[EB/OL]. https://arxiv.org/abs/1706.05587, 2017.
[18] ZHUANG Juntang and YANG Junlin. ShelfNet for real-time semantic segmentation[EB/OL]. https://arxiv.org/abs/1811.11254v1, 2018.
[19] HE Kaiming, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 2980–2988.