Building Change Detection Data Pair Generation Technology for Multi-temporal Remote Sensing Imagery Based on Consistent Generative Adversarial Networks
doi: 10.11999/JEIT240720
1. Beijing Institute of Tracking and Communication Technology, Beijing 100094, China
2. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
Abstract: Although massive volumes of multi-temporal remote sensing data are now available, building changes unfold over long time cycles, so it is difficult to obtain enough building-change data pairs to support the construction of data-driven deep learning change detection models, which leads to poor accuracy in multi-temporal remote sensing building change detection. To improve the performance of change detection models, this paper studies the generation of training data pairs for building change detection and proposes a multi-temporal building change detection data pair generation network (BAG-GAN) based on a consistency adversarial generation mechanism. BAG-GAN applies an adversarial consistency loss as a constraint during multi-temporal image generation, which preserves the correlation between the generated and input images while retaining the generator's multimodal output capability. In addition, the change labels and multi-temporal images of the original dataset are recombined to further diversify the generated building-change information, alleviating the low proportion of effective building-change information in the training data and laying the foundation for fully training change detection models. Finally, data generation experiments were conducted on the LEVIR-CD and WHU-CD building change detection datasets, and several classic remote sensing change detection models were trained on the generated, augmented datasets. The experimental results show that the proposed BAG-GAN and the corresponding generation strategy can effectively improve the accuracy of change detection models.
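To make the recombination idea concrete, the following is a minimal illustrative sketch, not the authors' released code: it assumes a trained BAG-GAN-style generator with the signature `generator(img_t1, mask) -> img_t2` and recombines first-time images with change masks drawn from other samples to synthesize new bitemporal training pairs. All names and the generator interface are assumptions.

```python
import random
import torch

def recombine_pairs(images_t1, change_masks, generator, n_new):
    """Synthesize new bitemporal training pairs by recombining existing
    first-time images with change masks taken from *other* samples.

    images_t1    : list of (C, H, W) tensors, first-time images
    change_masks : list of (1, H, W) binary tensors, change labels
    generator    : assumed generator, generator(img_t1, mask) -> img_t2
    n_new        : number of synthetic pairs to create
    """
    new_pairs = []
    with torch.no_grad():
        for _ in range(n_new):
            img = random.choice(images_t1)       # pick a scene
            mask = random.choice(change_masks)   # pick an unrelated change layout
            fake_t2 = generator(img.unsqueeze(0), mask.unsqueeze(0)).squeeze(0)
            # (img, fake_t2, mask) forms a new labeled change-detection sample
            new_pairs.append((img, fake_t2, mask))
    return new_pairs
```

Because the mask is decoupled from the scene it originally annotated, each original image can yield many distinct synthetic pairs, which is what raises the proportion of effective change information in the training set.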
Keywords: Multi-temporal remote sensing / Building change detection / Generative adversarial network / Data pair generation
Abstract:

Objective  Building change detection is an essential task in urban planning, disaster management, environmental monitoring, and other critical applications. Advances in multi-temporal remote sensing technology provide vast amounts of data, enabling the monitoring of changes over large geographic areas and extended time frames. Nevertheless, significant challenges persist, particularly in acquiring sufficient labeled data pairs for training deep learning models. Building changes typically unfold over long temporal cycles, leading to a scarcity of the annotated data on which data-driven deep learning models depend. This scarcity severely limits the models' capacity to generalize and achieve high accuracy, particularly in complex and diverse scenarios, and the resulting poor generalization reduces the applicability of existing methods to practical tasks. To address these challenges, this study proposes a multi-temporal building change detection data pair generation network, referred to as BAG-GAN. The network leverages a consistency adversarial generation mechanism to create diverse and semantically consistent data pairs, with the aim of enriching training datasets and thereby enhancing the learning capacity of deep learning models for detecting building changes. By addressing the bottleneck of insufficient labeled data, BAG-GAN provides a new pathway for improving the accuracy and robustness of multi-temporal building change detection.

Methods  BAG-GAN integrates Generative Adversarial Networks (GANs) with a specially designed consistency constraint mechanism tailored to the generation of data pairs for multi-temporal building change detection. The core innovation lies in its adversarial consistency loss function, which ensures that the generated images maintain semantic consistency with the corresponding input images while reflecting realistic and diverse changes. This consistency constraint is crucial for preserving the integrity of the generated data and its relevance to real-world scenarios. The network comprises two main components, a generator and a discriminator, trained in tandem through an adversarial learning process: the generator produces realistic, semantically consistent multi-temporal image pairs, while the discriminator evaluates the quality of the generated data and guides the generator to improve iteratively. BAG-GAN is also equipped with multimodal output capabilities, enabling the generation of diverse building change data pairs; this diversity improves the robustness of deep learning models by exposing them to a wider range of scenarios during training. To further address the shortage of training data, the study incorporates a data augmentation strategy: the original datasets, LEVIR-CD and WHU-CD, are reorganized by recombining change labels with multi-temporal remote sensing images to create new synthetic datasets. These augmented datasets, together with the data generated by BAG-GAN, are used to train and evaluate several widely recognized deep learning models, including FC-EF and FC-Siam-Conc, and comparative experiments assess the contribution of BAG-GAN to model performance in multi-temporal building change detection.
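The abstract does not spell out the loss itself, so the following is a minimal sketch in the spirit of the adversarial consistency loss of ACL-GAN (Zhao et al., ECCV 2020), which the constraint described above resembles: rather than forcing pixel-identical cycle reconstructions, a dedicated discriminator judges whether a generated image still plausibly belongs with its source. The paired-input discriminator `d_acl` and the exact pairing scheme are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def adversarial_consistency_loss(d_acl, source, reconstruction):
    """Adversarial consistency sketch: d_acl scores (source, candidate)
    pairs, so the generator is pushed to keep reconstructions consistent
    with the source image without collapsing to a single deterministic
    output, preserving multimodal generation.
    d_acl is assumed to accept a 2C-channel input and return logits.
    """
    real_pair = torch.cat([source, source], dim=1)          # source paired with itself
    fake_pair = torch.cat([source, reconstruction], dim=1)  # source paired with reconstruction

    # Discriminator step: reconstruction is detached so only d_acl updates.
    logits_real = d_acl(real_pair)
    logits_fake_d = d_acl(fake_pair.detach())
    loss_d = (
        F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
        + F.binary_cross_entropy_with_logits(logits_fake_d, torch.zeros_like(logits_fake_d))
    )

    # Generator step: gradients flow back through the reconstruction.
    logits_fake_g = d_acl(fake_pair)
    loss_g = F.binary_cross_entropy_with_logits(logits_fake_g, torch.ones_like(logits_fake_g))
    return loss_d, loss_g
```

In a full training loop these two terms would be added, with weighting coefficients, to the usual GAN adversarial losses on the generated multi-temporal images.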
Results and Discussions  The experimental results demonstrate that BAG-GAN effectively addresses the challenge of insufficient labeled data in building change detection tasks. Models trained on the augmented datasets, which included BAG-GAN-generated data, achieved significant improvements in detection accuracy and robustness. For instance, classic models such as FC-EF and FC-Siam-Conc showed substantial performance gains when trained on the augmented datasets compared with their performance on the original datasets. These improvements validate the effectiveness of BAG-GAN in generating high-quality training data. BAG-GAN also excelled at producing diverse, multimodal building change data pairs. Visual comparisons between the generated data and the original datasets highlighted the network's ability to create realistic and varied data, effectively enhancing the diversity of training datasets. This diversity is critical for addressing the imbalance in existing datasets, where effective building change information is underrepresented. By increasing the proportion of relevant change information in the training data, BAG-GAN improves the learning conditions for deep learning models, enabling them to generalize better across different scenarios. Further analysis revealed that BAG-GAN significantly enhances the ability of detection models to localize changes and recover fine-grained details of building modifications, which is particularly evident in complex scenarios involving subtle or small-scale changes. The adversarial consistency loss function plays a pivotal role in ensuring the semantic relevance of the generated data, making BAG-GAN a reliable tool for data augmentation in remote sensing applications. Moreover, the network's ability to generate high-quality, multimodal data pairs makes it applicable to a wide range of remote sensing tasks beyond building change detection.

Conclusions  This study introduces BAG-GAN, a multi-temporal building change detection data pair generation network designed to overcome the limitation of insufficient labeled data in remote sensing. The network incorporates an adversarial consistency loss function, which ensures that the generated data are both semantically consistent and diverse. By leveraging a consistency adversarial generation mechanism, BAG-GAN enhances the quality and diversity of training datasets, addressing a key bottleneck in multi-temporal building change detection tasks. In experiments on the LEVIR-CD and WHU-CD datasets, BAG-GAN significantly improved the performance of classic remote sensing change detection models such as FC-EF and FC-Siam-Conc. The results highlight the network's effectiveness in generating high-quality data pairs that enhance model training and detection accuracy. This research provides a robust methodological framework for improving multi-temporal building change detection and offers a foundational tool for broader remote sensing applications, paving the way for future advances in change detection techniques.
Table 1  Model performance improvement on LEVIR-CD (trained with 20% and 100% of the data); entries are Prec/Rec/IoU

Change detection model     LEVIR-CD (20%)       LEVIR-CD (100%)
FC-EF                      0.689/0.696/0.595    0.769/0.682/0.620
  + augmentation transforms  0.683/0.654/0.511  0.771/0.665/0.593
  + BAG-GAN                0.863/0.641/0.611    0.875/0.757/0.701
FC-Siam-Conc               0.615/0.709/0.541    0.696/0.802/0.628
  + augmentation transforms  0.609/0.698/0.531  0.667/0.735/0.609
  + BAG-GAN                0.894/0.711/0.668    0.922/0.741/0.691
FC-Siam-Diff               0.581/0.690/0.495    0.654/0.787/0.586
  + augmentation transforms  0.573/0.634/0.487  0.647/0.772/0.557
  + BAG-GAN                0.866/0.618/0.564    0.889/0.781/0.737
SNUNet                     0.940/0.916/0.872    0.956/0.951/0.914
  + augmentation transforms  0.913/0.920/0.865  0.903/0.944/0.887
  + BAG-GAN                0.933/0.938/0.876    0.961/0.958/0.924
Table 2  Model performance improvement on WHU-CD (trained with 20% and 100% of the data); entries are Prec/Rec/IoU
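For reference, the Prec/Rec/IoU entries above are the standard pixel-level metrics for binary change maps. The sketch below shows how they are typically computed from a predicted map and a ground-truth label; the function name and the epsilon guard are illustrative choices, not taken from the paper.

```python
import numpy as np

def change_detection_metrics(pred, target, eps=1e-8):
    """Pixel-level precision, recall, and IoU for binary change maps.
    pred, target: numpy arrays of 0/1 values with identical shape."""
    tp = np.logical_and(pred == 1, target == 1).sum()  # changed, detected
    fp = np.logical_and(pred == 1, target == 0).sum()  # unchanged, flagged
    fn = np.logical_and(pred == 0, target == 1).sum()  # changed, missed
    prec = tp / (tp + fp + eps)
    rec = tp / (tp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    return prec, rec, iou
```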