LEE - Telecommunication Engineering - Master's Program
Pansharpening using generative adversarial networks with dual discriminators (Graduate School, 2023-01-24) Cesur, Nahide Nesli ; Erer, Işın ; 504191331 ; Telecommunication Engineering

Satellites equipped with sensors capture various types of images, including multispectral and panchromatic images. Panchromatic images have high spatial resolution but low spectral resolution, while multispectral images have low spatial resolution and high spectral resolution. The process of producing an image with both high spatial and high spectral resolution is known as image fusion or pansharpening. Image fusion has been studied extensively for many years, and its methods can be broadly divided into two categories: traditional methods and deep learning-based approaches. Examples of traditional methods include Gram-Schmidt Adaptive (GSA), Generalized Laplacian Pyramid (GLP), and Band-Dependent Spatial-Detail (BDSD). As the field progressed, Convolutional Neural Network (CNN) based models were designed for the pansharpening task, leading to a significant breakthrough, and numerous studies have since been proposed in this area. Pansharpening of satellite images has achieved promising results and has become a popular research area in recent years. Although CNN-based methods have made great progress, a few obstacles remain. This thesis proposes a novel pansharpening model that formulates the task as super resolution with two discriminators, together with an initial dataset-preparation step based on the intensity component. The use of two discriminators and the intensity component makes the proposed model a unique approach to pansharpening. Typically, CNN-based models are trained on reduced resolution panchromatic and multispectral images, because no reference image exists at full resolution; this creates a mismatch problem when the mapping learned on reduced resolution images is applied.
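The intensity component mentioned above can be illustrated with a minimal sketch. The thesis does not specify the exact band weighting, so a plain (unweighted) mean over the spectral bands, as in classical IHS-style fusion, is assumed here; the array shapes are hypothetical:

```python
import numpy as np

def intensity_component(ms: np.ndarray) -> np.ndarray:
    """Return a grayscale intensity image from a multispectral image.

    Assumption: intensity is the unweighted mean of the spectral bands,
    as in simple IHS-style fusion. ms has shape (H, W, bands).
    """
    return ms.mean(axis=-1)

# Hypothetical 4-band multispectral patch (e.g. R, G, B, NIR)
ms = np.random.rand(64, 64, 4)
intensity = intensity_component(ms)  # shape (64, 64), one value per pixel
```

In the training setup described here, such an intensity image derived from the high resolution multispectral data stands in for the reduced resolution panchromatic image, so that input pairs are precisely matched.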
However, the proposed model is trained on a reduced resolution multispectral image together with the intensity component of the high resolution multispectral image, used as a grayscale image in place of a reduced resolution panchromatic image. During training, three distinct datasets were used to update the model weights. Given multispectral and panchromatic inputs, the model generates a high resolution multispectral image. It comprises two separate discriminators, each focusing on either the spatial or the spectral details of its input, while the generator concatenates the multispectral and panchromatic images and produces a synthetic image that closely resembles the original multispectral image. After training was completed, a variety of validation scenarios were executed, and visual results for both full resolution and reduced resolution validation were reported. Selected methods were used to compare the obtained results and demonstrate the success of the model. Metrics such as ERGAS, SAM, QNR, and Q were used to evaluate the results both qualitatively and quantitatively: five metrics for reference-based performance and three for no-reference performance. Furthermore, satellite images with different characteristics, including Pleiades and WorldView-2 (WV II), were used for both training and testing. The proposed model achieves superior results compared to other CNN-based models, as evidenced by both quantitative and qualitative measures. It differs from previous models in three main ways. First, an intensity component is used to obtain input images that are precisely matched for the model. Second, two separate discriminators are used, each designed to distinguish spatial or spectral information.
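Two of the reference-based metrics named above, SAM and ERGAS, can be sketched directly from their standard definitions. This is a generic implementation, not the thesis's evaluation code; the resolution ratio of 4 between PAN and MS is an assumption:

```python
import numpy as np

def sam(ref: np.ndarray, fused: np.ndarray, eps: float = 1e-8) -> float:
    """Spectral Angle Mapper in degrees, averaged over all pixels.

    ref, fused: (H, W, bands) arrays. Smaller is better; 0 means the
    spectral vectors point in identical directions at every pixel.
    """
    dot = (ref * fused).sum(axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.degrees(angles).mean())

def ergas(ref: np.ndarray, fused: np.ndarray, ratio: int = 4) -> float:
    """ERGAS (relative dimensionless global error in synthesis).

    ratio: MS-to-PAN resolution ratio (assumed 4 here). Smaller is better.
    """
    rmse2 = ((ref - fused) ** 2).mean(axis=(0, 1))        # per-band MSE
    mean2 = ref.mean(axis=(0, 1)) ** 2                    # per-band squared mean
    return float(100.0 / ratio * np.sqrt((rmse2 / mean2).mean()))

# Hypothetical reference and slightly perturbed fused image
ref = np.random.rand(32, 32, 4) + 0.5
fused = ref + 0.01 * np.random.rand(32, 32, 4)
```

A perfect fusion result would score 0 on both metrics; the no-reference metrics (QNR and its components) instead compare the fused image against the original MS and PAN inputs.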
Third, an adversarial loss is applied to both discriminators to preserve details. The proposed approach demonstrates strong performance, and its results were compared to those of previous CNN-based methods and traditional methods in the experiments.
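The dual adversarial loss can be sketched in its standard non-saturating GAN form. The exact loss terms and weighting used in the thesis are not stated here, so this is a generic two-discriminator formulation; the variable names are illustrative:

```python
import numpy as np

def adversarial_losses(d_spatial_fake: float, d_spectral_fake: float,
                       d_spatial_real: float, d_spectral_real: float,
                       eps: float = 1e-8):
    """Non-saturating GAN losses for a two-discriminator setup.

    Inputs are discriminator output probabilities in (0, 1):
    *_fake on the generator's synthetic image, *_real on the reference.
    Returns (generator loss, spatial D loss, spectral D loss).
    """
    # Generator tries to make BOTH discriminators output 1 on fakes,
    # so it must preserve spatial and spectral details simultaneously.
    g_loss = -(np.log(d_spatial_fake + eps) + np.log(d_spectral_fake + eps))
    # Each discriminator: score real as 1 and fake as 0 in its own domain.
    d_spatial_loss = -(np.log(d_spatial_real + eps)
                       + np.log(1.0 - d_spatial_fake + eps))
    d_spectral_loss = -(np.log(d_spectral_real + eps)
                        + np.log(1.0 - d_spectral_fake + eps))
    return float(g_loss), float(d_spatial_loss), float(d_spectral_loss)
```

The generator loss falls as the synthetic image fools both discriminators, which is the mechanism by which each discriminator enforces its own (spatial or spectral) notion of fidelity. In practice this adversarial term would be combined with a pixel-wise reconstruction loss.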