Super-resolution of Landsat-8 images using Sentinel-2 images and generative adversarial networks

dc.contributor.advisor: Sertel, Elif
dc.contributor.advisor: Bayram, Bülent
dc.contributor.author: Sunker, Esra
dc.contributor.authorID: 501211620
dc.contributor.department: Geomatics Engineering
dc.date.accessioned: 2025-03-11T10:43:38Z
dc.date.available: 2025-03-11T10:43:38Z
dc.date.issued: 2024-07-05
dc.description: Thesis (M.Sc.) -- İstanbul Technical University, Graduate School, 2024
dc.description.abstract: Satellite images are crucial data sources for many research disciplines, but their spatial resolution does not always provide the level of detail required. To overcome this limitation, deep-learning-based super-resolution is a popular approach for enhancing low-resolution (LR) satellite images with the help of high-resolution (HR) satellite images. This study investigates the effect of the scaling factor on super-resolution algorithms and the performance of different deep learning algorithms on the same dataset; the results are presented in Chapter 5. Super-resolution (SR) produces high-resolution versions of low-resolution images, recovering details that are difficult or impossible to obtain with traditional techniques. Generative Adversarial Networks (GANs) are an effective tool for this task: a GAN consists of a generator and a discriminator and can produce realistic images when trained with appropriate parameters and suitable datasets. The study used the OLI2MSI dataset, which contains paired Landsat and Sentinel images of the same areas, and a custom dataset was created to test the weights obtained from training on OLI2MSI. A GAN-based super-resolution architecture was adopted in this study. In Case 1, Real-ESRGAN was selected as the GAN and trained at scaling factors of x1, x2, and x4. In Case 2, Real-ESRGAN, Pix2pix, SRGAN, and HAT were used, and CNN-based architectures such as SRCNN and VDSR were also included. Five accuracy metrics were used in the training and tests of the Real-ESRGAN model: Peak Signal-to-Noise Ratio (PSNR), Learned Perceptual Image Patch Similarity (LPIPS), Structural Similarity Index Measure (SSIM), Feature Similarity Index (FSIM), and Natural Image Quality Evaluator (NIQE). PSNR measures image quality by computing the pixel-wise distance between two images. SSIM evaluates the structural similarity between the super-resolved (SR) image and the high-resolution (HR) image taken as the ground truth. FSIM measures the similarity between two images by simulating the human visual system and can therefore reflect perceived quality better than metrics such as PSNR. NIQE requires no reference ground-truth image; it measures how far an image's statistics deviate from a model of natural scene statistics. Training was conducted in PyTorch on NVIDIA GeForce RTX 3070 Ti graphics cards, with hyperparameters such as the optimizer, learning rate, batch size, loss function, and image size. The OLI2MSI dataset was split into training and test sets, and the resolution ratio between the Landsat and Sentinel images was adjusted to obtain the different scaling factors. The Real-ESRGAN model was tested on the OLI2MSI dataset at scaling factors of x1, x2, and x4, and the best performance was obtained at the x2 scaling factor. In Case 2, Real-ESRGAN outperformed the other two GANs, Pix2pix and SRGAN, in converting LR images to HR. When all models were compared, the Transformer-based model (HAT) gave the best numerical results, while the CNN-based models gave the worst. When the training strategies were compared, transfer learning gave better results than training from scratch. When the results were examined visually, Real-ESRGAN and HAT produced better outputs than the other models; in residential areas in particular, Real-ESRGAN performed best.
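As a minimal illustrative sketch (not taken from the thesis), two of the full-reference metrics named in the abstract, PSNR and SSIM, could be computed for a super-resolved patch against its high-resolution reference using scikit-image as follows; the file names are hypothetical placeholders for 8-bit patches of equal size.

# Illustrative sketch, not from the thesis: PSNR and SSIM between a
# super-resolved (SR) patch and its high-resolution (HR) reference.
# The two image files below are placeholder names, not thesis data.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = io.imread("hr_sentinel2_patch.png")   # hypothetical HR ground-truth patch
sr = io.imread("sr_landsat8_patch.png")    # hypothetical SR model output, same size

# PSNR: pixel-wise fidelity derived from the mean squared error, in dB.
psnr = peak_signal_noise_ratio(hr, sr, data_range=255)

# SSIM: structural similarity; channel_axis=-1 treats the last axis as bands.
ssim = structural_similarity(hr, sr, data_range=255, channel_axis=-1)

print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")

Higher PSNR and SSIM values indicate closer agreement with the reference; the remaining metrics mentioned in the abstract (LPIPS, FSIM, NIQE) require separate packages or implementations and are not shown here.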
dc.description.degree: M.Sc.
dc.identifier.uri: http://hdl.handle.net/11527/26605
dc.language.iso: en_US
dc.publisher: Graduate School
dc.sdg.type: Goal 14: Life Below Water
dc.sdg.type: Goal 15: Life on Land
dc.sdg.type: Goal 17: Partnerships to achieve the Goal
dc.subject: satellite images
dc.subject: uydu görüntüleri
dc.title: Super-resolution of Landsat-8 images using Sentinel-2 images and generative adversarial networks
dc.title.alternative: Sentinel-2 görüntüleri ve çekişmeli üretici ağlar kullanılarak Landsat-8 görüntülerinin süper çözünürlüğü
dc.type: Master Thesis

Files

Original bundle

Name: 501211620.pdf
Size: 2.5 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.58 KB
Format: Item-specific license agreed upon to submission