Enhancing hyperspectral and multispectral image fusion using high dimensional model representation

Date
2025-07-07
Authors
Kahraman, Efe
Journal Title
Journal ISSN
Volume Title
Publisher
Graduate School
Abstract
Spectral imaging is an advanced technique used to quantify the reflectance of incident electromagnetic waves across a range of wavelengths. The measured reflectance values are intrinsically linked to the physical and chemical properties of the observed surface, and the collection of reflectance values measured across different wavelengths for an object or surface is referred to as its spectral signature. Because each material exhibits a distinct spectral signature, spectral imaging enables the identification and classification of objects by analyzing these unique reflectance patterns. Spectral imaging is broadly categorized into two primary modalities: hyperspectral imaging and multispectral imaging. Hyperspectral imaging acquires data across hundreds of narrow, contiguous, and uniformly spaced spectral bands, offering fine-grained spectral detail. In contrast, multispectral imaging captures data over a limited number of broader, non-contiguous spectral bands, typically ranging from 3 to 15. Because hyperspectral channels are narrow, the photon count per band is relatively low, resulting in a reduced signal-to-noise ratio (SNR); this lower SNR necessitates a compromise in spatial resolution. The dense spectral sampling, however, yields superior spectral resolution compared to multispectral imaging. Multispectral systems, on the other hand, benefit from wider spectral bands, which allow greater photon collection per band, thereby enhancing the SNR and enabling higher spatial resolution. This comes at the cost of lower spectral resolution due to the coarser and more sparsely distributed spectral sampling. Consequently, no single imaging sensor can simultaneously achieve both high spectral and high spatial resolution.
To address this limitation, various image fusion techniques have been developed. These approaches integrate complementary information from different sources, such as hyperspectral and multispectral images, to generate data with enhanced spatial and spectral fidelity. Both deep learning-based frameworks and more traditional methodologies, such as those based on matrix factorization and tensor decomposition, are actively employed for this purpose. Among the widely used tensor decomposition techniques are the CANDECOMP/PARAFAC (CP) decomposition and the Tucker decomposition, each offering unique advantages in multi-dimensional data analysis. In the context of hyperspectral-multispectral (HS-MS) image fusion, however, it is essential to utilize a coupled decomposition framework, which allows the joint processing of multiple data modalities by aligning shared components across datasets.
In this study, we focus on Coupled Non-negative Matrix Factorization (CNMF) as the core fusion methodology. This approach begins by initializing the factor matrices corresponding to both the hyperspectral and multispectral data. These factors are then iteratively updated with a multiplicative update algorithm that minimizes the reconstruction error while maintaining non-negativity constraints. Once the factor matrices have been sufficiently optimized, the fused image is reconstructed by multiplying the feature matrix $W$ derived from the hyperspectral component with the coefficient matrix $H$ obtained from the multispectral component. This coupling strategy effectively leverages the complementary strengths of both data sources, yielding an image with enhanced spectral and spatial resolution.
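The sketch below illustrates the coupled multiplicative-update scheme described above in simplified form. The variable names (Y_hs, Y_ms, W, H_hs, H_ms), the use of a spectral response matrix R as the only coupling operator, and the fixed iteration count are illustrative assumptions, not the exact formulation used in the thesis.

```python
# Minimal CNMF-style fusion sketch, assuming matricized inputs:
#   Y_hs: hyperspectral image, shape (L_hs bands, N_hs pixels), low spatial resolution
#   Y_ms: multispectral image, shape (L_ms bands, N_ms pixels), high spatial resolution
#   R:    spectral response matrix, shape (L_ms, L_hs), mapping HS bands to MS bands
import numpy as np

def cnmf_fusion(Y_hs, Y_ms, R, n_endmembers=30, n_iters=200, eps=1e-12):
    rng = np.random.default_rng(0)
    L_hs, N_hs = Y_hs.shape
    L_ms, N_ms = Y_ms.shape

    # Initialize non-negative factors: W holds endmember spectra,
    # H_hs / H_ms hold abundance coefficients at the two spatial resolutions.
    W = rng.random((L_hs, n_endmembers))
    H_hs = rng.random((n_endmembers, N_hs))
    H_ms = rng.random((n_endmembers, N_ms))

    for _ in range(n_iters):
        # HS step: multiplicative updates keep W and H_hs non-negative
        # while decreasing the reconstruction error ||Y_hs - W H_hs||.
        W *= (Y_hs @ H_hs.T) / (W @ H_hs @ H_hs.T + eps)
        H_hs *= (W.T @ Y_hs) / (W.T @ W @ H_hs + eps)

        # MS step: couple through the spectrally degraded endmembers R @ W
        # and update the high-resolution abundances H_ms.
        W_ms = R @ W
        H_ms *= (W_ms.T @ Y_ms) / (W_ms.T @ W_ms @ H_ms + eps)

    # Fused image: HS spectral detail (W) combined with MS spatial detail (H_ms).
    return W @ H_ms
```

The returned product W @ H_ms has the hyperspectral band count at the multispectral pixel count, which is the enhanced-resolution image the abstract describes.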
In this thesis, we propose an effective fusion technique that combines High Dimensional Model Representation (HDMR) with CNMF and yields significant improvements over plain CNMF in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and mutual information (MI). Experiments were conducted on the Salinas, Kennedy Space Center, and Indian Pines datasets. Compared to the fused images produced by plain CNMF, the proposed method improves PSNR by up to 12 dB, SSIM by up to 0.70, and MI by up to 0.30.
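For reference, the sketch below shows one way the reported metrics could be computed band by band between a reference image and a fused result. The band-wise averaging and the 64-bin joint histogram for MI are assumptions; the thesis may use different settings.

```python
# Illustrative band-wise evaluation of PSNR, SSIM, and mutual information.
# Inputs are assumed to be arrays of shape (bands, rows, cols) scaled to [0, 1].
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def mutual_information(a, b, bins=64):
    """Mutual information of two bands estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

def evaluate(reference, fused):
    psnr = np.mean([peak_signal_noise_ratio(r, f, data_range=1.0)
                    for r, f in zip(reference, fused)])
    ssim = np.mean([structural_similarity(r, f, data_range=1.0)
                    for r, f in zip(reference, fused)])
    mi = np.mean([mutual_information(r, f) for r, f in zip(reference, fused)])
    return psnr, ssim, mi
```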
Description
Thesis (M.Sc.) -- Istanbul Technical University, Graduate School, 2025
Keywords
computer vision, hyperspectral imaging, digital signal processing
Citation