Fusion of Multispectral and Panchromatic Satellite Images With New Multiresolution Image Decompositions

Date
2015-05-18
Authors
Kaplan, Nur Hüseyin
Journal Title
Journal ISSN
Volume Title
Publisher
Institute of Science and Technology
Abstract
Image fusion is, simply put, the process of obtaining a new, improved image from different forms of the same image. Images of a given region acquired by different satellites, or copies of a photograph that have suffered different degradations, are combined through image fusion so that features that cannot be perceived from any single image can be detected. Because the fused image contains the complementary information of the original images, it has better characteristics than any of them. Consequently, fusion makes it possible to obtain images that are improved both for human visual perception and for use in image processing applications. The devices designed for Remote Sensing, the field where image fusion is used most, acquire either low-resolution images carrying color information (MS) or high-resolution images without color information (PAN). Image fusion steps in at exactly this point and produces a new image that is both high in resolution and contains color information. Since the goal of fusing MS and PAN images is to sharpen the MS image by means of the PAN image, these methods are also called "pansharpening". Over the course of their development, many pansharpening methods have been proposed; the most effective among them are the multiresolution methods. In multiresolution methods, fusion is performed in three steps. In the first step, the input images are decomposed into their subbands; in the second step, the subbands are merged according to a predefined rule; and in the final step, the fused image is reconstructed from the new subbands. Detail injection into the MS image by multiresolution methods has been gathered under the ARSIS concept. In this work, among the multiresolution methods, fusion methods based on the widely used À Trous Wavelet Transform (ATWT) are examined. It is known that the detail injected into fusion images obtained with this transform is generally excessive.
In the methods developed to solve this problem, the extracted details are attenuated by means of coefficients in order to reduce the amount of injected detail. These methods are broadly divided into local and global methods. Local methods give good results in regions with larger details, while global methods give good results in regions with smaller details. It is known that bilateral filter structures can, like the ATWT, decompose images into their subbands. Since the parameters used in bilateral filtering make it possible to adjust the amount of detail extracted from the image, approaching the desired results becomes easier. In this work, global methods based on bilateral filters, corresponding to the ATWT methods, are proposed. To weight the image within the ATWT framework, methods based on the weighted ATWT (WATWT) transform are considered. In this transform, the range parameter of the bilateral filter is retained, while the spatial parameter is replaced by the filter used in the ATWT. In this way, the parameter computation is reduced to a single parameter, allowing global methods that are faster than bilateral filters yet close to them in performance. A local method based on the WATWT is also proposed. The proposed methods are applied to the fusion of multispectral and panchromatic satellite images and compared with the other methods examined in this thesis. When the results are evaluated according to certain criteria, the proposed global methods outperform the existing global methods, and the proposed local method outperforms the existing local methods. The proposed local method is also observed to give more satisfactory results in regions containing small details, where ATWT-based methods cannot achieve good results.
Image fusion is, in essence, the process of obtaining a new, improved image from different forms of the same image. Images of a specific region acquired by different satellites, or copies of a photograph suffering from different degradations, are merged using image fusion techniques to recover characteristics that cannot be obtained from any single original image. Because the fused image contains the complementary information of the original images, it has better characteristics than any of them. As a result, it is possible to obtain images that are improved both for human visual perception and for further image processing applications. Remote Sensing, the field where image fusion is most widely used, employs devices that produce two kinds of image: one carries color information at low resolution, the other offers high resolution without color information. Image fusion is applied at exactly this point to produce an image that has both high resolution and color information; in this sense, it can also be described as selecting the best characteristics of the input images. The sensors involved are mostly panchromatic and multispectral sensors, and because the fusion result is a version of the multispectral image sharpened by the panchromatic image, the process is also called pansharpening. Many pansharpening methods have been proposed over its history, among which multiresolution image fusion techniques are the most effective. In multiresolution techniques, the fusion algorithm has three steps: first, the subband decomposition of the input images; second, merging of the subbands by a predefined rule; and finally, reconstruction of the fused image from the merged subbands. Classical pansharpening methods include those based on the intensity (I), hue (H), saturation (S) transform (IHS) and on principal component analysis (PCA). In the IHS-based method, the MS image is transformed via the IHS transform.
The intensity (I) component of the MS image is replaced by the PAN image, whose histogram has been matched to the I component; finally, the inverse IHS transform is applied to obtain the pansharpened image. In the PCA method, the first principal component of the MS image is replaced by the PAN image in a similar way. Wavelet transform based methods are the most widely used multiresolution methods. The conventional wavelet transform involves downsampling and upsampling, which cause artifacts and loss of information in the pansharpened image. To overcome this problem, several wavelet transform based pansharpening algorithms have been proposed; one of them uses the À Trous Wavelet Transform (ATWT). In this transform, a cubic spline mask is applied to the image to obtain its approximation subband, and the detail subband (wavelet plane) is obtained by subtracting the approximation from the original image. ATWT-based pansharpening methods involve no upsampling or downsampling, so they avoid the negative effects of decimation, and their computational load is low because the transform is additive. A pansharpened image can be obtained by replacing the MS wavelet planes with those obtained from the PAN image, but the result then contains too much PAN information, which leads to spectral problems. Another, more effective approach is to add the wavelet planes of the PAN image directly to the MS image. This gives better results than the substitutive scheme but still suffers from overenhancement. To reduce overenhancement while preserving spectral quality, the wavelet planes are first weighted in proportion to the luminance (intensity) of the MS image and then added to it. Although this method improves on the former ones, its spectral bias is high and it performs poorly in areas with larger details (urban areas).
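The à trous decomposition described above can be sketched in a few lines of code. This is a minimal illustration, not the thesis's implementation; the function name `atrous_decompose` and the default number of levels are assumptions made here:

```python
import numpy as np
from scipy.ndimage import convolve

# Classic 1-D cubic B-spline mask of the a trous transform; the 2-D mask
# is its outer product and sums to 1.
B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def atrous_decompose(image, levels=2):
    """Undecimated (additive) a trous decomposition.

    Returns (approximation, wavelet_planes) such that
    image == approximation + sum(wavelet_planes).
    """
    approx = image.astype(float)
    planes = []
    for j in range(levels):
        # Insert 2**j - 1 zeros between the taps ("a trous" = with holes),
        # instead of downsampling the image.
        taps = np.zeros((len(B3) - 1) * 2**j + 1)
        taps[:: 2**j] = B3
        kernel = np.outer(taps, taps)
        smoothed = convolve(approx, kernel, mode="mirror")
        planes.append(approx - smoothed)  # detail (wavelet) plane
        approx = smoothed
    return approx, planes
```

Because the transform is additive, the original image is recovered exactly by summing the approximation and all wavelet planes, which is what makes the detail-injection fusion schemes below straightforward.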
Other methods that decrease the spectral bias and give better results in areas with larger details are called CBD and ECB. The CBD method calculates a coefficient for every pixel within an NxN window; these coefficients weight the wavelet planes before they are added to the MS image. In the CBD method, small details are neglected entirely, which yields blurry results in those areas. The ECB method is an enhanced CBD that does not discard the small details completely but injects them with a small coefficient. It is known that bilateral filters can decompose images in the same way as the ATWT. The main difference in this filtering process is that there are two parameters controlling the range and spatial properties of the filter, namely the Gaussian range and Gaussian spatial parameters. Recall that in ATWT-based methods, the details are limited after extraction to reduce overenhancement; in bilateral filtering, the amount of detail extracted can be adjusted during the filtering itself, simply by changing the Gaussian kernels used for filtering, namely the spatial and range kernels. The main difficulty here is parameter selection. For this purpose, a dictionary is built from a set of MS and PAN images: with the spatial kernel held fixed, the images are fused for varying range kernels and the most effective parameter is chosen; then, with this range kernel fixed, the spatial kernel is varied over the dictionary. Two global parameters for the bilateral filtering based methods are obtained by averaging these results. Further analysis showed that the spatial kernel changes little across different images, whereas the range kernel varies sharply from image to image. It is therefore more appropriate to fix the spatial kernel for all images and adjust the range parameter with a simple formula per image. Pansharpening based on bilateral filtering can be carried out in several ways.
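A brute-force bilateral filter with the two Gaussian kernels just described might look as follows. This is a sketch for illustration only; the function name, default parameter values, and window radius are choices made here, not values from the thesis:

```python
import numpy as np

def bilateral_filter(image, sigma_spatial=2.0, sigma_range=0.1, radius=3):
    """Brute-force bilateral filter on a 2-D array.

    sigma_spatial controls the Gaussian spatial kernel and sigma_range the
    Gaussian range (intensity) kernel; together they determine how much
    detail is smoothed away into the approximation.
    """
    img = image.astype(float)
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_spatial**2))
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize intensity differences from the center.
            rng_w = np.exp(-((patch - img[i, j]) ** 2) / (2.0 * sigma_range**2))
            w = spatial * rng_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

The detail plane, analogous to an ATWT wavelet plane, is then simply `image - bilateral_filter(image)`, so changing the two sigmas adjusts the detail extraction during filtering rather than after it.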
The first method is the substitutive method (BFS), in which the detail bands of the MS image are replaced by those obtained from the PAN image. The second is the additive method (BFA), in which the details of the PAN image are added directly to the MS image. In the bilateral IHS combined method (BFL), the intensity component of the MS image, obtained via the IHS transform, is replaced with the PAN image whose histogram has been matched to that intensity component, and the inverse IHS transform is then applied. The final method is the bilateral filtering luminance proportional (BFLP) method, which is similar to the AWLP method: a luminance component derived from the MS bands is used to weight the detail subbands of the PAN image in a proportional manner, and the result is added to the MS image. As discussed above, the spatial kernel of the bilateral filter does not affect the pansharpening results significantly. Replacing the spatial kernel with the spline filter of the ATWT and keeping only the range parameter therefore reduces the computational cost while keeping the results close to those of bilateral filtering. This transform is called the weighted à trous wavelet transform (WATWT); only its range parameter needs to be tuned. One way to determine it is to vary the parameter during fusion and keep the value that gives the best quantitative results. The WATWT-based methods mirror those described above for bilateral filtering: the substitutive method (WSW), the additive method (WAW) and the luminance proportional method (WAWLP). Both the bilateral filtering based and the WATWT-based methods described so far are global methods, because the details are already limited during the filtering process and the local ATWT methods, namely CBD and ECB, cannot be adopted directly. It is, however, possible to adapt the injected detail for the WATWT by changing the range parameter adaptively.
In this method, a range parameter is selected for every window equal in size to the bicubic spline filter. By examining different parts of the images, a single formula is derived, and pansharpening is carried out using this adaptive formula. This method is called adaptive weighted additive wavelet (AWAW) based pansharpening. The proposed methods are used for the fusion of multispectral and panchromatic images (pansharpening), alongside the conventional methods described in this work, on images from the SPOT, IKONOS-2 and Quickbird satellites. The goal of pansharpening is to obtain an MS image at the resolution of the PAN image; since such a reference MS image does not exist, a direct quantitative comparison at full scale cannot be made. For quantitative comparison, the original PAN images are therefore degraded to the resolution of the MS image, and the MS image is degraded by the same ratio; these degraded MS and PAN images are pansharpened so that the result can be compared quantitatively with the original MS image. The visual and quantitative comparisons lead to several conclusions. First, the results of the multiresolution methods are much better than those of the other methods, such as the IHS-based method: although the IHS method sharpens the image, it cannot preserve the spectral properties of the MS image. Second, additive methods are better than substitutive methods; the substitutive results are overenhanced and still show color distortions. Conventional additive methods, though better than substitutive ones, still suffer from overenhancement. Consequently, the luminance proportional methods (AWLP, BFLP, IAWP and WAWLP) and the local enhancement methods (CBD, ECB, AWAW) give the best results. The proportional methods are generally better for images with many fine details (such as buildings), while the local methods are better for images with larger details (such as urban areas).
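The degraded-scale evaluation protocol described here (fuse at reduced resolution, then score against the original MS image) can be sketched as follows. Block averaging and RMSE are simplifying stand-ins chosen for illustration, not the sensor's actual degradation filter or the thesis's full set of quality criteria:

```python
import numpy as np

def degrade(image, ratio=4):
    """Reduce resolution by block averaging (a stand-in for MTF-style
    low-pass filtering followed by decimation)."""
    h = image.shape[0] - image.shape[0] % ratio
    w = image.shape[1] - image.shape[1] % ratio
    blocks = image[:h, :w].reshape(h // ratio, ratio, w // ratio, ratio)
    return blocks.mean(axis=(1, 3))

def rmse(reference, result):
    """Root-mean-square error between the reference and the fused result."""
    diff = reference.astype(float) - result.astype(float)
    return float(np.sqrt((diff ** 2).mean()))

# Protocol: degrade both PAN and MS by the resolution ratio, run the
# pansharpening method on the degraded pair, then compare its output
# against the original MS image, which now serves as the reference.
```

Because the fusion operates entirely at the reduced scale, the original MS image becomes a valid ground truth, which is what makes the quantitative scores meaningful.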
Among the luminance proportional methods, the best results are achieved by the proposed BFLP and WAWLP methods, while among the local methods the best result is achieved by AWAW.
Description
Thesis (PhD) -- İstanbul Technical University, Institute of Science and Technology, 2015
Keywords
image fusion, remote sensing, pansharpening, multiscale bilateral filter, weighted additive wavelet transform
Citation