LEE - Satellite Communication and Remote Sensing - Master's Program
Recent Submissions
Item: Explainable deep learning classification of tree species with very high resolution VHRTreeSpecies dataset (Graduate School, 2025-01-24)
Forests are among the most vital natural resources, playing a significant role in regulating the climate, maintaining ecological balance, and supporting biodiversity conservation and sustainable forest management. They also contribute to applications such as hazard management and wildlife habitat mapping. Understanding the spatial and temporal distribution of forests and forest stand types is a prerequisite for gaining deeper insight into their role within the Earth's systems, and remote sensing data are widely used for forest stand type classification. Traditional classification methods, however, are often time-consuming and typically limited to specific areas and species, which significantly restricts their applicability to other regions and to diverse tree species. Remote sensing (RS), an essential spatial data source in forestry practice, has become an effective alternative to field studies thanks to its cost-efficiency and rapid data acquisition. RS systems provide the spatial, temporal, and spectral resolutions needed to cover forest areas at the required scale and revisit interval. High-resolution data are preferred for deriving detailed tree-level information, particularly for tasks such as individual tree detection or the damage assessment needed to maintain tree health. Satellite systems such as Sentinel-2 and Landsat are frequently preferred because they are open access, collect data across broad spectral bands, and provide continuous data availability; nevertheless, their limited spatial resolution can make them inadequate for particular applications. Tree species with different structural and morphological characteristics exhibit distinct spectral properties, and trees in the same environment but at different developmental stages or health conditions can also differ markedly in their spectra, so remote sensing data are essential for precise and reliable classification of tree species. Over the past decade, considerable progress has been made in tree species identification, spanning approaches from fundamental image processing techniques to machine learning (ML) and deep learning (DL) methodologies. Traditional classification algorithms such as Random Forest (RF) and Support Vector Machines (SVM), however, have shown limited effectiveness in identifying tree canopies within dense and complex backgrounds. With the increasing availability of high-resolution satellite imagery, deep learning methods have emerged as powerful tools for forest management and tree species classification, offering greater efficiency and broader applicability than conventional approaches.
DL-based models have the potential to extract more intricate information structures accurately. However, applying them effectively generally requires a larger number of reference samples so that the model parameters can be learned sufficiently well. As part of this thesis, a new benchmark dataset for forest stand type classification, called VHRTreeSpecies, is introduced. This comprehensive dataset includes very high-resolution RGB satellite imagery of 15 dominant tree species from various forest ecosystems across Turkey. The input images and their corresponding labels were generated using Google Earth imagery and forest stand maps provided by the General Directorate of Forestry (GDF). The dataset was curated by selecting pure species and masking raster images using vector data. High-quality images captured during the summer months (late July to mid-August) of the past five years were prioritized. The dataset was further diversified to represent different forest stand development stages (youth, sapling, thin, medium, and mature trees) and canopy closure levels (open, moderately closed, fully closed). It was analyzed using various CNN architectures, including ResNet-50, ResNet-101, VGG16, VGG19, ResNeXt-50, EfficientNet, and ConvNeXt. Additionally, explainable artificial intelligence (XAI) methods, such as Occlusion, Integrated Gradients, and Grad-CAM, were applied to examine the decision-making processes of the models. Evaluation metrics, including Max-Sensitivity and AUC-MoRF, were employed to assess the models not only in terms of classification accuracy but also in terms of the interpretability and reliability of their decision-making mechanisms.
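The thesis's own pipeline is not reproduced here, but the XAI methods it names are available off the shelf. The following is a minimal sketch, assuming a PyTorch ResNet-50 fine-tuned on the 15 VHRTreeSpecies classes; the checkpoint and image file names are hypothetical placeholders.

```python
# Minimal sketch: attributing a tree-species prediction with Captum.
# Assumes a ResNet-50 fine-tuned on 15 classes; file names are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image
from captum.attr import IntegratedGradients, Occlusion

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 15)          # 15 tree species
model.load_state_dict(torch.load("vhr_tree_species_resnet50.pt"))  # hypothetical
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("sample_stand.png").convert("RGB")).unsqueeze(0)
target = model(x).argmax(dim=1).item()   # explain the predicted species

# Integrated Gradients: accumulate gradients along a path from a zero baseline.
ig_attr = IntegratedGradients(model).attribute(x, target=target, n_steps=50)

# Occlusion: slide a patch over the image and record the prediction drop.
occ_attr = Occlusion(model).attribute(
    x, target=target, sliding_window_shapes=(3, 32, 32), strides=(3, 16, 16)
)
print(ig_attr.shape, occ_attr.shape)   # per-pixel attribution maps
```

Captum also provides LayerGradCam for the Grad-CAM maps mentioned above; the evaluation metrics Max-Sensitivity and AUC-MoRF would then score these attribution maps rather than the classifier itself.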
Item: Design of miniaturized beam scanning microstrip antenna with isolated ports and review of antenna miniaturization techniques (Graduate School, 2024-07-07)
In this study, the first chapter explains the working models of patch antennas in plain language: the transmission line model, which provides a very good approximation for determining the antenna dimensions and for understanding the fringing effect, and the cavity model, which helps explain how patch antennas radiate and which mechanisms affect the radiation pattern. Antenna parameters are also explained with examples and figures. The second chapter identifies and describes the antenna miniaturization methods in the literature and discusses why miniaturization matters today. The methods are examined and compared with one another, and a rectangular patch antenna is designed as a reference on which to demonstrate them; applying the described methods to this reference antenna reveals their pros and cons. To show that the methods can be combined, designs that use several of them together are also realized, and all designs and methods are evaluated at the end of the chapter. The third chapter starts from an antenna in the literature that performs beam scanning using the odd and even modes of the patch. This antenna, whose structure is a combination of two miniaturized antennas, is miniaturized further, and the isolation problem between its ports is addressed by adapting a technique from another work in the literature. In the first step, a substrate with a higher dielectric constant is used for miniaturization and slots are added to the patch. In subsequent steps, the design is modified to use an aperture-coupled feed, which offers a better solution for isolation, and a reflector is added to suppress the back-lobe radiation that results from this modification. Isolation between the ports is then achieved using the Y parameters, as described in detail in the thesis. The proposed design is manufactured for validation, and the isolation method is confirmed by comparing measured S parameters with simulation. The thesis thus provides a comprehensive overview of patch antennas, delving into their fundamental principles while emphasizing miniaturized antenna design; it addresses and rectifies design limitations found in the literature through miniaturization, with port isolation provided by the designed decoupling feeding network. The isolation steps are described in detail, motivated by a literature review, and applied to the antenna. As a result of these efforts, a miniaturized beam-scanning antenna with isolated ports is obtained.
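Since the first chapter leans on the transmission line model for sizing the patch, a short sketch of the standard design equations may be useful; the operating frequency and substrate parameters below are illustrative assumptions, not the values used in the thesis.

```python
# Sketch of the transmission-line model for a rectangular patch antenna.
# f0, eps_r and h are illustrative assumptions, not the thesis design values.
import math

c = 3e8          # speed of light, m/s
f0 = 2.4e9       # design frequency, Hz (assumed)
eps_r = 4.4      # substrate relative permittivity (assumed, e.g. FR-4)
h = 1.6e-3       # substrate height, m (assumed)

# Patch width chosen for efficient radiation
W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))

# Effective permittivity accounts for fields fringing into the air
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5

# Fringing makes the patch look electrically longer by Delta L at each edge
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
     ((eps_eff - 0.258) * (W / h + 0.8))

# Physical length = half a guided wavelength minus the two fringing extensions
L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL

print(f"W = {W*1000:.2f} mm, L = {L*1000:.2f} mm, eps_eff = {eps_eff:.3f}")
```

Raising eps_r shrinks both W and L in these equations, which is exactly the first miniaturization step applied to the beam-scanning antenna above.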
Item: Satellite images super resolution using generative adversarial networks (Graduate School, 2022)
The broad definition of remote sensing is observing an object and collecting data about it without physical contact; from a narrower perspective, it is the science that studies the Earth and its atmosphere by gathering data from above the Earth's surface. Nowadays, Earth observation systems, with their various sensors operating in multiple bands, produce a huge amount of data that must be processed and analyzed to obtain a final product in a given discipline. Applications like monitoring water resources, forest fire monitoring, and soil type classification are examples of remote sensing use in different fields of modern life. Satellite imagery plays a pivotal role in remote sensing. It can be acquired by various types of sensors, some passive, like optical sensors, and some active, like LIDAR and SAR. This study focuses on satellite images in the visible portion of the spectrum. Such imagery can vary in spatial, spectral, temporal, or radiometric resolution, and can be categorized by spatial resolution into low-, medium-, and high-resolution images, each suited to certain applications. Preprocessing these images is a critical stage that affects the final product or the application that uses them. High resolution is a desirable characteristic, yet it can be difficult to achieve financially and technically; image processing, however, offers a convenient software solution to this problem through super-resolution techniques. Hence super-resolution, one of the preprocessing tasks that obtains high-resolution images, is considered fundamental in many remote sensing applications. Super-resolution aims to obtain high-resolution images from low-resolution observations. It is a classical image processing problem that is ill-posed because it lacks a single unique solution, so many algorithms and approaches have been proposed over the years. This study gives a general review of the main types of super-resolution algorithms, which can be divided into interpolation-based, reconstruction-based, and learning-based methods. The simplest are the interpolation-based ones, but their results lack high-frequency details. The second type, reconstruction-based methods, requires a good prior to achieve better results, and designing such a prior can be complex. The third category, example-based or learning-based methods, learns the relationship between low-resolution and high-resolution images from training datasets; algorithms like sparse-coding super-resolution and deep learning methods belong to this category. Super-resolution performance is usually evaluated with metrics such as the peak signal-to-noise ratio (PSNR), which is based on the mean squared error and, being pixel-wise, can be misleading, and the structural similarity index (SSIM), which is considered more accurate because it considers the structure of the image rather than individual pixel values. Deep learning, which deploys deep neural networks in its algorithms, is a branch of machine learning, which is in turn a subfield of artificial intelligence.
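As a concrete illustration of the two metrics just described, here is a minimal sketch using scikit-image; the file names are placeholders.

```python
# Sketch: evaluating a super-resolved image against its ground truth with
# PSNR and SSIM (scikit-image); file names are placeholders.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = io.imread("ground_truth.png")    # high-resolution reference
sr = io.imread("super_resolved.png")  # network output, same shape, uint8

psnr = peak_signal_noise_ratio(hr, sr, data_range=255)  # pixel-wise, MSE-based
ssim = structural_similarity(hr, sr, data_range=255, channel_axis=-1)  # structural
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```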
Deep learning is widely used in image processing and computer vision, especially since the emergence of convolutional neural networks (CNNs). Deep learning models for image processing usually share common building blocks: a default CNN consists of a convolutional layer followed by an activation layer, which provides the nonlinearity needed for learning, followed by a pooling layer, and backpropagation adjusts the weights at the end of every training epoch. The fourth chapter of this thesis elaborates the deep learning super-resolution algorithms that established state-of-the-art performance compared with other methods. SRCNN was the first model proposed for super-resolution with deep learning and is considered its benchmark. It was followed by FSRCNN, which addressed the drawbacks of its predecessor by taking the low-resolution image as input without upscaling and performing the upscaling later with a deconvolution layer. The Very Deep Super-Resolution (VDSR) model stacks deep VGG-style layers to obtain better results. The enhanced deep super-resolution model (EDSR) then exploited residual blocks to increase network depth without slowing training. SRResNet and SRGAN were proposed in the same paper to deliver better super-resolution performance: SRResNet deploys residual blocks alongside convolutional layers and is optimized with a mean-squared-error-based loss or a VGG content loss. A generative adversarial network consists of two models that learn together: the generator learns to produce the required data with the help of a discriminator that tries to distinguish the generator's fake data from the ground truth. This adversarial training scheme delivers state-of-the-art performance in several tasks, and it was applied to super-resolution in the form of SRGAN. Beyond the adversarial structure, another factor that improved SRGAN's performance is the perceptual loss used to optimize the model. After covering these deep learning super-resolution algorithms, the next chapter gives a general overview of deep learning in remote sensing, a use that keeps expanding with the growing amount and quality of remote sensing data and with advances in deep learning algorithms and computational abilities. From preprocessing of remote sensing data, such as image fusion, segmentation, and denoising, to applications such as anomaly detection, land use classification, and other classification tasks, deep learning is being deployed throughout remote sensing. The experiment in this thesis examines the performance of super-resolution generative adversarial networks on satellite images and their ability to generalize when trained on an unrelated dataset. An SRGAN model was trained on the UC-MERCED Land Use dataset, which consists of 21 classes of 100 images each at 256x256 pixels; these images served as the high-resolution targets, and versions downscaled by a factor of 4 served as the low-resolution inputs. After training, the model was tested with random images from the NWPU-RESISC45 dataset.
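To make the adversarial training just described concrete, the following is a heavily simplified PyTorch sketch of one SRGAN-style optimization step. The generator and discriminator are stand-ins, not the paper's architecture, and the pixel-wise MSE term is a simplification of SRGAN's VGG-based perceptual loss.

```python
# Heavily simplified sketch of one SRGAN-style training step in PyTorch.
# `generator` and `discriminator` are stand-ins for the paper's networks;
# the MSE content term stands in for SRGAN's VGG-based perceptual loss.
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, lr_img, hr_img):
    # Discriminator: push real HR images toward 1 and generated SR toward 0.
    sr_img = generator(lr_img)
    d_real = discriminator(hr_img)
    d_fake = discriminator(sr_img.detach())  # detach: no generator grads here
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: content loss plus a small adversarial term to fool the critic.
    d_fake = discriminator(sr_img)
    content_loss = F.mse_loss(sr_img, hr_img)
    adv_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    g_loss = content_loss + 1e-3 * adv_loss  # 1e-3 weighting as in the SRGAN paper
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```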
To examine the generalization ability of the model, the same architecture was trained in the same way on a natural-image dataset, Linnaeus 5 256x256, which consists of 5 classes of 256x256 images, and testing was again done with random images from the NWPU-RESISC45 dataset. In addition, an SRResNet model using mean-squared-error-based optimization was trained for comparison with the generative SRGAN models. Peak signal-to-noise ratio and the structural similarity index were used to evaluate performance and compare the methods. The experiment was run in the Google Colab Pro environment, utilizing its provided GPU.
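The x4 low-resolution inputs described in this experiment can be generated by simple bicubic downsampling of the ground-truth images; a sketch with Pillow follows, with the directory names and file extension being assumptions.

```python
# Sketch: building LR/HR training pairs by bicubic x4 downsampling.
# Directory names and the file extension are assumptions; UC-MERCED images
# are 256x256, so the LR inputs come out at 64x64.
from pathlib import Path
from PIL import Image

hr_dir, lr_dir = Path("ucmerced_hr"), Path("ucmerced_lr_x4")
lr_dir.mkdir(exist_ok=True)

for hr_path in hr_dir.glob("*.png"):   # adjust the extension to the dataset
    hr = Image.open(hr_path).convert("RGB")
    lr = hr.resize((hr.width // 4, hr.height // 4), Image.BICUBIC)
    lr.save(lr_dir / hr_path.name)
```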
Item: A statistical framework for degraded underwater video generation (Graduate School, 2023)
Computer vision in the underwater medium presents unique challenges due to the distinct properties and conditions encountered beneath the water's surface. Underwater environments are characterized by limited visibility, color distortion, scattering of light, and varying water conditions such as turbidity and currents. These factors severely degrade the performance of traditional computer vision algorithms designed for terrestrial images, making underwater image and video analysis significantly harder. One of the primary difficulties is the degradation of image quality caused by the attenuation of light: as light travels through water it is absorbed and scattered, which reduces contrast, removes detail, and distorts color, making object detection, recognition, and tracking challenging tasks. The scattering of light also blurs underwater images and reduces their sharpness, further impeding accurate analysis. Another significant hurdle is the lack of reliable depth information. Estimating depth in underwater scenes is complex due to the varying water conditions and the absence of well-defined visual cues, which poses challenges for tasks such as 3D reconstruction, scene understanding, and object localization.
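A common way to synthesize such degradation is the simplified underwater image formation model I = J*exp(-beta*d) + B*(1 - exp(-beta*d)), where J is the clean scene radiance, beta the per-channel attenuation coefficient, d the scene range, and B the veiling (backscatter) light. The thesis's exact statistical framework is not reproduced here, so the sketch below is only illustrative, with all coefficients assumed.

```python
# Illustrative sketch: degrading a clean frame with the simplified underwater
# image formation model I = J*exp(-beta*d) + B*(1 - exp(-beta*d)).
# Attenuation coefficients, backscatter colour, and depth map are assumptions.
import numpy as np

def degrade(frame, depth, beta=(0.8, 0.4, 0.1), backscatter=(0.05, 0.25, 0.35)):
    """frame: HxWx3 float image in [0, 1]; depth: HxW scene range in metres."""
    beta = np.asarray(beta)        # per-channel attenuation; red fades fastest
    B = np.asarray(backscatter)    # veiling-light colour (blue-green cast)
    t = np.exp(-beta[None, None, :] * depth[..., None])  # transmission map
    return frame * t + B * (1.0 - t)

rng = np.random.default_rng(0)
clean = rng.random((120, 160, 3))  # stand-in for a video frame
depth = np.full((120, 160), 3.0)   # assume a flat scene 3 m away
degraded = degrade(clean, depth)   # darker, lower-contrast, blue-shifted
```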
Item: Performance of 5G codes over a noisy channel (Graduate School, 2022)
At present, the need for mobile internet keeps increasing, especially with the rise of IoT devices: it is estimated that by 2025 more than 5 billion IoT devices will be connected to the network. Wireless mobile communication needs a huge bandwidth to accommodate the different rates of different applications, and the 5G network provides lower latency and higher speeds than previous networks. In 5G wireless communication, both turbo codes and tail-biting convolutional codes failed to meet the 5G requirements even though they had proved efficient for the LTE standard, so more advanced error correction is used: LDPC codes for the data channels and polar codes for the control channels. Since error correction and detection are central requirements of 5G, the BER versus Eb/N0 performance is critical, as almost no transmitted block should be lost. One way to study BER against Eb/N0 is to examine an uncoded signal under various modulations, from BPSK up to 256-QAM; the higher the modulation order, the worse the BER versus Eb/N0 becomes, and with 5G packing more data, even constellations above 256-QAM are possible. A performance test of the codes used in 5G has been simulated here. A 5G-NR scenario was run using BPSK modulation over an AWGN channel to show how the codes perform under the most favorable modulation. The 5G standard was applied to both codes: base graph 1 and base graph 2 were used for LDPC at different code rates, and the polar code channels were ordered from worst to best as specified in the standard. Because the hardware requirements of 5G are very challenging, a single decoder was used for each code, with quantization implemented in both. The simulations produced BER plots for both codes. For LDPC codes, increasing the number of decoder iterations from 10 to 20 noticeably improved the BER, while going from 20 to 30 brought little further gain, so 20 iterations were used for most of the graphs. Both base graphs were used: for rate 1/2 with mid-size blocks, BG1 performed better; between rates 2/3 and 5/6, rate 2/3 performed better overall, with a block size of 4096 giving the best results at both rates. For the polar codes, successive cancellation decoding was implemented for block sizes of 256 and 512 at different rates; the lower the block size, the better the results obtained.
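The uncoded BPSK-over-AWGN baseline mentioned above has the closed-form bit error rate Pb = Q(sqrt(2*Eb/N0)), which a short Monte Carlo simulation can reproduce; this sketch is independent of the thesis's actual simulation code.

```python
# Sketch: uncoded BPSK over AWGN, simulated BER vs. the Q-function theory.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
n_bits = 1_000_000

for ebn0_db in range(0, 9, 2):
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                     # BPSK mapping: 0 -> +1, 1 -> -1
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
    decisions = (symbols + noise) < 0          # threshold detector: bit 1 if < 0
    ber_sim = np.mean(decisions != bits.astype(bool))
    ber_theory = 0.5 * erfc(np.sqrt(ebn0))     # Q(sqrt(2*Eb/N0))
    print(f"Eb/N0 = {ebn0_db} dB: sim {ber_sim:.2e}, theory {ber_theory:.2e}")
```

The coded curves in the thesis shift left of this baseline; the same loop structure, with an LDPC or polar encoder before the mapper and the corresponding decoder after the detector, yields those plots.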