LEE- Geomatik Mühendisliği-Doktora
-
A semi-automatic façade generation methodology of architectural heritage from laser point clouds: A case study on Architect Sinan (Lisansüstü Eğitim Enstitüsü, 2021) Kıvılcım, Cemal Özgür ; Duran, Zahide ; 709850 ; Geomatik Mühendisliği
Tangible cultural assets from different periods and civilizations reinforce historical and cultural memories that are passed from generation to generation. However, due to natural events, lack of proper maintenance, or wars, heritage structures can be damaged or destroyed over time. To preserve tangible cultural assets for the future, it is crucial that the maintenance, repair, and restoration of these buildings are of high quality. Hence, the preliminary phase of any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. The acquired data and derived models are used for various purposes in engineering and architectural applications, digital modeling and reconstruction, and virtual or augmented reality applications. However, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. With technological advances, laser scanning systems became a preferred geospatial data acquisition technique in the heritage documentation process. Without any doubt, these systems provide many advantages over conventional measurement techniques, since data acquisition is carried out effectively and in a relatively short time. On the other hand, obtaining final products from point clouds is generally time-consuming and requires data manipulation expertise. To achieve this, the operator, who has knowledge of the structure, must interpret the point cloud, select the key points representing the underlying geometry, and perform the vectorization process over these points. In addition, point data contain systematic and random errors. The noisy point cloud data and ambiguities make this process tedious and prone to human error. The purpose of this thesis is to reduce the user's manual workload in obtaining 3D models and products from point cloud data: a semi-automatic, user-guided methodology with few interventions is developed to easily interpret the geometry of architectural elements and establish fundamental semantic relationships from complex, noisy point clouds. First, the conventional workflow and methodologies in cultural heritage documentation were researched, and the bottlenecks of the current workflow were examined. Then, existing methodologies used in point cloud-based 3D digital building reconstruction were assessed. From this, semi-automatic methods were evaluated as a more suitable approach for the 3D digital reconstruction of cultural heritage assets, which are more complex than modern buildings. Recently, Building Information Modeling (BIM) applications have gained momentum. BIM systems contribute to project management in many ways, from the design to the operation of new modern buildings, and research on BIM applications for existing buildings has increased. In particular, such applications and research in cultural heritage are gathered under the term Heritage/Historic Building Information Modeling (HBIM). In HBIM, dedicated architectural style libraries are generated, and geometric models are produced by associating the geometries of architectural elements with point clouds.
Such applications have generally been developed for Western architectural elements, whose construction techniques and whose geometric relations of architectural rules and orders have been documented with sketches and drawings for centuries. Detailed descriptions and fine sketches pertaining to the rules and styles of Ottoman architecture are limited. Having been the capital of many civilizations, historic Istanbul is crowned with the many mosques of Architect Sinan, dating from the 16th century, the golden era of the Ottoman Empire. For his innovative structures, Architect Sinan is considered an architectural and engineering genius. Unfortunately, Sinan did not leave enough written or visual documentation of his works, and although many aspects of Sinan's works have been researched, few studies have addressed the geometry of the façade elements. Previous architectural research examines the ratios and compares the general architectural elements of Sinan's works (comparing the dimensions and locations of the elements). Building on this and our observations of Sinan's mosques, we designed an object-oriented library of parametric objects for selected architectural façade elements. In addition, some fundamental semantic relations of the prepared object library elements were introduced. A case study for procedural modeling was then carried out. In the next stage, we assessed whether an algorithmic approach could be used to obtain parametric architectural elements from noisy point cloud data. We benefited from the Random Sample Consensus (RANSAC) algorithm, which has a wide range of applications in computer vision and robotics. The algorithm aims to obtain the parameters of a given mathematical model; it is a non-deterministic method that selects the minimum number of random data points required to instantiate the model and then measures the extent to which the resulting hypothesis is compatible with the entire data set. The method runs for a set number of iterations and returns the parameters of the best-fitting model, the data subset that supports the model, and the incompatible data. In addition, model-specific criteria and rules based on architectural knowledge were added to the developed methodology to reduce the number of iterations. All algorithmic code was written in Python. We used libraries such as NumPy for arrays and mathematical operations. Visualization was implemented with the Visualization Toolkit (VTK) on top of the OpenGL graphics API. In addition, the Python modules of the VTK C++ source libraries were compiled using CMake and Microsoft Visual Studio. As the application area of the study, the Şehzade Mosque, one of the most important mosques of Istanbul and Architect Sinan's first sultanic (selatin) complex, was chosen. Point cloud data acquired with a terrestrial laser scanner for the documentation studies of the mosque were obtained for this study. Different case areas were determined from the point cloud datasets: the windows on the Qibla-facing façade and the domes of the roof covering of the mosque were used, respectively. This choice was guided by the variety of the window elements and by Sinan's distinctive use of the dome. In the case applications, the point cloud selected from the window areas was segmented semi-automatically, applying the proposed method recursively at the different window levels from the inside to the outside.
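To make the RANSAC loop described above concrete, a minimal NumPy sketch of the idea is given below, fitting a plane to noisy 3D points. The model choice, distance threshold, and iteration count are illustrative assumptions, not the thesis implementation, which adds architectural rules and fits richer parametric primitives such as domes.

```python
import numpy as np

def ransac_plane(points, n_iters=500, tol=0.02, rng=None):
    """Minimal RANSAC: fit a plane to a noisy (N, 3) point cloud.

    Returns (normal, d) of the best plane n.x + d = 0, the inlier
    indices, and the outlier indices. Thresholds are illustrative.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.array([], dtype=int)
    best_model = None
    for _ in range(n_iters):
        # 1) draw the minimal sample needed to define the model (3 points)
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:           # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        # 2) score the hypothesis against the whole data set
        dist = np.abs(points @ n + d)
        inliers = np.flatnonzero(dist < tol)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (n, d)
    outliers = np.setdiff1d(np.arange(len(points)), best_inliers)
    return best_model, best_inliers, outliers
```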
In the other case study, the algorithm performed the segmentation of the main dome. As a result of this segmentation, the point groups not included in the model were evaluated once more using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm from Python's scikit-learn and presented to the user as a guiding output for identifying architectural elements and deformations. Using the above-mentioned dome typology relations of Sinan's architecture together with the main dome of the mosque, point clusters were generated for modeling the other dome structures in the mosque. Finally, as an example, the parametric dome model was converted to Industry Foundation Classes (IFC) format using open-source CAD software. Integrity and accuracy comparisons were made between the outputs of the presented methodology and the CAD drawings produced by the restoration architects using the same data. The results were within acceptable limits for general-scale studies. Additionally, the presented method contributed to the interpretation of the data by saving time for expert users. In summary, a method has been developed for the semi-automatic extraction of architectural parametric models working directly on the 3D point cloud, specific to Ottoman Classical Era mosques, particularly Architect Sinan's works, using a hybrid data- and model-driven 3D building reconstruction approach.
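The clustering step on the residual points described above might look like the following sketch; the file name and the eps/min_samples settings are hypothetical placeholders, not the values used in the thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# 'residual' holds the 3D points RANSAC left outside the dome model;
# eps and min_samples are illustrative values, not the thesis settings.
residual = np.loadtxt("residual_points.xyz")           # hypothetical file
labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(residual)

# label -1 marks noise; the remaining clusters are candidate
# architectural details or deformation zones shown to the user.
for k in set(labels) - {-1}:
    cluster = residual[labels == k]
    print(f"cluster {k}: {len(cluster)} points, "
          f"extent {np.ptp(cluster, axis=0)}")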
-
An investigation into the effects of different parameters on high-resolution geoid modeling accuracy in the context of height system modernization (Graduate School, 2024-07-12) Karaca, Onur ; Erol, Bihter ; 501162615 ; Geomatics Engineering
In contemporary times, many countries have focused on obtaining orthometric heights through satellite-based positioning techniques, utilizing ellipsoidal heights and geoid heights derived from geoid models, as part of height system modernization efforts. This shift is primarily driven by several factors: the inherent limitations of traditional leveling, which is a costly, labor-intensive, and time-consuming process; the economic advantages of employing geoid models in conjunction with satellite-based positioning systems; the ability to obtain real-time physical height information; and the diminished susceptibility to the deformative effects of the Earth's crust. Given the ability to obtain highly accurate ellipsoidal heights using GNSS, the accuracy of regional geoid models directly impacts the determination of orthometric heights. Many methods used for the computation of gravimetric geoids, including the least squares modification of Stokes' formula with additive corrections (LSMSA) technique utilized in this thesis, employ gridded free-air anomaly data as input. Gravity observed on the Earth's surface is heavily influenced by the gravitational attraction of the topographic masses and cannot be used directly in the gridding process. Consequently, observed gravity data are first transformed into anomalies and then reduced to the geoid using the Bouguer reduction, in which the topography is represented by a Bouguer plate (or shell). The resulting Bouguer anomalies are suitable for the gridding process and form a smooth surface. The selection of the Bouguer anomaly type (simple or complete) depends on the topography of the study area. Simple Bouguer anomalies can be used in studies conducted in flat areas with minimal topographic variation. However, complete Bouguer anomalies, which also incorporate the terrain correction (TC) effect, are essential for applications in rough topography, despite their higher computational burden. The determination of the most suitable Bouguer anomaly for the gridding process in the Auvergne study area in France is conducted through numerical tests in the first part of this thesis. The investigations reveal differences exceeding 55.6 mGal between the two anomaly grids and 27 cm between the geoids generated from these grids. Therefore, in areas with complex topography such as Auvergne, the use of complete Bouguer anomalies incorporating terrain correction effects is crucial for obtaining precise gravimetric geoids. In the second numerical application of the study, a series of tests is conducted to determine the optimal parameters for LSMSA, the method selected for computing the gravimetric geoids. The parameters identified here are adopted as the fundamental parameters for the geoids computed throughout the entire study. The third numerical application, constituting the main focus of this thesis, investigates the impact of interpolation methods on gravimetric geoid computation.
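The reduction chain described above can be written compactly. A sketch in the conventional notation, assuming the standard crustal density of 2670 kg/m³ (heights H in metres, anomalies in mGal):

```latex
% From observed gravity to Bouguer anomalies, assuming rho = 2670 kg/m^3:
\Delta g_{\mathrm{FA}} = g_{\mathrm{obs}} - \gamma_0 + 0.3086\,H
                                              % free-air anomaly
\Delta g_{\mathrm{SB}} = \Delta g_{\mathrm{FA}} - 2\pi G \rho H
                       \approx \Delta g_{\mathrm{FA}} - 0.1119\,H
                                              % simple Bouguer (plate)
\Delta g_{\mathrm{CB}} = \Delta g_{\mathrm{SB}} + \mathrm{TC}
                                              % complete Bouguer (terrain-corrected)
```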
Both the areal difference maps and the absolute validation results obtained in this section demonstrate that conventional methods such as Inverse Distance to a Power (IDP), Nearest Neighbor, and Kriging, which have long been widely used for gridding gravity data in geodesy and geophysics, provide similar and reliable results. On the other hand, the Artificial Neural Network (ANN), a complex, optimization-based soft computing method with proven effectiveness in many fields, does not yield superior results compared to these three methods. In conclusion, it is inferred that the interpolation algorithms have an impact on the gravity gridding outcomes and, consequently, on the determination of the geoid model. GPS/leveling points are typically measured in flat terrain rather than rugged areas due to the inherent challenges of establishing them in such environments. Absolute validation is conducted in both flat and rugged terrain during the fourth numerical test, revealing differences of up to a factor of two between the validation results obtained in these two terrain types. This disparity stems from the denser and more homogeneous distribution of points in flat areas, which captures the topography more accurately. As the final numerical application, the applicability of the ANN algorithm for gravity gridding is investigated. The conducted experiments demonstrated that the complete Bouguer anomalies gridded using the ANN approach yield different results each time, regardless of the employed parameters (neurons, epochs). Consequently, it is concluded that the ANN technique did not produce satisfactory results in the gravity gridding process, an important part of gravimetric geoid computation, in this thesis study. In summary, the findings highlight the crucial role of the interpolation algorithm employed in high-precision geoid undulation calculations. Additionally, it has been concluded that the ANN method does not perform as well in the gridding process as traditional interpolation methods such as Kriging, IDP, and Nearest Neighbor. Furthermore, the type of Bouguer anomaly used in gridding is crucial, especially for rugged study areas. Lastly, it is noted that the distribution of the GPS/leveling points used in absolute validation affects the validation results.
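As a reference point for the interpolation comparison above, a minimal sketch of Inverse Distance to a Power gridding follows; the power and the number of neighbours are illustrative choices, not the parameters tested in the thesis.

```python
import numpy as np

def idw_grid(xy, values, grid_xy, power=2, k=12):
    """Inverse Distance to a Power: interpolate scattered Bouguer
    anomalies (values at station coordinates xy, shape (N, 2)) onto
    grid nodes grid_xy (shape (M, 2)) using the k nearest stations."""
    out = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        d = np.linalg.norm(xy - p, axis=1)
        idx = np.argsort(d)[:k]           # k nearest stations
        d_k = d[idx]
        if d_k[0] < 1e-9:                 # node coincides with a station
            out[i] = values[idx[0]]
            continue
        w = 1.0 / d_k**power              # inverse-distance weights
        out[i] = np.sum(w * values[idx]) / np.sum(w)
    return out
```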
-
Assessing the impact of super-resolution on enhancing the spatial quality of historical aerial photographs (Graduate School, 2024-06-10) İncekara, Abdullah Harun ; Şeker, Dursun Zafer ; 501182601 ; Geomatics Engineering
The level of distinguishability of details in an image is called resolution. In current studies, high-resolution (HR) images are generally preferred. However, not all available images have sufficient resolution to fulfill their intended purpose. Due to hardware and cost constraints, it is not always feasible to obtain or procure HR images, hence low-resolution (LR) images need to be enhanced. This is possible through techniques known as super-resolution (SR). SR is defined as obtaining an HR image from an LR one. It is accepted that an LR image is a degraded version of its HR counterpart: when detrimental effects are applied to an HR image, some information is lost and a lower-quality image, referred to as LR, is obtained. In practice, however, the image in need of enhancement is the LR one, while the unavailable image is the HR one. Therefore, transitioning from LR to HR is an inverse problem. To solve this problem, the lost information must be identified and restored to the LR image. In current SR studies, deep learning (DL) based models are widely utilized. Various network designs are employed to enhance model performance and achieve better image quality. These designs primarily include linear learning, residual learning, recursive learning, multi-scale learning, dense connections, generative adversarial networks, and attention mechanisms. DL-based SR studies began with the use of linear learning in the Super-Resolution Convolutional Neural Network (SRCNN) model. After linear learning, models utilizing residual learning, with deeper networks and higher performance, gained prominence. Due to the practical challenges posed by the increased number of parameters in deeper networks, recursive learning was introduced in image processing studies. Recursive learning, based on the principle of parameter sharing to control the total number of parameters, allowed models to run much faster but introduced the vanishing gradient problem. In this context, densely-connected models incorporating both residual learning and recursive learning were proposed. Subsequently, visually high-quality images were obtained using generative adversarial network structures. Nowadays, SR studies focus on attention mechanisms. In summary, to improve model performance, learning strategies were altered, various loss functions were tested, and network architectures were modified with various hyperparameters. However, all of these efforts have been solely algorithm-based, and satisfactory results have indeed been achieved, especially with attention mechanisms. One aspect that has not yet been fully addressed in SR studies is the impracticality of using deeper and more complex structures in real-time applications, and the inability of models built on common datasets to deliver the expected performance when enhancing images for solving real engineering problems. For the former, the performance of lightweight network architectures should be increased. For the latter, approaches tailored to the specific problem should be introduced. Among remotely sensed (RS) images, those that have scarcely been evaluated in SR studies are historical aerial photographs (HAP).
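As an illustration of the linear-learning starting point mentioned above, a minimal Keras sketch of the three-layer SRCNN design follows; the 9-1-5 kernel layout is the classic one, while the optimizer and training details are illustrative assumptions.

```python
import tensorflow as tf

def build_srcnn(channels=1):
    """Three-layer SRCNN: patch extraction, non-linear mapping,
    reconstruction -- the classic 9-1-5 kernel layout."""
    inputs = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2D(channels, 5, padding="same")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")   # MSE against the HR target
    return model

# Training pairs are bicubically upsampled LR patches vs. HR patches:
# model.fit(lr_patches, hr_patches, epochs=..., batch_size=...)
```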
In addition to the negative effects encountered when enhancing RS images in general, HAPs have further constraints: information losses during the conversion of printed copies to digital copies, data acquisition hardware limited by the technology of the era, and the lack of spectral bands and color information. Since HAPs play a crucial role in solving present-day problems that are rooted in the past, they also need to be improved with SR techniques. This thesis aims to enhance the spatial quality of grayscale HAPs with a DL-based SR model. In this context, approaches are proposed regarding the content and the structure of the dataset. Orthophotos of different years and resolutions obtained from the General Directorate of Mapping were used as the primary data source: from 1954 with a resolution of 30 cm, from 1968 with resolutions of 40 cm and 70 cm, from 1982 with a resolution of 10 cm, and from 1993 with a resolution of 40 cm. In the approach to dataset content, images of residential areas, farmland, forested areas, and bare land classes were extracted separately from the orthophotos to create the datasets. DL-based SR models cannot be used directly on HAPs because they are built on multi-spectral images. To overcome this limitation, artificial 3-band images were created by duplicating the same band twice. Although the single-band image is numerically converted to a three-band image, there is no change in content. To minimize this limitation, images of different resolutions from different years covering the same regions were used. This approach, which can be called imitating the multi-spectral image, did not include images containing three truly different spectral bands in the training, but made it appear as if different spectral bands of the same image were included separately. Another limitation, the lack of color information due to the grayscale nature of HAPs, was minimized by using images with a wide range of intensities. Since different intensity values provide different grayscale tones, maximum use was made of intensity values that differentiate objects resembling each other both within the same category and across different categories. A further limitation of HAPs is that the LR-HR image pairs are insufficient in content, which was overcome by using larger images: depending on the years from which the data were obtained, only a limited number of classes are present, and at larger image sizes the convolution filters were able to gather information from images containing more diversity. The proposed approach for the dataset structure is based on the hierarchy of photo interpretation elements, which is expressed in levels. The first level involves color and tone information, which are most pronounced in the bare land and forest areas found in the orthophotos. The second level includes size, shape, and texture; residential areas are the group that reflects these elements the most. The third level includes patterns, with farmland being the group that best reflects this element. Within this framework, the dataset is structured with the 1st level consisting of bare land and forest areas, the 2nd level of residential areas, and the 3rd level of farmland.
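The band-duplication step described above amounts to a one-liner; a minimal sketch, with a random array standing in for a loaded photograph:

```python
import numpy as np

# Stand-in for a loaded single-band historical aerial photograph (H, W).
gray = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

# Duplicate the band twice: the artificial 3-band image expected by
# SR models built on multi-spectral data; the content itself is unchanged.
rgb_like = np.repeat(gray[..., np.newaxis], 3, axis=-1)    # (H, W, 3)
assert rgb_like.shape == (512, 512, 3)
```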
The 1993 image was also used in the approach to the dataset structure. Each of the three datasets was used to train the SRCNN model separately. Two different methodologies were used to obtain the final image from the separately trained datasets. In the first methodology, the final image was created by averaging the three enhanced images. In the second methodology, each enhanced image was divided into pieces of equal size, a reference-free image quality metric was calculated for each piece, and the final image was created by concatenating, for each position, the piece for which the quality metric gave the best result. The approaches to both dataset content and dataset structure were evaluated with reference-based image quality metrics as well as visual interpretation. In the content-based approach, pixel-based metrics and structural-similarity-based metrics demonstrated positive progress. Evaluations made through visual interpretation also yielded results consistent with the image quality metrics. This approach was also effective in reducing the softening effect on the output image. In the structure-based approach, creating the final image based on the reference-free image quality metric gave better results. However, the selection of the better image pieces requires more advanced image processing techniques.
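The two fusion methodologies might be sketched as follows; np.var is only a stand-in for the unspecified reference-free quality metric, and the tile size is illustrative.

```python
import numpy as np

def fuse_average(images):
    """Methodology 1: pixel-wise mean of the separately enhanced images."""
    return np.mean(images, axis=0)

def fuse_best_tiles(images, tile=64, score=np.var):
    """Methodology 2: split each enhanced image into equal tiles, score
    each tile with a reference-free quality measure, and keep, for each
    position, the best-scoring tile. np.var is only a placeholder for
    the metric actually used in the thesis."""
    h, w = images[0].shape
    out = np.empty((h, w))
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            tiles = [img[r:r + tile, c:c + tile] for img in images]
            out[r:r + tile, c:c + tile] = max(tiles, key=score)
    return out
```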
-
Assessment of global gravity models in coastal zones: A case study using astrogeodetic vertical deflections in İstanbul (Fen Bilimleri Enstitüsü, 2020) Albayrak, Müge ; Özlüdemir, Mustafa Tevfik ; 619803 ; Geomatik Mühendisliği Ana Bilim Dalı
Astrogeodetic vertical deflections (VDs) provide valuable information about the structure of Earth's gravity field. For this reason, astrogeodetic VD observations are essential gravity field observables. Several types of astrogeodetic observational instruments have been used to obtain astrogeodetic VD components. Currently, modern imaging instruments such as the Digital Zenith Camera System (DZCS) or the total station (TS)-based QDaedalus system, which are operated at field stations at night, are used to observe astronomical coordinates (astronomical latitude [Φ] and longitude [Λ]). Global Navigation Satellite System (GNSS) receivers located at the same benchmark (BM) provide geocentric geodetic coordinates (geodetic latitude [φ] and longitude [λ]). From these, the North-South (ξ = Φ − φ) and East-West (η = (Λ − λ) cos φ) components of the VDs can be calculated. This thesis aims to introduce a new astrogeodetic VD data set, collected using the QDaedalus system in Istanbul, Turkey, in order to investigate the quality of the Global Gravity Model plus (GGMplus) gravity functional maps and the Earth Gravitational Model 2008 (EGM2008). To establish the Istanbul Astrogeodetic Network (IAN), 30 BMs were selected out of the 1183 BMs that are part of both the Istanbul GPS Triangulation Network (IGTN) and the Istanbul Levelling Network (ILN). While the IGTN provides geodetic coordinates and ellipsoidal heights, the ILN provides orthometric heights. Before establishing the IAN, the ellipsoidal heights from the IGTN, the orthometric heights from the ILN, and newly-collected valley cross levelling (VCL) data were used to calculate a new geoid model for Istanbul using soft computing techniques, including the adaptive-network-based fuzzy inference system (ANFIS) and artificial neural networks (ANNs). The aim of this calculation is to show the current status of the Istanbul geodetic geoid. After the calculation of the Istanbul geoid, which is very weak in coastal and mountainous areas, the IAN was established. The first astrogeodetic VD observation campaign in Istanbul was carried out using the Leica TCRM1101 TS-integrated QDaedalus system. The measured VDs are unique in that not only were they measured for the first time in Istanbul, but they also form Turkey's first dense astrogeodetic network. A total of 21 of the 30 BMs are located in the coastal zone, allowing us to investigate the quality of global gravity models along the coast of Istanbul. Preliminary steps were required before the QDaedalus system could be used in the IAN, in order to investigate the precision and accuracy of the system. One such activity was to test the QDaedalus system at the same BM several nights in a row to determine the precision of the system. For this thesis, these test observations were carried out at a control site at the Technical University of Munich (TUM) test station, at the Istanbul Technical University (ITU) test station, and at six densely-spaced pillars of the geodetic control network at the Geodetic Observatory Wettzell (GOW). The accuracy also had to be established; in this thesis it was determined by comparing the Hannover DZCS TZK2-D VD results at 10 field stations located in the Munich region to independently observed VD data from the QDaedalus.
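The component formulas above reduce to a few lines of code; a minimal sketch, with illustrative coordinates rather than actual IAN benchmarks:

```python
import numpy as np

def vd_components(ast_lat, ast_lon, geo_lat, geo_lon):
    """Vertical deflection components from astronomical (Phi, Lambda)
    and geodetic (phi, lambda) coordinates, all in decimal degrees;
    xi (North-South) and eta (East-West) are returned in arcseconds."""
    xi = (ast_lat - geo_lat) * 3600.0
    eta = (ast_lon - geo_lon) * 3600.0 * np.cos(np.radians(geo_lat))
    return xi, eta

# e.g. a benchmark where the astronomical position differs by ~1"
xi, eta = vd_components(41.0002780, 29.0001390, 41.0000000, 29.0000000)
```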
The ITU test station was also used as the test station of the Turkish DZCS, the Astrogeodetic Camera System 2 (ACSYS2), to determine the precision of this DZCS by repeated observations. Comparing the VD results of the QDaedalus and the ACSYS2 at the ITU test station allowed us to determine the accuracy of the ACSYS2. The initial test observations with the Leica TCRM1101 TS-integrated QDaedalus system showed that it is capable of producing highly accurate VD data (~0.20"). After these satisfactory results were established, the astrogeodetic VDs in the IAN were measured in follow-on campaigns. The standard deviations (SDs) for the IAN are approximately 0.20" for both the North-South (ξ) and East-West (η) components. This new VD data set was compared with modelled VDs from the GGMplus gravity functional maps and EGM2008. The differences between the VDs from the QDaedalus and those from GGMplus and EGM2008 tend to increase towards the coastlines, where discrepancies of several arcseconds in amplitude between the observed and modelled values are encountered. We interpret these spurious differences as weaknesses in the modelled VDs along the Istanbul coastlines, most likely reflecting increased errors in the altimetry-derived marine gravity field on which the GGMplus model depends (via EGM2008 and the Danish National Space Center [DNSC2007] model). The central finding of this thesis is that astrogeodetic VDs are valuable tools for independently investigating the quality of coastal-zone gravity data sets and gravity field products. The new VD data set is useful for the quality assessment of future EGMs, such as EGM2020. The results and findings presented in this thesis were supported by grants and scholarships from several funding and research support sources. The Turkish DZCS modernization process, the determination of the precision and accuracy of the system, and the IAN fieldwork with the QDaedalus system were supported by the Scientific and Technological Research Council of Turkey (TUBITAK) project with grant number 115Y237. The Leica TDA5005 TS-integrated QDaedalus system test observations at TUM were supported by a German Academic Exchange Service (DAAD) short-term grant. The Leica TCRM1101 TS-integrated QDaedalus system's test observations at TUM and the Munich region observations were supported by a TUBITAK BIDEB 2214-A scholarship. The GOW observations were supported by the Institute of Astronomical and Physical Geodesy (IAPG) at TUM and by GOW. Finally, the data analysis of the IAN was supported by the Fulbright Foundation.
-
Assessment of the land and sea interaction by using different types of satellite data (Graduate School, 2024-06-03) Kokal Tuzcu, Aylin ; Musaoğlu, Nebiye ; 501192602 ; Geomatics Engineering
Seas are crucial for regulating climate conditions and supporting biodiversity, thereby sustaining the environment that is essential for life. Earth observation satellite images provide rich data sources, enabling long-term monitoring in a cost-effective and time-efficient manner. The Sea of Marmara (Türkiye), a semi-enclosed basin, stands out with its unique hydrodynamic and biogeochemical characteristics and has a significant role in the marine ecosystem. As the Sea of Marmara is surrounded by densely populated provinces, the sea's water quality has become increasingly significant for both the scientific and public communities over the years. In the Sea of Marmara, marine mucilage, a viscous organic substance, was observed in 2021. The presence of thick mucilage blankets of different colors raised public concern due to their potential toxicity and the pathogens that can accumulate with prolonged mucilage presence. The outbreak of marine mucilage significantly impacted the marine ecosystem. Consequently, the presence of marine mucilage is one of the indicators of water quality. While remote sensing data had previously been used to identify mucilage-covered areas in the Sea of Marmara in other case studies, there is a lack of studies determining the areal extents and spectral characteristics of the different colored mucilage types while examining the probable causes using remote sensing data and technologies. This study focuses on the mucilage phenomenon and its possible causes, such as increases in Sea Surface Temperature (SST) and changes in Land Use / Land Cover (LU/LC) in the coastal cities along the sea. It aims to achieve three main objectives: (1) identifying different types of mucilage, (2) detecting SST trends and anomalies, and (3) observing changes in LU/LC by using different types of satellite data. Mucilage varies in type and form; three different types were detected in İzmit Bay (the Sea of Marmara) using Sentinel-2 and Worldview-3 satellite images: white mucilage, yellow mucilage, and brown mucilage. White mucilage, characterized by smaller aggregates and a high reflectance spectrum, especially between 600-900 nm, represents dispersed patterns. Yellow mucilage results from aged aggregates, influenced by wind and currents, and represents accumulated mucilage. Brown mucilage, distinguished by a high organic matter concentration, represents densely accumulated ageing mucilage. The study leveraged the high spatial resolution of these satellites to analyze the areal extents and characteristics of these mucilage types, with Sentinel-2 providing an overview and Worldview-3 offering detailed information. Spectral analysis using the GLCM method and classification with the SVM method differentiated these types, revealing that yellow mucilage often surrounds white mucilage, showing dispersed patterns, while brown mucilage, mostly found along coastlines, is more accurately detected with Worldview-3 due to its higher spatial resolution. The ANOVA analysis revealed significant differences (P-value < 0.05) among these three mucilage types.
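A minimal sketch of the GLCM-plus-SVM pipeline mentioned above is given below; the offsets, texture properties, and kernel are illustrative choices, not the settings used in the study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch):
    """Texture features from a grey-level co-occurrence matrix computed
    on a uint8 image chip; offsets and properties are illustrative."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy")])

# patches: uint8 image chips; labels: white / yellow / brown mucilage
# X = np.array([glcm_features(p) for p in patches])
# clf = SVC(kernel="rbf").fit(X, labels)
```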
The accuracy assessment of the classifications showed the detailed detection capabilities of the high-resolution Worldview-3 (overall accuracy = 0.93) compared to Sentinel-2 (overall accuracy = 0.86) for distinguishing mucilage distributions. Continuously monitoring SST and detecting anomalies play a critical role in assessing the water quality of the sea. One probable cause of the mucilage phenomenon is the increase in SST. Therefore, temporal and spatial variations in SST were investigated and SST anomalies were detected. The spatial variations of the Sea of Marmara's SST were examined in detail using Sentinel-3 and Landsat-8 satellite imagery. Sentinel-3's ability to capture the entire area in a single frame provides a significant advantage for monitoring recent years. Validation with Landsat-8 images provided consistent results, highlighting the detection of cold water influx from the Black Sea through the Istanbul Strait. However, the coarser resolution of the National Oceanic and Atmospheric Administration (NOAA) OISST V2 product limited its effectiveness in distinguishing these spatial SST variations. To assess accuracy, SSTs derived from satellite data were compared against in-situ measurements obtained from the Turkish State Meteorological Service, with RMSE and bias calculated for each dataset, confirming the suitability of all three satellite sources for SST monitoring in the Sea of Marmara. Additionally, a time-series analysis from 1990 to 2021, utilizing the NOAA CDR OISST v02r01 dataset on the GEE platform and applying the STL method, detected the SST trend and anomalies, including an increase in SSTs, particularly in 2020, and variations of approximately 2°C over 32 years. The anomalies, especially those of recent years, were compared with the NOAA OISST V2 anomaly band, highlighting the critical role of the STL method in eliminating the trend effect from the residual component. Due to the dynamism of the highly urbanized coastal cities along the Sea of Marmara, the LU/LC changes, especially between 2018 and 2021, are of great significance. This study evaluates the extent of LU/LC changes in the coastal cities along the sea in the periods 1990-2000, 2000-2006, 2006-2012, 2012-2018, and 2018-2021 by using Coordination of Information on the Environment (CORINE) data, and over the overall inspection period of 1990-2021 by utilizing a CORINE-based methodology and Sentinel-2 images. An increase of 95.21% is observed in artificial surfaces. Water-covered areas expanded by 14%, primarily driven by the construction of new surface water collection systems. Agricultural areas and orchards were successfully conserved. There has been a prominent increase of 4% in forest areas. However, there was a decline of 25% in open spaces, and a 7% decrease in pastures and complex cultivation patterns. Furthermore, the findings provide clear evidence that a 1762 ha expansion of the aforementioned cities' area between 1990 and 2021 was due to the construction of land reclamation areas along the Sea of Marmara coast. This research highlights a practical methodology for monitoring land resources across extensive regions over a prolonged period. In conclusion, this dissertation provides a practical methodology for detecting, monitoring, and paying special attention to the areas that need the most urgent mucilage cleaning. Furthermore, the factors that can cause the mucilage phenomenon were researched.
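The STL-based anomaly detection described above might look like the following sketch with statsmodels; the file name and the 2-sigma threshold are hypothetical placeholders.

```python
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Monthly mean SST for the Sea of Marmara, 1990-2021 (stand-in series).
sst = pd.read_csv("marmara_sst_monthly.csv",       # hypothetical file
                  index_col=0, parse_dates=True).squeeze("columns")

# Seasonal-Trend decomposition using Loess with an annual cycle.
res = STL(sst, period=12).fit()
trend, seasonal, resid = res.trend, res.seasonal, res.resid

# Anomalies free of the long-term trend: flag residuals beyond 2 sigma.
anomalies = resid[abs(resid) > 2 * resid.std()]
```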
A comprehensive SST analysis showed an increasing SST trend between 1990 and 2021. In addition, the temporal LU/LC change in the coastal cities along the sea was examined within the scope of the land/sea interaction, and the increase in artificial surfaces was observed. In this study, the capabilities of different types of satellite data were leveraged to monitor the land and sea interaction, demonstrating the indispensable role of remote sensing data in providing accurate information for environmental sustainability.
-
Development of fragility curves for base station towers and a GIS-based seismic performance analysis application (Lisansüstü Eğitim Enstitüsü, 2024-09-18) Bilginer, Ömer ; Karaman, Himmet ; 501172620 ; Geomatik Mühendisliği
Due to its location, Türkiye lies in a first-degree seismic zone. Throughout its history, the country has frequently been exposed to severe earthquakes, which have caused losses of life and property. Especially after the severe earthquakes of the last few years (2020 Elazığ (Mw 6.8), 2020 Samos (Mw 6.6), 2023 Kahramanmaraş Elbistan (Mw 7.6), and 2023 Kahramanmaraş Pazarcık (Mw 7.7)), earthquakes have become the main item on the agenda for the country's citizens. As citizens have become more conscious of earthquakes, the measures to be taken beforehand have gained further importance for decision-making institutions and mechanisms. It is of great importance for decision makers to estimate, before an earthquake, the damage that may occur in structures. Determining through pre-earthquake analysis the losses that a severe earthquake could cause will help reduce the potential losses of life and property afterwards. A severe earthquake in Istanbul, where one fifth of the country's population lives and which makes the largest contribution to the national economy, would cause losses of life and economic damage with major consequences for the country's economy. Therefore, determining in advance the damage that the earthquake expected in Istanbul would cause will allow decision-making institutions to be prepared. After an earthquake, structures of critical importance (hospitals, power plants, police stations, etc.) must remain undamaged, or remain operational even if damaged. Keeping such structures functional after an earthquake will help minimize the loss of life. In particular, the continuity of the mobile communication network infrastructure is of great importance in the aftermath of an earthquake. Severe earthquakes in the country's past have caused interruptions in the communication network. Damage to the mobile communication network after a severe earthquake creates problems for first aid and search-and-rescue operations, especially in densely populated metropolitan areas. Therefore, the mobile communication network must be operational after an earthquake, and the seismic performance of the Global System for Mobile Communications (GSM) towers, key components of the network, must be determined. Fragility curves are used to determine the damage that may occur in structures and to characterize the probabilistic seismic behavior of a structure. A fragility curve is defined as the probability of exceeding a limit state for a given level of ground shaking. By means of fragility curves, the damage states that may occur in structures after an earthquake can be determined. Geographic information system (GIS)-based estimation software is frequently used in post-earthquake emergency response and recovery efforts. The HAZTURK program is one such estimation tool. HAZTURK was obtained by updating MAEviz, a program developed for the United States, with inventory (buildings, bridges, roads, etc.) and data (digital elevation model, soil type, soil geology, etc.) for Türkiye. Through the analyses carried out in HAZTURK (structural and economic losses, cost-benefit, retrofitting, etc.), the structural, social, and economic losses that may occur after an earthquake can be obtained.
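Fragility curves of the kind defined above are commonly expressed as a lognormal cumulative distribution function; a minimal sketch assuming that standard form, with illustrative median and dispersion values rather than the tower parameters derived in the thesis:

```python
import numpy as np
from scipy.stats import norm

def fragility(im, median, beta):
    """P(damage state is reached or exceeded | intensity measure im),
    in the common lognormal form Phi(ln(im / median) / beta)."""
    return norm.cdf(np.log(im / median) / beta)

# Illustrative parameters for one damage level of a GSM tower.
pga = np.linspace(0.05, 2.0, 50)       # intensity measure, e.g. PGA (g)
p_exceed = fragility(pga, median=0.6, beta=0.5)
```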
The aim of this study is to determine in advance the interruptions that may occur in the mobile communication network after the expected Istanbul earthquake. By identifying the locations of damaged GSM towers after an earthquake, the study aims to assist decision-making institutions in the strategies they develop. For this purpose, fragility curves for the monopole and lattice-type GSM towers located in Istanbul were derived within the scope of the thesis. The derived fragility curves were integrated into the HAZTURK program, a scenario earthquake was applied for Istanbul, and the geographic locations of the damaged GSM towers were determined. By directing mobile or drone base stations to the known locations of the damaged GSM towers, the uninterrupted operation of the mobile communication network can be ensured. As a result, there will be no disruption in inter-agency coordination, first aid, response, and emergency rescue operations after an earthquake, and thanks to rapid response, the rate of loss of life and property will be lower. The study consists of six chapters. The first chapter defines the problem, providing information on past severe earthquakes in Türkiye and the resulting communication interruptions, and presents the aim and scope of the thesis, the study region, and the workflow. The second chapter presents the literature review, covering studies on deriving fragility curves for buildings, lattice-type GSM towers, and monopole-type GSM towers, and on the HAZTURK program. The third chapter gives definitions of fragility curves, the methods used to construct them, pushover analysis, and incremental dynamic analysis, together with the equations used to derive the fragility curves. The fourth chapter describes the data classification system and data format of the HAZTURK program and the analyses that can be performed in it. The fifth chapter presents the numerical application, including the fragility curves of the lattice and monopole towers and the fragility curve graphs obtained for each damage level; these curves were integrated into the HAZTURK program, and the locations of the damaged GSM towers under the scenario earthquake for Istanbul are shown. The sixth chapter evaluates the results obtained within the scope of the study and offers recommendations.
-
Development and implementation of a GIS-supported integrated urban transformation model (Lisansüstü Eğitim Enstitüsü, 2023-08-07) Tunç, Ali ; Yomralıoğlu, Tahsin ; 501142601 ; Geomatik Mühendisliği
In the last three decades of the century, the process of building up urban areas around industrial facilities led, during urbanization, to publicly owned land contributing to various productive processes and generating private outputs, to the city's assets being treated as a collective asset for this purpose, and to policies being produced in this direction. At this point, urban space was presented in city planning as a social product, and the resulting focus caused urban growth and spatial transformation to serve financial development within the proposed concept of the 'modern city'. In the 'modernization' processes of cities, the public-led development of land, the cornerstone of spatial development, offered opportunities for many investments and accelerated real estate development. However, the evolution of the contemporary city into a ground for capital accumulation and class struggle, and the increasing centrality of urban land in capital's global growth strategy, led to a pursuit of profitability in land value. Thus, in 'modern city' planning, spatial transformations with high-end new residential and commercial buildings were carried out in the city's development areas in connection with infrastructure and social amenity areas, and an understanding of urbanism that moved away from the real needs of the city and from planning principles was adopted. This dynamic caused rents and housing costs to rise and led urban life to stratify into an unbalanced structure. These class differences, together with rapid migration from rural areas to cities, triggered the growth of squatter (gecekondu) areas. Over time, these cities rapidly entered a process of zoned development, and different zoning implementation methods were put into practice to meet the need for building plots. The urbanization process in Türkiye began in the 1950s with a period of land readjustment in cities facing growing urban populations and squatter-settlement problems. From the 1980s onwards, due to economic restructuring and globalization, cities unprepared for rapid population growth, above all Istanbul, Ankara, and İzmir, faced the housing problems that drove this urban sprawl. From the standpoint of producing healthy and livable urban space, the need to reorganize urban living areas became a priority, and the solutions produced for the growing housing supply and illegal construction problems of large cities fell far short of solving the problems of the period. 'Urban transformation', a special type of zoning implementation that can be defined as the reorganization of ownership in distorted, dilapidated, disaster- and risk-prone, poorly serviced and low-quality, densely built, legal or non-conforming areas in accordance with new zoning plan data, came onto the agenda as the most effective solution to these problems. Law No. 6306 on the Transformation of Areas under Disaster Risk constitutes the legal basis of transformation implementations aimed at preventing disaster risk in our country. Considered from the planning dimension, the question of where urban transformation will be carried out is the first stage of transformation. It is of great importance, since our country contains a considerable number of risky areas located in the earthquake zone that need to be transformed.
In addition, the existing situation in the area to be designated is a matter that must be handled with care so that the plan to be prepared serves its functional purpose. Given the importance of these issues, decisions on functional areas must first be made at the scale of the city as a whole.
-
Development of a GIS-integrated machine learning-based mass real estate valuation model (Lisansüstü Eğitim Enstitüsü, 2022-10-07) Mete, Muhammed Oğuzhan ; Yomralıoğlu, Tahsin ; 501192606 ; Geomatik Mühendisliği
Land value, one of the basic functions of the global land administration system, is encountered in many ownership-based processes such as planning, taxation, and zoning implementations. In this context, it is very important that real estate value be assessed with objective approaches that comply with international standards. With the advance of technology, the use of intelligent systems such as Geographic Information Systems (GIS), artificial intelligence and machine learning, cloud computing, and Building Information Modeling in valuation applications is increasing, and the value of real estate can be determined quickly and with high accuracy. Unlike the classical methods used in the individual valuation of real estate, such as sales comparison, income, and cost, the mass valuation approach makes it possible to value large numbers of properties over wide areas collectively by taking advantage of information systems. Meanwhile, countries adopting the Land Administration Domain Model (LADM), the ISO standard for the conceptual design of land administration systems, aim to run these processes more effectively by creating country profiles for valuation purposes. Although GIS- and machine learning-based methods stand out in valuation studies, studies in which these two approaches are used in an integrated way are quite limited. Moreover, mass valuation studies concentrate on the physical attributes of properties, such as floor area and number of rooms, while the locational and environmental factors that strongly affect value are not sufficiently analyzed. In this thesis, a hybrid valuation method was developed by integrating GIS and machine learning, with the aim of enriching the valuation data through spatial analyses. First, an LADM-based conceptual model was designed for the United Kingdom study area, and an open-source PostgreSQL/PostGIS database was created in the transition to the physical model. Then, spatial analyses such as proximity, surface, and visibility were carried out with the GIS-supported nominal valuation method, and a nominal land value map was produced. Using the actual housing sales data published under an open license by HM Land Registry of the United Kingdom, a mass valuation study was carried out with various machine learning regression methods such as Linear Regression, Random Forest, XGBoost, and CatBoost. The regression analysis performed before adding the spatial criteria showed that the machine learning models could not reach sufficient accuracy. The nominal scores of the spatial criteria obtained from the GIS analyses were then transferred to the properties in the valuation data through feature enrichment. In the regression analysis performed after adding the spatial criteria, the R2 value improved by about 39% and the MAPE value by about 27%, and the ratio analyses showed that the study reached sufficient accuracy in accordance with mass valuation standards. On the other hand, global regression models take a fixed criterion weight for the whole study area, without considering the spatial autocorrelation and regional importance levels of the criteria, whereas the factors affecting value can vary depending on location and on environmental and socio-economic effects.
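The spatially enriched regression stage described above might be sketched as follows; the file and column names are hypothetical placeholders for the physical attributes and GIS-derived nominal scores, and Random Forest stands in for the wider set of models compared in the thesis.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

# Sales records enriched with GIS-derived nominal scores (the column
# names are illustrative, not the actual attribute schema).
df = pd.read_csv("sales_enriched.csv")                  # hypothetical file
X = df[["floor_area", "rooms", "age",                   # physical attributes
        "proximity_score", "visibility_score", "slope_score"]]  # GIS scores
y = df["price"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
```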
The Nominal Weighted Multivariate Spatial Clustering method was developed to compute spatial autocorrelation and to form value zones based on the properties of all criteria. With this method, five different value zones were identified; local regression models were built in each cluster, increasing valuation accuracy, and zone-specific criterion importances and weight coefficients were obtained. For the criterion importance scores, both permutation-based feature importance and SHAP (SHapley) values, which are grounded in game theory, were calculated. In this way, results were obtained on the direction and degree to which the variables affect value regionally. In artificial intelligence methods, the bias-variance balance is an important indicator of a model's learning character. To examine its generalizability, the method developed with GIS and machine learning was applied, after the United Kingdom, to the provinces of Istanbul and İzmir in Türkiye. The results show that, with the contribution of the spatial variables, the value of residential real estate was also determined with high accuracy for the cities of Istanbul and İzmir. Within the scope of the thesis, a Parametric Cost Modeling approach based on the nominal method was also developed to separate the land and building values of developed properties. In this context, the reconstruction cost of the basic components of the building was modeled and subtracted from the total value, and the land value was compared with the results of the machine learning-based land valuation model. In this way, a valuation approach was developed that can separately express the value of the land underlying a developed property and of the building standing on it. In the final stage of the study, a cloud GIS-based real estate value information portal was developed to share all property data and value maps with users on the web. Data storage and sharing through web services were implemented both with the traditional server-client architecture and with a serverless cloud approach, and the two methods were compared through performance and cost analyses. In this thesis, the end-to-end development of all processes of real estate valuation and real estate management was based on GIS and machine learning, and an interoperable, holistic real estate valuation system was put forward within the framework of the sustainable land management paradigm.
-
Deep learning based road segmentation from multi-source and multi-scale data (Graduate School, 2023-05-12) Öztürk, Ozan ; Şeker, Dursun Zafer ; 501162611 ; Geomatics Engineering
Roads are geographical objects that are the subject of many application areas, such as city planning, traffic management, disaster management, and military interventions. The success of these applications depends on the speed and accuracy with which road information is obtained. Researchers have mostly used satellite and/or aerial photographs as data sources in these studies and have focused on the automatic acquisition of road information. Although successful results have been obtained with artificial intelligence (AI)-based approaches, which have been widely used recently, automatic segmentation of roads from remote sensing data is still considered a difficult and important problem due to their complex and irregular structure. AI has been developed to enable computers to realize human abilities such as reasoning, perception, and problem-solving. The most basic expectation is that AI can overcome problems for which traditional approaches are insufficient. As a recent trend in AI, deep learning (DL) methods establish a more complex relationship with the data and distinguish its hidden features more accurately. DL is data-driven, and the quality, number, and variety of training data directly impact the performance of the models. For this purpose, comprehensive datasets such as MNIST, COCO, and ImageNet have been published. However, the number of datasets containing geographic details is limited compared to others. In addition, datasets containing geographic details can represent only the characteristics of the regions where they were created. Therefore, models trained with these datasets can only distinguish details at the level they can learn from this limited data. It is extremely difficult for such models to effectively predict roads in regions characterized by complex road networks, such as Istanbul. This thesis aims to overcome the data gap in road segmentation studies with DL algorithms, to produce datasets representative of the study region, and finally to use data obtained from different sources together in order to overcome the problems encountered in existing research that uses only optical images. The thesis is divided into five main parts. The introduction provides a general overview of the subject matter, including comprehensive information on current studies and the motivation of this thesis. In the second part, a fast, accurate, and comprehensive road dataset production infrastructure was created using a web map service to overcome data-related problems. For this purpose, it was found appropriate to utilize service providers whose maps can be styled according to user requests. Using the Static API feature of the Google Maps Platform, a data generation program was developed in the Python programming language. In this program, the properties of the mask images corresponding to the satellite images were defined with JavaScript code, and an automatic static map style was created for road segmentation. In addition, using this program, the desired number of images can be generated randomly or as a sequence, at fixed image sizes and within the boundaries of specified test regions. Furthermore, the Google Maps Platform does not provide geographic information about the images.
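The image/mask pair generation described above might look like the following sketch against the Static Maps web service; the styling strings are an illustrative road-mask definition, not the exact style used in the thesis program.

```python
import requests

BASE = "https://maps.googleapis.com/maps/api/staticmap"

def fetch_pair(lat, lon, zoom, key, size="512x512"):
    """Download a satellite tile and its road-mask counterpart; the
    style strings below are an illustrative mask definition."""
    sat = requests.get(BASE, params={
        "center": f"{lat},{lon}", "zoom": zoom, "size": size,
        "maptype": "satellite", "key": key})
    mask = requests.get(BASE, params={
        "center": f"{lat},{lon}", "zoom": zoom, "size": size,
        "maptype": "roadmap", "key": key,
        # white roads on a black background, labels hidden
        "style": ["feature:all|element:labels|visibility:off",
                  "feature:all|element:geometry|color:0x000000",
                  "feature:road|element:geometry|color:0xffffff"]})
    return sat.content, mask.content
```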
To overcome the missing georeferencing noted above, the georeferencing of these satellite images and the corresponding masks was added to the program. The third part of the thesis aims to create an Istanbul road dataset, given the necessity of producing a dataset that represents the characteristics of the region being tested in road segmentation studies. Istanbul's road network is in a state of constant development with an ever-increasing population. As it contains different road types and land use details, it is capable of meeting the data diversity required by DL applications. The changing and evolving structure of Istanbul makes it one of the most important regions to be continuously observed and analyzed. To examine the contributions of different satellite image resolutions and different mask generalization levels in road segmentation studies, images at zoom levels 14, 15, 16, and 17 were generated from Google Maps in this thesis. Consequently, 10000 optical images and road mask images were produced for each zoom level in the test regions in Istanbul. To test the performance of the generated dataset in DL models, the deep residual U-Net architecture was used. Examination of the training metrics of the models' predictions showed that the Istanbul dataset achieved successful results in segmenting road pixels at each zoom level separately. In addition, the DeepGlobe and Massachusetts datasets, which are widely preferred in road segmentation studies, were included in the analysis to test the prediction performance of models trained with datasets generated outside the study region.
-
Deep learning-based building segmentation using high-resolution aerial images (Graduate School, 2022-10-05) Sarıtürk, Batuhan ; Şeker, Dursun Zafer ; 501142612 ; Geomatics Engineering
With the advancements in satellite and remote sensing technologies and the developments in urban areas, building segmentation and extraction from high-resolution aerial images and the generation of building maps have become important and popular research topics. With technological developments, a large number of high-resolution images have become increasingly accessible and convenient data sources. At the same time, due to their ability to image large areas, these aerial images can be very useful for accurate building segmentation and the generation of building maps. As one of the most important and key features of the urban database, buildings are the building blocks of human livelihood. Due to this importance, building maps have a significant role in various geoscience-related applications such as illegal building detection, change detection, population estimation, land use/land cover analysis, disaster management, and topographic and cadastral map production. Nonetheless, obtaining accurate and reliable building maps from high-resolution aerial images is still a challenging task for various reasons, such as complex backgrounds; differences in building size, shape, and color; noisy data; roof type diversity; and many other topological difficulties. Therefore, improving the efficiency and accuracy of building segmentation and extraction from high-resolution aerial images remains a focus and a hot topic among researchers in the field. Over the past years, various methods have been used to achieve automatic building segmentation from aerial images. In earlier studies, traditional image processing methods such as object-based, shadow-based, or edge-based methods were used. The low-level features and metrics used with these methods, such as color, spectrum, length, texture, edge, shadow, and height, can vary under different conditions such as atmospheric state, light, scale, and sensor quality. These methods generally take manually extracted features and apply classifiers or conventional machine learning techniques to achieve building segmentation. However, manually extracting these features is costly, time-consuming, labor-intensive, and requires considerable experience and prior knowledge. Although these methods have made some progress over time, they have serious shortcomings such as low accuracy, low generalization ability, and complex processing. With technological developments and the availability of large datasets, deep learning-based approaches, especially Convolutional Neural Networks (CNN), have gained a lot of attention from researchers and surpass conventional methods in terms of accuracy and efficiency. CNNs have the ability to extract relevant features directly from the input data and make predictions using fully connected layers. Many CNN architectures such as LeNet, AlexNet, VGGNet, GoogleNet, and ResNet have been used over the years. However, CNNs perform regional divisions and use computationally expensive fully connected layers. These patch-based CNNs have achieved exceptional success, but because they rely on small patches around the targets to perform predictions and ignore the relations between them, they are unable to provide accurate integrity and spatial continuity of building features. To improve performance and overcome these problems, Long et al.
proposed Fully Convolutional Networks (FCN). Instead of the fully connected layers of CNNs, FCNs use convolutional layers, which greatly improves prediction accuracy. FCNs output feature maps at the size of the input images and perform pixel-based segmentation through their encoder-decoder structure. However, much of the information is lost in the decoder path because FCNs have just one upsampling layer. Despite their success, FCNs also have limitations, such as computational complexity and a large number of parameters. To overcome these shortcomings, various variants have been proposed over the years, such as SegNet, U-Net, and Feature Pyramid Networks (FPN). These CNN-based approaches have achieved successful results on image segmentation tasks, but they also have bottlenecks. For example, the use of fixed-size convolutions results in a local receptive field: by design, these networks are successful at extracting local context but have a low ability to extract global context. To overcome these shortcomings, approaches such as attention mechanisms, residual connections, and architectures of different depths have been proposed and implemented. The Transformer was first used in natural language processing (NLP) and later applied to computer vision tasks. In 2020, the Vision Transformer (ViT) approach was proposed for computer vision studies and obtained successful results on the ImageNet dataset. CNNs are successful in identifying local features but, due to their structure, are insufficient in identifying global features; Transformers can compensate for these shortcomings through attention mechanisms. ViT-based methods can extract global information but ignore spatially detailed context. In addition, Transformers use all the pixels in vector operations when working with large images, and therefore require large amounts of memory and are computationally inefficient. The main objective of this thesis is to investigate, evaluate, and compare different CNN-based and Transformer-based approaches for building segmentation from high-resolution aerial images, and to propose a modernized CNN approach that addresses the mentioned shortcomings. The thesis is composed of four papers dealing with these objectives. In the first paper, four U-Net-based architectures, shallower and deeper versions of U-Net, were generated to perform building segmentation from high-resolution aerial images and were compared with each other and with the original U-Net. The models were trained and tested on datasets prepared using the Inria Aerial Image Labeling Dataset and the Massachusetts Buildings Dataset. On the Inria test set, the Deeper 1 U-Net architecture provided the highest F1 score (0.79) and IoU score (0.65), followed by the Deeper 2 and U-Net architectures. On the Massachusetts test set, the U-Net architecture provided an F1 score of 0.79 and an IoU score of 0.66, followed by Deeper 2 and Shallower 1. The successful results obtained with the Deeper 1 and Deeper 2 architectures show that deeper architectures can provide better results even when data are limited. Additionally, the Shallower 1 architecture performs not far behind the deeper architectures at a lower computational cost, which makes it useful for geographic applications. 
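Since the comparisons throughout these papers hinge on F1 and IoU, a minimal NumPy sketch of how both are computed from binary prediction and ground-truth masks may help the reader; this is a generic illustration, not code from the thesis.

```python
import numpy as np

def iou_f1(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """IoU and F1 for binary building masks (1 = building, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # correctly labeled building px
    fp = np.logical_and(pred, ~truth).sum()  # false building px
    fn = np.logical_and(~pred, truth).sum()  # missed building px
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return float(iou), float(f1)
```

For binary masks the two scores are linked by F1 = 2*IoU/(1 + IoU), which is why the rankings they induce usually agree.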
In the second paper, U-Net and FPN architectures utilizing four different backbones (ResNet, ResNeXt, SE-ResNeXt, and DenseNet) and an Attention Residual U-Net approach were generated and compared. The publicly available Inria Aerial Image Labeling Dataset and Massachusetts Buildings Dataset were used to train and test the models. The Attention Residual U-Net model achieved the highest F1 score (0.8154), IoU score (0.7102), and test accuracy (94.51%) on the Inria test set. On the Massachusetts test set, the FPN DenseNet-121 model achieved the highest F1 score (0.7565) and IoU score (0.6188), and the Attention Residual U-Net model achieved the highest test accuracy (92.43%). It was observed that FPN with a DenseNet backbone can be a better choice when working with small datasets, whereas the Attention Residual U-Net approach achieved higher success when a sufficiently large dataset was available. In the third paper, a total of twelve CNN-based models (U-Net, FPN, and LinkNet architectures utilizing an EfficientNet-B5 backbone; the original U-Net; SegNet; FCN; and six different Residual U-Net approaches) were generated, evaluated, and compared. The Inria Aerial Image Labeling Dataset was used to train the models, and three datasets (the Inria Aerial Image Labeling Dataset, the Massachusetts Buildings Dataset, and the Syedra Archaeological Site Dataset) were used to evaluate the trained models. On the Inria test set, Residual-2 U-Net achieved the highest F1 and IoU scores, 0.824 and 0.722, respectively. On the Syedra test set, LinkNet-EfficientNet-B5 achieved F1 and IoU scores of 0.336 and 0.246. On the Massachusetts test set, Residual-4 U-Net achieved F1 and IoU scores of 0.394 and 0.259. The evaluation showed that models using residual connections are more successful than models using conventional convolution structures. It was also observed that the LinkNet architecture gave good results on the Syedra test set, which has different characteristics from the other two datasets, and could be a good option for future studies involving archaeological sites. In the fourth paper, a total of ten CNN and Transformer models (the proposed Residual-Inception U-Net (RIU-Net), U-Net, Residual U-Net, Attention Residual U-Net, U-Net-based models implementing Inception, Inception-ResNet, Xception, and MobileNet as backbones, Trans U-Net, and Swin U-Net) were generated, and building segmentation from high-resolution images was carried out. The Massachusetts Buildings Dataset and the Inria Aerial Image Labeling Dataset were used for training and evaluating the models. On the Inria dataset, RIU-Net achieved the highest IoU score, F1 score, and test accuracy (0.6736, 0.7868, and 92.23%, respectively). On the Massachusetts Small dataset, Attention Residual U-Net achieved the highest IoU and F1 scores (0.6218 and 0.7606), and Trans U-Net reached the highest test accuracy (94.26%). On the Massachusetts Large dataset, Residual U-Net accomplished the highest IoU and F1 scores (0.6165 and 0.7565), and Attention Residual U-Net attained the highest test accuracy (93.81%). The results showed that the RIU-Net approach was significantly more successful on the Inria dataset than the other models, while on the Massachusetts datasets Residual U-Net, Attention Residual U-Net, and Trans U-Net gave more successful results.
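As residual connections recur in the best-performing models above, the following PyTorch sketch shows the kind of residual convolution block a residual U-Net encoder stacks; the channel width and layer choices are illustrative assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut, the building block
    of residual U-Net encoders (channel size is illustrative)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The shortcut lets gradients bypass the convolutions, which is
        # what makes deeper variants trainable on limited data.
        return self.act(self.body(x) + x)
```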
-
ÖgeDetermination of spatial distributions of greenhouses using satellite images and object-based image analysis approach(Graduate School, 2023-03-02) Şenel, Gizem ; Göksel, Çiğdem ; Torres Aguilar, Manuel Angel ; 501182620 ; Geomatics EngineeringIn the face of the expected pressure on agricultural production systems from the increasing world population, one of the most suitable options for the sustainable intensification of agricultural production is greenhouse cultivation, which allows an increase in production on existing agricultural lands. Greenhouse activities can, however, cause environmental problems at local and regional scales. Since the primary covering material of greenhouses is plastic, ecological problems are expected in the near future due to its excessive use. Besides, greenhouses may affect the integrity of ecosystems by converting land use and land cover (LULC) into extensive agricultural areas. On the other hand, the economies of many rural regions are supported by greenhouse activities, especially in Mediterranean countries. Moreover, because these structures are exposed to floods, especially under climate change effects, producers face economic and social problems. While all these situations make the production system unsustainable, they also endanger the ecology and economy of the region. Thanks to synoptic data acquisition and high temporal resolution, remote sensing images allow periodic monitoring of the agricultural sector. Considering both the positive outcomes and the adverse effects of greenhouses, determining greenhouse areas using remote sensing images is essential for better management strategies; monitoring through remote sensing images is thus the most suitable approach to obtain information about the effects of greenhouses on climate and environment and to improve their economic output. Within the scope of this thesis, different questions were addressed using the object-based image analysis (OBIA) approach, which the literature reports to give better results for greenhouse detection. The OBIA approach consists of three main stages, image segmentation, feature extraction, and image or object classification, and these stages form the structure of this thesis. In the image segmentation step, the first step of OBIA, answers were sought to two crucial questions for the segmentation of plastic-covered greenhouses (PCG). The first question is which of the supervised segmentation quality assessment metrics performs better in evaluating PCG segmentation. An experimental design was formed in which segmentation metrics were evaluated together with interpreter evaluations. At this stage, sixteen different datasets were used, covering different spatial resolutions (medium and high), seasons (summer and winter), study areas (Almería (Spain) and Antalya (Turkey)), and reflection storage scales (RSS) (16Bit and Percent). Various segmentation outputs were created using the multiresolution segmentation (MRS) algorithm. Six interpreters evaluated these outputs, and their evaluations were compared with eight segmentation quality metrics. As a result, Modified Euclidean Distance 2 (MED2) was found to be the most successful metric for evaluating PCG segmentation, whereas Fitness and the F-metric failed to identify the best segmentation output compared to the other metrics investigated. 
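For orientation, the sketch below illustrates the ED2 family of supervised segmentation metrics (after Liu et al., 2012) from which MED2 is derived: a geometric term (potential segmentation error, PSE) is combined with an arithmetic term (number-of-segments ratio, NSR). The correspondence rule and the exact MED2 modification used in the thesis differ, so treat this shapely-based code as an assumption-laden illustration only.

```python
from shapely.geometry import Polygon

def ed2(references: list[Polygon], segments: list[Polygon]) -> float:
    """Sketch of ED2 = sqrt(PSE**2 + NSR**2). A segment 'corresponds' to a
    reference here if their overlap exceeds half the segment's area (a
    common, but not the only possible, correspondence rule)."""
    corresponding = set()
    under_seg_area = 0.0
    for ref in references:
        for i, seg in enumerate(segments):
            if ref.intersection(seg).area > 0.5 * seg.area:
                corresponding.add(i)
                under_seg_area += seg.difference(ref).area  # spill-over
    pse = under_seg_area / sum(r.area for r in references)
    nsr = abs(len(references) - len(corresponding)) / len(references)
    return (pse ** 2 + nsr ** 2) ** 0.5  # 0 = geometrically ideal result
```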
In addition, the effects of different factors on the visual interpretation results were analyzed statistically. It was revealed that the RSS is an essential factor in visual interpretation: when evaluating the segmentation outputs created using the Percent format, the interpreters were more in agreement and interpreted this data type more efficiently. In the second part of the segmentation phase, the extent to which these factors and their interactions affect greenhouse segmentation was investigated. Approximately 4,000 segmentation outputs were produced from the sixteen datasets, and MED2 values were calculated. For each shape parameter in each dataset, the values reaching the best MED2 value were determined and statistically tested by analysis of variance (ANOVA). The segmentation outputs showed that the optimal scale parameters clustered close to each other in the Percent format and took values in a broader range in the 16Bit format, indicating that it is easier to determine the most appropriate segmentation outputs from the Percent format. In addition, the statistical tests showed that the segmentation accuracy calculated from the different RSS formats depends directly on the shape parameter: segmentation accuracy increases with decreasing shape parameters in the Percent format, whereas the opposite holds in the 16Bit format. This revealed that shape parameter selection is critical depending on the RSS. In summary, the Percent format is the appropriate data format for PCG segmentation with the MRS algorithm, and low shape parameters should be preferred in the Percent format. In the second stage of the thesis, it was hypothesized that different feature space evaluation methods and feature space dimensions affect classification in terms of accuracy and time. Based on this hypothesis, 128 features were obtained from Sentinel-2 images of the Almería and Antalya study areas, and classification performance was evaluated with the random forest (RF) algorithm under different feature space evaluation methods. This evaluation showed that reducing the feature space directly affects accuracy and, moreover, significantly reduces the time required to run the classification algorithm. Among the examined feature space evaluation algorithms, RF and Recursive Feature Elimination with RF (RFE-RF) were therefore concluded to be more suitable in terms of both classification accuracy and runtime. Moreover, these algorithms were found to be less dependent on feature space variation in terms of classification accuracy, while reducing the feature space significantly reduces the computation time. In addition, among the 128 features obtained from the segments, including spectral, textural, and geometric features and spectral indices, the Plastic GreenHouse Index (PGHI) and the Normalized Difference Vegetation Index (NDVI) were the most relevant features for PCG mapping according to the RF and RFE-RF methods. The main outputs of this stage are therefore the necessity of including indices such as PGHI and NDVI in the feature space and of applying a feature space evaluation method such as RF or RFE-RF to reduce computation time. 
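The RFE-RF procedure named above is available off the shelf in scikit-learn; the sketch below shows a minimal usage pattern with placeholder arrays standing in for the 128 object features (the data, retained feature count, and step size are illustrative, not the thesis settings).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# X: (n_segments, 128) object features; y: 1 = greenhouse, 0 = other.
# Both arrays are synthetic placeholders, not the thesis data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))
y = rng.integers(0, 2, size=500)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=20, step=10)  # drop 10 per round
selector.fit(X, y)
kept = np.flatnonzero(selector.support_)  # indices of retained features
print(kept)
```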
In the third and final stage of the thesis, the effectiveness of ensemble learning algorithms for PCG classification was tested. According to the experimental results, the categorical boosting (CatBoost), RF, and support vector machine (SVM) algorithms performed well in both study areas (Almería and Antalya), but the implementation time required for CatBoost and SVM was higher than that of all the other algorithms studied. The k-nearest neighbor (KNN) and AdaBoost algorithms achieved lower classification performance in both study areas. In addition to these algorithms, the light gradient boosting machine (LightGBM) algorithm achieved an F1 score of over 90% in both study areas in a short time. In summary, considering computation time and classification accuracy, RF and LightGBM are the two front-runner algorithms. In general, within the scope of this thesis, answers to the questions encountered in the three steps of OBIA were sought to reach the best PCG determination approach. The determination of greenhouses from satellite images was carried out in two essential study areas in the Mediterranean Basin where greenhouse activities are carried out intensively. Although these results belong to selected test sites, they provide an important basis for generalizing the findings at larger scales. Determining the spatial distribution of PCG, in order to minimize their negative effects on the environment and increase their economic returns, will make an important contribution to planners and decision-makers in achieving sustainable agriculture goals.
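Because this stage weighs accuracy against computation time, a small benchmarking sketch of the two front-runner classifiers may be useful; all data here are synthetic placeholders, and the hyperparameters are illustrative rather than the thesis settings.

```python
import time
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Placeholder segment features/labels standing in for the thesis data.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 30))
y = rng.integers(0, 2, size=2000)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              LGBMClassifier(n_estimators=200, random_state=0)):
    t0 = time.perf_counter()
    model.fit(Xtr, ytr)               # training dominates the runtime
    dt = time.perf_counter() - t0
    f1 = f1_score(yte, model.predict(Xte))
    print(type(model).__name__, f"train={dt:.2f}s F1={f1:.3f}")
```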
-
ÖgeDüşey mülkiyet haklarının 3-boyutlu yönetimi için yapı bilgi modellemesi (Bim)-tabanlı bütünleşik bir modelin geliştirilmesi ve üç-parçalı döngü yaklaşımı(Lisansüstü Eğitim Enstitüsü, 2022-08-01) Güler, Doğuş ; Yomralıoğlu, Tahsin ; 501162614 ; Geomatik MühendisliğiEffective land management is vital for ensuring the sustainability of the environment we live in. Correspondingly, strong Land Administration Systems (LAS) are needed to translate effective land management into real practice. These systems are concerned with recording, on a cadastral infrastructure, information on the Rights, Responsibilities, and Restrictions (RRR) that may arise above and below the land surface as components of ownership. On urban land, rapid migration and the resulting rapid population growth have led to the construction of large numbers of buildings. Considering that this transformation has continued up to certain limits of urban expansion from past to present, it has resulted in the construction of multi-storey buildings, and thanks to developing technologies the complexity of these multi-storey buildings increases day by day. Another form of ownership registered within LAS is condominium ownership, which can be established in the vertical direction in buildings. Condominium rights, which can be established over independent units within buildings that are suitable for use on their own, are currently among the most important registered property rights. Although LAS are implemented worldwide predominantly on the basis of two-dimensional (2D) spatial data, they also deal with the registration of rights that, as defined in legislation, inherently have a third dimension. However, past socio-economic, environmental, and legal developments show that LAS can be insufficient to cope with present-day land management problems. In this context, the prevailing view in the international literature is that LAS need to be developed with the capacity to process and manage three-dimensional (3D) data in the registration processes of property rights. In other words, it is a fact that 2D data are insufficient for the complete registration in the land registry of the fundamental rights subject to condominium ownership in today's rapidly urbanizing environment of multi-storey and complex buildings. The use of 2D representations and annotation notes for the detailed registration of all independent units, common areas, and all kinds of annexes cannot fully reflect the real situation of vertically stratified property rights. Therefore, the vertical rights subject to condominium ownership need to be represented in 3D with the support of information technology. Today, there is a growing trend toward Building Information Modeling (BIM) technology, which is replacing computer-aided design (CAD), especially in the Architecture, Engineering, and Construction (AEC) industry. With BIM technology, detailed 3D models of buildings can be obtained using an object-based modeling approach. In addition, the interoperability of BIM models across different stakeholders and applications is ensured by the open data standard Industry Foundation Classes (IFC), which is also an International Organization for Standardization (ISO) standard. 
For effective land administration practice, the Land Administration Domain Model (LADM), likewise an ISO standard, provides a conceptual model covering land administration activities, stakeholders, spatial objects, and the relationships among them, with the aim of establishing a common basis. In the context of all the above, the main purpose of this thesis is to provide an integrated structure between the LADM and IFC standards so that all rights subject to condominium ownership in vertical property can be modeled in 3D, together with the semantics of cadastral registration, covering both physical building elements and logical spaces. To this end, 3D condominium practices around the world were first examined and the current situation was analyzed. Then, the legislative infrastructure of condominium practice in Turkey was examined in detail, and the requirements for the 3D registration and management of condominium ownership were determined. In light of the information obtained, an integrated model was developed in which appropriate relationships were established between the feature classes of the LADM standard and the entities of the IFC schema. To test the applicability of the model, a BIM model of a sample building was created, and the final IFC model was obtained by enriching its content according to the developed model. In this way, it became possible to model RRRs in a holistic structure in the context of cadastral registration, not only for the legal spaces subject to condominium ownership but also for various building elements. In particular, it was demonstrated that, by reusing the as-built BIM models approved during the occupancy permit stage of the building permitting process, the rights subject to condominium ownership can be described unambiguously and registered in the land registry. In addition to the 3D registration of vertical property rights in buildings, another issue to be considered is the digitalization of public services. There is a need to digitalize and automate the permitting processes through which, at the start of new construction, building projects are checked for compliance with legislation, also taking environmental factors into account. Furthermore, 3D digital city models are needed; they provide a scientific basis for decisions about the built environment, yet keeping them up to date has become quite difficult due to the rapid changes occurring in cities. For this reason, interaction between BIM and Geographic Information System (GIS) based models is inevitable. The common point of all these topics is 3D digital building models. In this context, the thesis proposes a "Three-Part (3P)" cycle vision comprising digital building permitting processes, the updating of 3D city models, and the 3D registration of property rights. Detailed examinations of each part of the 3P cycle were carried out, the potential for implementing the cycle in Turkey was demonstrated, and the evaluation results were presented.
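As one possible reading of the LADM-IFC integration, legal spaces can be inspected programmatically in an IFC model; the sketch below uses the open-source ifcopenshell library to list IfcSpace entities and read a property set. The file path and the "LADM_RRR" property set with its "RightType" property are hypothetical names for illustration, not the thesis's actual schema mapping.

```python
import ifcopenshell
import ifcopenshell.util.element

# Open an as-built IFC model of a condominium building; the path is a
# placeholder, and the "LADM_RRR" property set name is hypothetical.
model = ifcopenshell.open("building_asbuilt.ifc")

for space in model.by_type("IfcSpace"):
    psets = ifcopenshell.util.element.get_psets(space)
    rrr = psets.get("LADM_RRR", {})  # LADM-derived attributes, if attached
    print(space.GlobalId, space.LongName, rrr.get("RightType"))
```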
-
ÖgeEnhancing UCAV operations with AI-driven point cloud semantic segmentation for precision gimbal targeting in defense industry(Graduate School, 2024-12-20) Bozkurt, Salih ; Duran, Zaide ; 501182617 ; Geomatics EngineeringThe widespread integration of technological advancements has fundamentally transformed the field of artificial intelligence, significantly enhancing the reliability of AI model outputs. This progress has led to the widespread use of artificial intelligence in various sectors, including automotive, robotics, healthcare, space technologies, and the defense industry. In aerial combat in particular, target identification and engagement operations still rely heavily on human operator intervention. Within the scope of this thesis, the aim is to automate the complex and error-prone laser designation process using 3D point clouds and deep learning algorithms. The primary dataset of the study consists of 3D point clouds obtained by processing gimbal images of the Bayraktar AKINCI Unmanned Combat Aerial Vehicle (UCAV) using photogrammetric methods. For initial evaluations and parameter optimization, the DublinCity 3D LiDAR point cloud data was used. The DublinCity dataset was created using airborne LiDAR in Dublin, the capital of Ireland, in 2015. It is hierarchically organized into four main categories, including buildings, vegetation, ground, and undefined, which are subdivided into 13 classes with subcategories such as windows, doors, and trees. For this study, we used the PointNet++ and RandLA-Net algorithms, two widely recognized approaches for point cloud segmentation. Both algorithms are designed to process point clouds directly and deliver segmentation results. However, a key difference lies in their handling of data: while RandLA-Net can incorporate both geometric and color information, PointNet++ traditionally relies only on geometric features. To address this limitation, we modified the PointNet++ algorithm to utilize color attributes, allowing a more comprehensive analysis. This enhancement represents a significant contribution of our research. By comparing the improved PointNet++ with RandLA-Net, we observed noticeable differences in their performance, particularly in how they handle datasets with combined geometric and color information. In tests using only geometric features, the RandLA-Net algorithm achieved an accuracy of approximately 94%; when color information was also provided, the accuracy increased significantly, to approximately 97%. In tests with the PointNet++ algorithm, an accuracy of 94% was observed when only geometric features were used, increasing to approximately 96% when the algorithm was enriched with color information. The results of this research highlight two main contributions. First, combining point clouds produced from different sources with AI-driven decision-making processes provides substantial benefits for the defense industry in aerial combat activities such as target identification, monitoring, and neutralization. Second, modifying the PointNet++ algorithm, which originally relied exclusively on geometric data, to include color information has greatly enhanced the accuracy of learning and decision-making in 3D point cloud processing tasks. 
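A small sketch of the data-preparation step implied by the color-enriched variant: stacking normalized XYZ coordinates with RGB values into a single per-point feature array of the kind an RGB-aware PointNet++/RandLA-Net pipeline consumes. The normalization choices are illustrative assumptions, not the thesis pipeline.

```python
import numpy as np

def make_point_features(xyz: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Combine geometry and color into a (N, 6) per-point feature array."""
    xyz = xyz - xyz.mean(axis=0)           # center the cloud
    xyz = xyz / np.abs(xyz).max()          # scale into a unit cube
    rgb = rgb.astype(np.float32) / 255.0   # 8-bit colors -> [0, 1]
    return np.hstack([xyz, rgb]).astype(np.float32)
```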
This research seeks to offer a dependable and effective approach to reducing human involvement in laser designation procedures, particularly by utilizing data gathered from the Bayraktar AKINCI UCAV. Future research will aim to enhance precision by incorporating higher-resolution point cloud data and examining different deep learning algorithms; analyzing data gathered from advanced RADAR systems and enhanced photogrammetric point clouds will also be a key emphasis. Additionally, this thesis features an extensive preprocessing stage in which the DublinCity dataset and the photogrammetrically produced Bayraktar AKINCI UCAV data were organized into separate categories. The intricate structure of the DublinCity dataset, consisting of 13 categories, provided a varied basis for assessing segmentation algorithms. In this process, distinguishing highly specific classes (e.g., windows, doors) created an opportunity to examine the connection between the level of detail and algorithm precision. This part of the research highlights the difficulties and constraints associated with handling high-resolution, intricate data. Apart from defense applications, laser, RADAR, and gimbal systems offer numerous potential uses, and the techniques and strategies developed in this research can be readily adapted to other fields. For instance, in disaster management, these systems could be employed for automatic debris identification and for directing rescue teams during natural disasters such as earthquakes or floods. In urban design and infrastructure management, these technologies can greatly enhance procedures such as 3D city visualization and the automated extraction of building inventories. In agriculture and forestry, they might be used to improve soil productivity, identify damaging structures, and track plant health. Likewise, in the preservation of cultural heritage, these systems can aid in the 3D mapping of archaeological sites, the identification of relics, and the comprehensive documentation of historical items. In summary, this thesis illustrates the successful use of deep learning algorithms in automating laser targeting processes for aerial combat applications involving UAVs within the defense sector. These studies not only improve the effectiveness of current technologies but also establish a foundation for creating autonomous systems. The findings of this research have significant theoretical and practical implications. Future developments in this area will seek to integrate more intricate datasets, including those from radar technologies, and to evaluate different algorithms to foster further innovation.
-
ÖgeEstimating forest parameters using point cloud data(Graduate School, 2022-08-05) Arslan, Adil Enis ; Erten, Esra ; 501112601 ; Geomatics EngineeringThe spatial distributions and statistical properties of stand attributes must be understood in order to characterize the dynamic forest ecosystem. In this context, dendrometry is an invaluable tool in forestry whenever a quantitative characterization of forests or individual trees is required. Diameter at Breast Height (DBH) and Tree Height (TH) are two significant dendrometric parameters and are heavily correlated with the Leaf Area Index. The Leaf Area Index (LAI) is a dimensionless parameter that plays a significant role in forestry applications and in characterizing the structural vegetation of the canopy in general. With conventional methods, LAI can be calculated through destructive sample collection or with a relatively new non-destructive method called hemispherical photography. Conventional measurements of DBH and TH, although not destructive, are also very time- and labor-consuming. With the adoption of modern surveying instruments in forestry, obtaining forest stand parameters for large areas in a short time has recently become feasible through the use of LiDAR technology. Although promising, LiDAR data evaluation techniques for calculating forest stand parameters are still under development. This thesis aims to make a comparative evaluation of existing novel techniques against newly proposed methods for estimating the forest stand parameters DBH, TH, and LAI. For this purpose, Point Cloud Data (PCD) from different sources, such as Airborne LiDAR Systems (ALS), Terrestrial Laser Scanning (TLS), and Unmanned Aerial Vehicles (UAV), were evaluated. These data sources were chosen because they are widely preferred for forestry operations and their results can be quantitatively compared against conventional measurements. In-situ data was collected to assess the LAI, DBH, and TH estimates from PCD across varying sample locations covering deciduous, coniferous, and mixed forest types. The sampling zone spans from the northern parts of the Istanbul urban forest area to a research forest under the supervision of Istanbul University-Cerrahpasa, in Istanbul, Turkey. In-situ measurements were accepted as ground truth, and the results obtained from the PCD evaluation were compared against them in terms of overall error statistics as well as computational cost and data acquisition challenges. The results show that the segmentation and removal of wood material from TLS-based PCD using neural network algorithms and connected component analysis, albeit complex and computationally demanding, holds promise for calculating effective LAI values of large areas in a very short time span. Similarly, the forestry PCD obtained by TLS performed best among the PCD sources for both DBH and TH estimation.
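One common way TLS point clouds yield DBH, shown here as a hedged illustration rather than the thesis's exact method, is to slice the stem at breast height (1.3 m) and fit a circle to the slice by algebraic least squares.

```python
import numpy as np

def fit_dbh(slice_xy: np.ndarray) -> float:
    """Estimate DBH (m) from the XY points of a stem slice cut at 1.3 m.
    Fits (x - a)^2 + (y - b)^2 = r^2 by linear least squares using the
    substitution x^2 + y^2 = 2ax + 2by + c, with c = r^2 - a^2 - b^2."""
    x, y = slice_xy[:, 0], slice_xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    z = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return 2.0 * r  # diameter in the units of the input coordinates
```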
-
ÖgeExploring the cognitive processes of map users employing eye tracking and EEG(Fen Bilimleri Enstitüsü, 2020) Keskin, Merve ; Doğru, Ahmet Özgür ; De Maeyer, Philippe ; 656904 ; Geomatik Mühendisliği Ana Bilim DalıUnderstanding how our brain copes with complex visual information is a challenge for both cognitive psychology and cartography. If we are to design better, more usable maps, we need better knowledge of the cognitive processes of map users. This thesis aims to contribute to the understanding of the cognitive processes of a group of map users in learning, acquiring, and remembering information presented via digital 2D static maps. To gain insight into users' behavior while they interact with maps, eye tracking (ET) and electroencephalography (EEG) were employed as synchronized data collection methods, as both are non-invasive and capture direct responses of cognitive activity. The preliminary goal of the research is therefore to evaluate the use of ET and EEG for cartographic usability and spatial cognition research, considering the technical and methodological aspects of this synchronization as well as the limitations, possibilities, and contribution of EEG in the domain of cartography. The technical concerns refer to (i) the synchronization of the ET and EEG recording systems, their accuracy and quality, and (ii) numerous processing steps (i.e., preprocessing, the alignment of the collected ET and EEG data, removal of non-cerebral activity from the EEG data, segmentation, and re-referencing). The methodological issues lie in many aspects of the experimental design and set-up, including identifying the research goals, participants, task and stimuli, psychological measures, evaluation methods, and possible analyses of the collected data. These issues are pinpointed with respect to the existing literature, knowledge obtained from domain experts, and hands-on experience in the neuro-lab. The fundamental objective of the thesis is to investigate the traditional expert-novice paradigm, expertise being one of the individual characteristics that influence users' performance in map-learning tasks. Since maps are widely used by both experts and novices, studying their differences in spatial cognition enables us to determine how to use this input to enhance map design by leveraging map users' cognitive abilities. Our main research questions are therefore: 'Do the map-learning strategies of experts and novices differ? How does cognitive load vary between experts and novices?' In this context, we conducted two mixed-methods user experiments focusing on the cognitive strategies of a group of expert and novice map users and investigated their spatial memory capabilities through cognitive load measurements. The first experiment had a simple, exploratory design, since we first had to ensure that the ET-EEG synchronization was of sufficient quality to explore users' cognitive behavior toward map stimuli. Accordingly, it consisted of single trials in which participants were instructed to study the main structuring elements of a map stimulus (i.e., roads, settlements, hydrography, and green areas) without any time constraints in order to draw a sketch map afterwards. On the one hand, the performance of the participants was assessed based on the order in which the objects were drawn on the digital sketch maps and the influence of a subset of visual variables (i.e., presence & location, size, shape, color). 
On the other hand, trial durations and eye tracking statistics, such as the average fixation duration and the number of fixations per second, were compared. Moreover, selected AoIs representing the main structuring elements of the map stimulus were explored to gain deeper insight into the visual behavior of map users. Based on the evaluation of drawing order, we observed that experts and males drew roads first, whereas novices and females focused more on hydrographic objects. According to the assessment of the drawn elements, no significant differences emerged between experts and novices, nor between females and males, in the retrieval of spatial information presented on 2D maps with a simple design and content. The differences in trial durations between novices and experts were not statistically significant for either studying or drawing; likewise, no significant difference occurred between female and male participants. The eye tracking metrics supported these findings: no significant difference was found in the average fixation duration between experts and novices or between females and males, and similarly no significant differences were found in the mean number of fixations. Furthermore, based on the time to first fixation, dwell time, fixation count, number of fixations per second, and average fixation duration for the selected AoIs, the larger AoIs were gazed at earliest, and the dwell times for such objects were much longer than those for other AoIs. The linear features were easier to learn and remember, although viewers did not pay much attention to them. Longer average fixation durations for a specific AoI indicated a higher chance of remembering that object; the objects absent from the sketch maps received the shortest fixation durations during the study phase. However, longer fixation durations may also indicate participants' difficulty in recognizing the information in the map stimulus. Regarding the EEG frontal alpha asymmetry calculations, both user groups showed greater relative right-frontal activation, which is associated with lower attention and focus performance; the difference between experts and novices was not significant, in line with the eye tracking results. In contrast, alpha power averaged across all electrodes showed that the novices exhibited significantly lower alpha power, indicating a higher cognitive load. In Experiment 2, by contrast, a more complex and structured approach was followed, informed by the previous experiment and collaboration with domain experts. This experiment used a larger number of stimuli to study the effect of task difficulty (i.e., easy, moderate, hard) on the retrieval of map-related information. Next to reaction time and success rate, we used fixation- and saccade-related eye tracking metrics (i.e., average fixation duration, number of fixations per second, saccade amplitude, and saccade velocity), and the event-related changes in EEG power spectral density (PSD) for the alpha and theta frequency bands, to identify cognitive load. While the fixation metrics and the qualitative analysis of randomly selected focus/heat maps summarizing the participants' fixation behavior indicated no statistically significant difference between experts and novices, the saccade metrics proved otherwise. EEG power spectrum analysis, on the other side, suggested an increase in theta power (i.e. 
event-related synchronization) and a decrease in alpha power (except for moderate tasks) (i.e., event-related desynchronization) at all difficulty levels of the task for both experts and novices, which is an indicator of cognitive load. Although no significant difference emerged between the two groups, we found a significant difference in overall performance when the participants were classified as good and relatively bad learners. Triangulating the EEG results with the recorded eye tracking data and the qualitative analysis of randomly selected focus maps indeed provided detailed insight into the differences in individuals' cognitive processes during this spatial memory task. The qualitative analysis of the 10 randomly selected focus/heat maps provided a general overview of the participants' attentional behavior toward the map elements of interest and the similarities in their map-learning strategies. However, for measurable results, we selected one map stimulus and drew AoIs around key map elements (i.e., green areas, water bodies, major rivers and roads, road junctions) to analyze the participants' attention distribution using average fixation duration, time to first fixation, and the number of map objects covered within the AoIs. Although the results are preliminary, we found that the eye scans along linear objects and fixates on polygonal objects. The location of map elements influences the participants' gaze behavior more than their size, and the fixation durations within the (relevant) AoIs did not depend on task difficulty. Additionally, our analysis showed that the good learners (GL) experienced the least cognitive load, a finding that supports evaluating participants by classifying them as good and bad learners during usability tests of maps designed for general users with basic map-learning tasks. To increase the understandability and usability of cartographic products, the results of this research can serve as guiding experience in production processes, informing design methods that minimize the factors negatively affecting user perception (e.g., exaggeration, reduced emphasis) and that utilize visualization elements, such as grids, to increase visual extraction.
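For readers unfamiliar with the EEG measures used here, the sketch below computes frontal alpha asymmetry from two frontal channels via Welch power spectral density; the channel pair (F3/F4), sampling rate, and band limits are conventional assumptions, not the thesis's exact processing chain. By the usual inverse relation between alpha power and cortical activation, lower values of this index correspond to relatively greater right-frontal activation.

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(f3: np.ndarray, f4: np.ndarray,
                            fs: float = 256.0) -> float:
    """FAA = ln(alpha power at F4) - ln(alpha power at F3), the standard
    log-difference convention over the 8-13 Hz alpha band."""
    def alpha_power(sig: np.ndarray) -> float:
        freqs, psd = welch(sig, fs=fs, nperseg=int(2 * fs))
        band = (freqs >= 8) & (freqs <= 13)
        return float(np.trapz(psd[band], freqs[band]))  # integrate band
    return float(np.log(alpha_power(f4)) - np.log(alpha_power(f3)))
```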
-
ÖgeGIS-based multi-criteria decision analysis for optimal urban emergency facility planning(Graduate School, 2022-10-13) Nyimbili, Penjani Hopkins ; Erden, Turan ; 501172611 ; Geomatics EngineeringThe growing scale of urban fire risk, especially in megacities of the world such as Istanbul in Turkey, arises largely from the confluence of contemporary developmental and demographic trends, including accelerated urbanization, rising urban populations, migration to cities, and socio-economic factors such as inequality. Increasing urban development pressure brings about an expansion of built-up and urban settlement areas, often without adequate and comprehensive urban planning policies and regulations. As a result of increased human activity and interaction, these places are increasingly exposed to fire risk. To improve decision-making and mitigate fire hazard risk for urban populations, a better comprehension is needed of these relationships and interconnections within the complexities of human systems and urban dynamics functioning across different levels, actors, stakeholders, sectors, and disciplines. Recent advances in the geospatial sciences have therefore prompted emergency planners and managers to demand vast volumes of geographical data in order to make complex decisions, and diverse stakeholders, multidisciplinary teams, and multiple criteria are all involved in these complex decision-making procedures. GIS-based Multi-Criteria Decision Analysis (MCDA) strategies can improve the quality of decision-making by merging spatial data and value judgments to tackle such complex planning issues, which is the fundamental strength of this approach. In this context, fire risk and emergency planning at the spatial scale of the urban environment is a complicated and interrelated decision-making process requiring many factors and transdisciplinary stakeholder interaction. In this PhD thesis, GIS-based MCDA methods are applied to integrate decision-makers' preferences in solving such emergency planning problems, mitigating fire impacts and improving response actions by optimizing the site selection of new urban emergency and fire stations for the case of Istanbul province. The main aim of this thesis is therefore to develop an integrated GIS and MCDA model for effectively planning new urban emergency and fire facilities in Istanbul province, to reduce fire response times to within five minutes. 
In order to achieve the main objective, there are ten sub-objectives, namely: using MCDA methods such as fuzzy AHP, Entropy-AHP, the Best-Worst Method (BWM), and the Decision Making Trial and Evaluation Laboratory (DEMATEL) for model construction, comparison, and validation of the resultant weights; determining the influencing criteria for effective urban emergency facility planning; utilizing the Delphi technique to conduct surveys capturing the preferences of decision-makers (DMs); evaluating the criteria weights based on pairwise comparisons from relevant experts/DMs using the GIS-based MCDA approaches; identifying the most essential criteria for urban emergency facility site selection from the experts' judgments; using GIS to process, analyse, and produce raster suitability maps that identify the most viable areas for locating new urban emergency facilities; prioritizing the proposed new fire and urban emergency facilities (from low to high) so that their construction can be planned in phases under cost and resource limitations; comparing the distinct opinions and preferences of two DM groups, comprising fire brigade employees and academic/professional experts, in the group decision-making (GDM) process; using GIS capabilities to conduct a sensitivity analysis (SA) to test the sensitivity and robustness of the constructed models based on the combination of criteria weights; and investigating the interdependencies and levels of interaction among the various criteria employed in the MCDA modelling process. The thesis thus comprises three papers addressing these ten sub-objectives. Istanbul province is the case study area, and six influencing criteria are identified, with their respective weights evaluated in each paper. In the first paper, a hybrid model of the recently developed BWM integrated with GIS is proposed. In this study, a GDM framework is suggested to support the incorporation of the divergent views of two DM groups, consisting of academicians and fire brigade practitioners, in the emergency facility planning decision problem. Meaningful inferences are drawn from statistical tests such as the one-sample t-test, one-way ANOVA, and Tukey's HSD test to analyse the preferences of the expert groups. Further, the degree of consensus or reliability in the DM process is assessed by a statistical measure, Kendall's coefficient of concordance, W. The study reveals that the density of hazardous materials (DHM) and high population density (HPD) are perceived to be the most important criteria by the academician and fire brigade practitioner DM groups, respectively, while the distance from earthquake risk (DER) is viewed as the least important by both. Resultant raster suitability maps are produced for both DM groups to visualize the BWM model. In the second paper, the combination of the AHP and Entropy methods with GIS is used to evaluate criteria weights both subjectively and objectively. The validation of the AHP-Entropy model is carried out on the criteria with the strongest influence on the decision outcome and spatially visualized using the One-At-a-Time (OAT) sensitivity analysis method. The study concludes that 28.1% of the case study area, or about a third of the total area, is likely to be exposed to the risk of urban fires, necessitating the urgent planning of new urban emergency facilities to ensure adequate fire service coverage and protection. 
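Since several of the papers rest on AHP-style pairwise comparisons, a compact NumPy sketch of how priority weights and a consistency ratio are derived from a pairwise comparison matrix may be helpful; the matrix values below are invented for illustration and are not the surveyed judgments.

```python
import numpy as np

# Pairwise comparison matrix for three illustrative criteria
# (reciprocal Saaty-scale judgments, made up for this example).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # AHP priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
print(w, ci / ri)                     # weights; CR below 0.1 is acceptable
```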
In the third paper, an integrated approach using fuzzy AHP based on a triangular membership function and GIS is implemented. Here, the resultant fuzzy AHP weights are obtained from surveys of 19 experts and validated using another MCDA technique, the BWM. The results identified the most significant criteria in urban fire station site selection as the density of hazardous material facilities (DHM), high population density (HPD), and proximity to main roads (PMR), with corresponding weights of 33.3%, 24.4%, and 15.2%, respectively. Through a thorough analysis of the results, a total of 34 new urban fire stations were proposed, in addition to the existing 121 fire stations, to address the increasing demand for fire protection services by reducing the response time to less than 5 minutes. In addition, a three-level prioritization analysis, from low to high, was performed on the 34 proposed fire stations so that their construction can be planned in phases according to cost and resource availability. Finally, the DEMATEL method is applied to examine the complex interrelationships and levels of influence among the criteria previously determined for optimally siting new urban fire and emergency service infrastructure in Istanbul, as well as to validate the model results of the BWM, AHP-Entropy, and fuzzy AHP techniques. Useful insights are generated by constructing an intelligible structural model, visualized as a digraph, involving the analysis of the causal relationships among criteria, their directional influences, and the corresponding degrees of strength. The findings reveal that high population density (HPD) is the most critical criterion, followed by the density of hazardous materials (DHM), in effectively planning new urban fire and emergency service facilities; these two criteria significantly influence all the other criteria, while the distance to earthquake risk (DER) criterion does not influence any other criteria and is consequently not essential in the planning procedure. The DEMATEL results validate the BWM, AHP-Entropy, and fuzzy AHP model results in terms of the levels of criteria significance and are shown to correlate highly with them. In this regard, the contextual relationships established in this research contribute toward an integrated fire risk mitigation policy formulation for planning new emergency facilities in urban environments through the engagement of decision-makers across various backgrounds and disciplines.
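A hedged sketch of the DEMATEL computation described above: a direct-influence matrix is normalized, the total-relation matrix is formed, and the row/column sums separate net causes from net effects. The matrix values are invented; only the procedure mirrors the standard DEMATEL steps.

```python
import numpy as np

# Direct-influence matrix among four illustrative criteria (made-up values,
# e.g. 0 = no influence ... 3 = strong influence).
X = np.array([[0, 3, 2, 1],
              [2, 0, 3, 1],
              [1, 2, 0, 2],
              [0, 1, 1, 0]], dtype=float)

D = X / max(X.sum(axis=1).max(), X.sum(axis=0).max())  # normalize
T = D @ np.linalg.inv(np.eye(len(X)) - D)              # total relations
r, c = T.sum(axis=1), T.sum(axis=0)
print("prominence r+c:", r + c)   # overall importance of each criterion
print("relation   r-c:", r - c)   # >0: net cause, <0: net effect
```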
-
ÖgeGlobal gravity field recovery from low-low satellite-to-satellite tracking with enhanced spatiotemporal resolution using deep learning paradigm(Graduate School, 2023-05-24) Uz, Metehan ; Akyılmaz, Orhan ; 501162610 ; Geomatic EngineeringUnderstanding the climate system and ensuring the survival of the planet require more attention to monitoring water resources and water-related natural disasters. Monitoring water storage is therefore crucial for the global climate and natural ecosystems. The Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GFO) missions have revealed new insights into mass transport within the Earth system. For the first 15 years, beginning in 2002, time series of terrestrial water storage (TWS) variations were recovered from the measurements of the GRACE mission; after a gap of 11 successive months, the GFO mission has been performing this task since May 2018. Hence, over the last 20 years, TWS variations from GRACE/GFO measurements have provided unique information on the Earth's water cycle for a wide range of hydrology, glaciology, and solid-earth applications, and numerous scientific investigations have been conducted in light of these data. Some of these efforts include estimating time-variable gravity field models with high accuracy from GRACE/GFO measurements using satellite gravimetry techniques and/or enhancing the temporal and spatial resolution of TWS anomalies (TWSA). In this thesis, two major efforts were investigated by applying the energy balance approach (EBA), a satellite gravimetry technique based on the principle of energy conservation. The preliminary aim is to estimate the spherical harmonic coefficients (SHC) of the Earth's time-variable gravity field models and to develop new hybrid deep learning (DL) algorithms, namely residual deep convolutional autoencoders (ResDCAE) and super-resolution residual deep convolutional autoencoders (SR-ResDCAE). The next objective is to enhance the temporal and spatial resolution of the TWSA maps derived from the SHCs. The SHCs are highly sensitive to the systematic errors and high-frequency noise in the range-rate observations of the GRACE/GFO K/Ka-Band Ranging (KBR) system as well as to the orbit configurations, since the geopotential differences (GPD) estimated with the EBA are directly related to the range-rate data through the applied KBR alignment approach. Under these circumstances, the estimated temporal models have accuracy comparable to the models of other institutions up to degree/order (d/o) 20, but are less accurate at higher degrees of the SHCs. To mitigate these error and noise sources, KBR empirical parameter estimation or a Bayesian filter (BF) is applied to the estimated GPDs. When the number of empirical parameters is increased (from one to three cycles per revolution (CPR)), the heavy North-South (N-S) striping is drastically reduced, particularly in months with poor orbit configuration; however, this results in a loss of strength in the long-wavelength component of the gravitational signal. On the other hand, applying both the forward filtering (FF) and backward smoothing (BS) steps of the BF to the GPD residuals reduces the high-frequency noise caused by the satellites' temperature changes without any signal loss in the SHCs estimated from the filtered and smoothed GPDs. However, this did not lead to any improvement in mitigating the correlations among high-degree SHCs. 
Since the estimated GPDs are also highly sensitive to the orbital configurations in representing mass variations, it is concluded that a regularization process is required in the gravity inversion step to eliminate the correlations in higher-degree SHCs and reduce the N-S stripes in an unconstrained solution without signal loss. In the second step, the TWSA calculated from the estimated SHCs are downscaled from monthly, 100 km resolution to daily, 25 km resolution using the in-house developed DL algorithms, ResDCAE and SR-ResDCAE, through step-by-step simulations from lower to higher resolution. Internally, the performance of each GRACE-like TWSA simulation is validated using mathematical metrics such as the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE), as well as by comparisons with previous studies. In addition to the internal validation, the simulated TWSAs are validated externally by assessing how well they fill the gap between the GRACE and GFO missions and by comparison with non-GRACE datasets, such as the El Niño/La Niña sea surface temperature index and global mean sea level (GMSL) changes. Furthermore, the capability of the daily simulations to detect long- and short-term variations in the TWSA signal caused by natural disasters, such as the 2011 and 2019 Missouri River floods, Hurricane Harvey, and the 2012–2017 drought in California, is investigated for the Contiguous United States (CONUS) region. The droughts experienced in Türkiye during the GRACE/GFO period, in 2007–2008 and 2013–2014, are also evaluated using the daily simulations, considering the Fırat-Dicle Basin (FDB) and the Konya Closed Basin (KCB) separately. Both the filling of the TWSA data gaps and the simulation of daily time series with the ResDCAE algorithm were accomplished successfully. Nevertheless, the spatial downscaling step of the SR-ResDCAE algorithm requires additional physical investigation regarding the establishment of spatio-temporal correlations during training. In addition, leakage bias emerged as a result of the post-processing filters used to eliminate the errors in the time-variable gravity field models obtained unconstrained by the EBA method; these filters diminish the true signal magnitude of the TWSA. Consequently, the temporal and spatial patterns of the simulated TWSA time series are comparable to those of the other simulations and models compared, but the loss of signal power is readily apparent.
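The internal validation metrics named above are simple to state precisely; a generic NumPy sketch (not thesis code) is:

```python
import numpy as np

def rmse_nse(sim: np.ndarray, obs: np.ndarray) -> tuple[float, float]:
    """Root mean squared error and Nash-Sutcliffe efficiency of a simulated
    TWSA series against a reference series (NSE = 1 is a perfect match)."""
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return float(rmse), float(nse)
```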
-
ÖgeHigh-resolution gravimetric geoid modeling in the era of satellite and airborne gravimetry(Graduate School, 2022-10-06) Işık, Mustafa Serkan ; Erol, Bihter ; 501162607 ; Geomatics EngineeringWith the advances in positioning and inertial navigation systems, the accuracy obtained from the airborne gravimetry technique has reached levels that significantly aid high-resolution gravity field modeling. The data obtained from airborne gravimetry are of great importance in complementing the deficiencies of terrestrial data in mountainous areas and in the land-sea transitions of coastal areas, where geoid modeling is most troublesome. In this thesis, high-resolution regional gravimetric geoid modeling was investigated in light of recent advancements in the field of gravimetry, more specifically satellite and airborne gravimetry. With recently developed GOCE-based global geopotential models and advanced stochastic techniques for modeling the regional gravity field as a solution of the geodetic boundary value problem, it is possible to achieve a high-resolution geoid model that can alter the traditional realization of vertical reference systems. In this regard, four studies were carried out in two test regions: Colorado (USA) and Turkey. The first study focused on the contribution of airborne gravity measurements to gravimetric geoid modeling in an area of high topography (Colorado, USA) via the least-squares modification of Stokes (LSMSA) and Hotine (LSMHA) integrals with additive corrections. The study included filtering the high-frequency airborne gravimeter data with minimal signal loss and downward-continuing the data to the Earth's surface by the Least Squares Collocation method with a planar logarithmic covariance model. The reduced data were optimally combined with satellite data from a global geopotential model and terrestrial gravity data to calculate a high-accuracy gravimetric geoid model. In this combination, the error variance of each dataset was taken into account to stochastically determine the variance of the input gravity anomaly/disturbance dataset for the Stokes and Hotine integrals. To clarify the importance of the airborne gravity data, three gravity datasets were created: terrestrial-only, airborne-only, and combined. The computed gravimetric geoid models were tested against highly accurate GPS/leveling benchmarks collected for model validation along a profile passing through the rough topography of the Colorado mountains; the results clearly indicated the contribution of airborne gravity data over mountainous regions. In conclusion, two gravimetric geoid models were obtained from the combined dataset via the LSMSA and LSMHA methodologies, with absolute accuracies of 2.69 cm and 2.87 cm, respectively. The rest of the thesis focuses on improving the accuracy of the gravimetric geoid model of Turkey. The first study concerning the geoid model of Turkey dealt with downscaling a low-resolution gravity anomaly dataset, originally of ~9 km resolution, to a spatial resolution of ~2 km. This task was achieved through proper modeling of the topographic attraction on gravity, using planar and spherical approaches for the Bouguer gravity anomalies. While the planar approach computed complete Bouguer gravity anomalies using the classical terrain correction based on the mass-prism technique, the spherical approach used a global model of the topographic attraction, SRTM2Gravity. 
Based on these two approaches, the low-resolution complete Bouguer anomalies were enriched into a higher-resolution data set, and surface gravity anomalies were then calculated from the planar and spherical complete Bouguer anomalies. Three gravimetric geoid models were calculated via the LSMSA technique: a low-resolution reference geoid with the planar approach, and two high-resolution geoids via the planar and spherical approaches. Based on the accuracy assessment at 100 homogeneously distributed GPS/leveling benchmarks, the accuracy of the best-performing geoid, obtained with the spherical approximation, was found to be 8.6 cm. The gravimetric geoid models using the downscaled surface gravity anomalies performed significantly better than the low-resolution solution, with the spherical approach slightly better than the planar one. Hence, the success of the downscaling was proven in terms of the accuracy achieved by the high-resolution gravimetric geoid models.
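A minimal sketch of the GPS/leveling validation used throughout these studies: the geometric geoid height at each benchmark is N = h - H, and the absolute accuracy is, in the simplest case, the dispersion of the differences from the gravimetric geoid after a mean offset is removed. The benchmark values below are invented for illustration.

```python
import numpy as np

def geoid_accuracy(h_ellipsoidal, H_leveled, N_gravimetric):
    """Validate a gravimetric geoid at GPS/leveling benchmarks.

    The geometric geoid height is N_GPS = h - H; the accuracy is reported
    here as the standard deviation of the differences after removing their
    mean (a simple one-parameter datum fit).
    """
    n_gps = np.asarray(h_ellipsoidal) - np.asarray(H_leveled)
    diff = n_gps - np.asarray(N_gravimetric)
    return diff.mean(), (diff - diff.mean()).std(ddof=1)

# Hypothetical benchmarks (metres)
h = np.array([1823.41, 2204.96, 2510.33])   # ellipsoidal heights from GPS
H = np.array([1801.12, 2182.30, 2487.55])   # leveled orthometric heights
N = np.array([22.26, 22.63, 22.81])         # gravimetric geoid heights
bias, sigma = geoid_accuracy(h, H, N)
print(f"mean offset = {bias:.3f} m, accuracy = {sigma:.3f} m")
```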
-
ÖgeImproving the performance of remote sensing-based water budget components across mid- and small-scale basins(Graduate School, 2022-07-19) Kayan, Gökhan ; Erten, Esra ; Türker, Umut ; 501152601 ; Geomatics EngineeringIn the last few decades, many global basins have been threatened by rapid urban growth and global warming, resulting in changes in their climate regime. Climate change has increased the incidence of extreme weather events, uncertain water availability, water scarcity, and water pollution. Remote sensing (RS) has emerged as a powerful technique that provides estimations with high spatiotemporal resolution and broad spatial coverage. In recent years, the efficacy of RS products for water budget (WB) analysis has been widely tested and implemented in global and regional basins. Although RS products provide high temporal and spatial resolution images with near-global coverage, uncertainty is still a significant problem. The main goal of this study is to utilize two different approaches to minimize the uncertainty of the products and to improve RS-based WB estimations in mid- and small-scale basins. The first approach aims to improve the efficacy of WB estimations from various hydrological data products in the Sakarya basin by: (1) evaluating the uncertainties of the hydrological data products, (2) merging four precipitation (P) and six evapotranspiration (ET) products using their error variances, and (3) employing the Constrained Kalman Filter (CKF) method to distribute the residual errors (r) among WB components based on their relative uncertainties. The results showed that applying bias correction before the merging process improved the P estimates by decreasing the root mean square error (RMSE), except for PERSIANN. VIC and bias-corrected CMORPH outperformed the other ET and bias-corrected P products, respectively, in terms of mean merging weights. The terrestrial water storage change (ΔS) is the primary source of the non-closure errors, mainly for two reasons. First, the Sakarya basin is a relatively small basin that GRACE simply cannot resolve. Second, while P, ET, and Q mostly describe surface water dynamics, ΔS includes both surface water and groundwater, and the two have completely different dynamic behaviors: surface water changes much faster than groundwater. The CKF results were insensitive to variations in the uncertainty of runoff (Q). P derived from the CKF was the best output, with the highest correlation coefficient (CC) and the smallest root mean square deviation (RMSD). In the second approach, the annual r in the WB equation arising from the uncertainties of the RS products was minimized by applying fuzzy correction coefficients to each WB component. For the analysis, three different fuzzy linear regression (FLR) models with fourteen sub-models were used in two basins with different spatial characteristics, namely the Sakarya and Cyprus basins. The sub-models performed better in the Sakarya basin than in the Cyprus basin, which has a higher leakage error across the ocean/land boundary; moreover, the Cyprus basin is too small for some low-resolution RS-based products to resolve. The Zeng and Hojati sub-models outperformed the Tanaka sub-models in the Sakarya basin, whereas the Zeng Case-I, Zeng Case-II, and Hojati (degree of fitting index h = 0.9) sub-models showed the best performance in the Cyprus basin.
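Steps (2) and (3) of the first approach can be illustrated schematically. The sketch below merges hypothetical P products with weights inversely proportional to assumed error variances and forms the monthly non-closure residual r = P - ET - Q - ΔS that the CKF subsequently redistributes; the CKF itself is not implemented here, and all numbers are invented.

```python
import numpy as np

def merge_products(estimates, error_variances):
    """Merge parallel products of one WB component with weights
    inversely proportional to their error variances."""
    w = 1.0 / np.asarray(error_variances)
    w /= w.sum()
    return w, np.average(np.asarray(estimates), axis=0, weights=w)

# Hypothetical monthly precipitation (mm) from four products over one basin
P = np.array([[82.0, 61.5, 40.2],
              [75.3, 58.9, 44.8],
              [90.1, 66.0, 38.7],
              [78.8, 60.2, 42.5]])
var_P = np.array([9.0, 4.0, 16.0, 6.0])      # assumed error variances
weights, P_merged = merge_products(P, var_P)
print("weights:", weights)

# Hypothetical merged ET, gauged runoff Q, and storage change dS (mm)
ET_merged = np.array([35.0, 30.2, 24.9])
Q = np.array([18.4, 15.1, 10.3])
dS = np.array([20.0, 10.0, 2.0])
r = P_merged - ET_merged - Q - dS            # non-closure residual for the CKF
print("residual r:", r)
```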
The best fuzzy sub-models reduced the error by up to 68% and 52% in terms of mean absolute error compared to the non-fuzzy model in the Sakarya and Cyprus basins, respectively. Further evaluations showed that P from the best sub-models captured the temporal patterns of the gauge observations well in both basins and had the best consistency with them in terms of RMSE, Kling-Gupta efficiency (KGE), and percent bias (PBIAS). The results proved that the second approach can provide valuable insights into WB analysis in ungauged basins by incorporating the fuzzy logic approach into hydrological RS products. In general, the FLR- and CKF-derived P, ET, and Q showed similar seasonal variation, with peak and trough values appearing in nearly the same years. In terms of CC, RMSE, and bias, the fuzzy outputs show the closest agreement with the CKF outputs for Q, slightly less agreement for P and ET, and much less agreement for ΔS. It can be concluded that the majority of the errors in the second approach are caused by the fuzzy ΔS.
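For intuition only, a crisp (non-fuzzy) analogue of the second approach is sketched below: multiplicative correction coefficients are pulled toward one while shrinking the annual closure residual in a least-squares sense. The thesis instead estimates fuzzy coefficients with the Tanaka, Zeng, and Hojati FLR formulations; the numbers and the regularization weight here are invented.

```python
import numpy as np

# Annual WB components (mm/yr) for three hypothetical years,
# columns already signed so a perfect closure means each row sums to zero:
# [P, -ET, -Q, -dS]
A = np.array([[620.0, -410.0, -150.0, -30.0],
              [580.0, -395.0, -131.0, -22.0],
              [655.0, -428.0, -162.0, -41.0]])

# Crisp analogue of the fuzzy-coefficient idea: minimize ||A c||^2 plus a
# ridge penalty lam*||c - 1||^2 that keeps the coefficients near one.
# The normal equations give (A^T A + lam I) c = lam * 1.
lam = 1e3
c = np.linalg.solve(A.T @ A + lam * np.eye(4), lam * np.ones(4))
print("correction coefficients:", c)
print("residual before:", A @ np.ones(4))
print("residual after: ", A @ c)
```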
-
ÖgeInSAR ve makine öğrenmesi yöntemleri kullanılarak yüzey hareketlerinin zaman serileri ile modellenmesi: İstanbul Havalimanı örneği(Lisansüstü Eğitim Enstitüsü, 2023-05-30) Yağmur, Nur ; Musaoğlu, Nebiye ; Şafak, Erdal ; 501182615 ; Geomatics EngineeringAir transportation constitutes a significant part of transportation traffic. Airports located in megacities and serving both national and international transport hold a particularly important position. Structural health is of critical importance at these airports, which enable millions of passengers to travel every year: besides the cost of structural damage, the life safety of passengers in the event of an accident must also be considered. Therefore, damage that may occur on runways and structures should be monitored continuously, and structural rehabilitation should be carried out when needed. Many terrestrial surveying methods are available for structural health monitoring; GNSS, leveling, and inclinometers are frequently used examples. However, even though these methods provide precise measurements, they are point-based, and extracting areal information from them is quite difficult; they also demand a serious workload as well as cost and time. Remote sensing fills an important gap here, providing satellite imagery at regular intervals over wide coverage areas. Since free imagery from many satellites with different sensors has become available in recent years, areal information can be extracted at various levels of detail depending on the spatial resolution of the imagery. Synthetic Aperture Radar (SAR) satellite images are frequently used for the areal determination of surface movements occurring on structures or land surfaces. With the European Space Agency serving Sentinel-1 SAR imagery free of charge, interferometric SAR (InSAR) analyses have come into widespread use for structural health monitoring. Time series InSAR methods have been developed to determine and monitor the behavior of surface movements over time; Persistent Scatterer Interferometry (PSInSAR) and Small Baseline Subset InSAR (SBAS) are among the most widely used. Because suitable land is scarce, airports are often built on sea-fill areas or on land obtained by reclaiming wetlands, which has led to surface movements in recent years and to studies monitoring these movements with time series InSAR methods; similar examples of such airports have begun to appear in Türkiye as well. Istanbul Airport occupies an important place owing to its geological setting and its role in transportation traffic since it opened. The airport was built on wetlands that had formed over time as abandoned open sand and coal pits filled with water, and on rehabilitated forested areas; the wetlands were reclaimed and filled with fill material to make the site suitable for airport construction. Known to have received a large amount of fill, the airport is susceptible to surface movements due to ground settlement and heavy external loads. For this reason, Istanbul Airport was selected as the study area. Using Landsat optical satellite images, the study area was classified at five-year intervals between 1984 and 2020.
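The five-year Landsat classifications are assessed with a standard confusion-matrix computation; a minimal sketch is given below, with an invented matrix and an assumed class set rather than the thesis's actual validation samples.

```python
import numpy as np

def thematic_accuracy(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                    # overall accuracy
    pe = (cm.sum(0) @ cm.sum(1)) / n ** 2    # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical matrix for four classes: wetland, vegetation, open land, built-up
cm = [[48, 2, 0, 0],
      [3, 45, 2, 0],
      [0, 4, 44, 2],
      [0, 0, 3, 47]]
oa, kappa = thematic_accuracy(cm)
print(f"overall accuracy = {oa:.2%}, kappa = {kappa:.2f}")
```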
The thematic accuracy assessment performed after classification showed that the classifications achieved high accuracy values. The classification revealed that the wetlands increased tenfold from 1984 to 2010, but more than 50% of them were destroyed after the airport construction; vegetation decreased by approximately 24%, while the open land class increased by 7%. To determine the surface movements occurring at the airport, the SBAS method was applied with the LiCSBAS package, and PSInSAR methods were applied with the SNAP and StaMPS software. The analyses, covering the period from November 2018, when the airport opened, to September 2022, used free Sentinel-1 SAR images. The results obtained in the ascending and descending geometries were found to be very close to each other. Besides supporting one another, the SBAS and PSI results were also complementary because of their different spatial resolutions. According to the results, subsidence-type deformation was detected over the 88.6 ha wetland located between runways B and C and the terminal building, which was reclaimed for the airport construction. Similar subsidence-type movement is observed on the runways near this region (runways B and C) and in the northern parts of the terminal building. While subsidence-type negative movement is also seen in the northern parts of runway A, uplift-type positive movement is observed over the terminal building, in its southern parts, and in the southern parts of runway B. Since runway C was constructed and opened during the November 2018-September 2022 period, persistent scatterer points could not be detected there with the PSI method; the PSI analyses were therefore repeated for the period July 2020-September 2022, after runway C opened, and subsidence-type negative movement was detected over the small reclaimed wetland in the area where runway C was built. Using the results obtained in both ascending and descending orbits, the surface movements measured in the satellite line-of-sight direction were decomposed into their vertical and horizontal components, as sketched below. The similarity of the results from the two orbits indicates that the movement is predominantly vertical, and the derived vertical component confirms this; no meaningful results could be obtained from the horizontal component. The time series taken over the terminal building was decomposed into its components, and the trend component was correlated with temperature data obtained from the Turkish State Meteorological Service (Meteoroloji Genel Müdürlüğü, MGM), yielding a high correlation; this showed that the roof material of the terminal building is prone to thermal expansion. Cut and fill areas of the airport construction were determined using digital elevation models (DEMs). The DEM representing the pre-construction topography was provided by SRTM data, while the post-construction topography was generated from stereo Pleiades images at 2 m spatial resolution and resampled to 30 m, the spatial resolution of the SRTM data. Differencing the two data sets showed that an average of 60 m of fill was placed over the large 88.6 ha wetland, and the derived cut and fill areas were found to coincide with the deformation results. After the surface movements were determined, prediction analyses were carried out with the time series.
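Before turning to the prediction analyses, the line-of-sight (LOS) decomposition step above reduces, per pixel, to a 2x2 linear system once the north-south component is neglected (it is poorly observable from Sentinel-1's near-polar orbit). The sketch below uses invented velocities and incidence/heading angles, and one common sign convention for the LOS projection, which should be adapted to the convention of the processing software in use.

```python
import numpy as np

def decompose_los(d_asc, d_dsc, inc_asc, inc_dsc, head_asc, head_dsc):
    """Decompose ascending/descending LOS displacements into vertical (U)
    and east-west (E) components, assuming the N-S component is negligible.

    Angles in degrees; one common convention is assumed:
    LOS = U*cos(inc) - E*sin(inc)*cos(heading).
    """
    def los_row(inc, head):
        inc, head = np.radians(inc), np.radians(head)
        return np.array([np.cos(inc), -np.sin(inc) * np.cos(head)])
    A = np.vstack([los_row(inc_asc, head_asc),
                   los_row(inc_dsc, head_dsc)])
    d_u, d_e = np.linalg.solve(A, np.array([d_asc, d_dsc]))
    return d_u, d_e

# Hypothetical LOS velocities (mm/yr) at one pixel
print(decompose_los(d_asc=-6.2, d_dsc=-5.8,
                    inc_asc=39.0, inc_dsc=41.0,
                    head_asc=-10.0, head_dsc=190.0))
```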
For the prediction analyses, six pilot regions were selected according to the structure type and the characteristics of the time series. The prediction analyses were performed with conventional methods, regression-based methods, and deep learning methods. The conventional methods gave successful results by providing a linear approximation. Among the regression-based methods, the XGBoost Regression (XGBR) algorithm performed successfully in all regions, while among the deep learning methods the integrated LSTM (Long Short-Term Memory) method gave successful results. The time series were decomposed into their components (trend, seasonal effect, residual) and included in the prediction analyses together with ERA5-Land meteorological parameters (air temperature, soil temperature, precipitation, and evaporation), and the contribution of these features was examined with the XGBR and integrated LSTM algorithms. The analyses showed that adding the features increased prediction accuracy. The importance of the seven added features was determined for each region with both methods: for XGBR, with SHAP (SHapley Additive exPlanations), permutation feature importance, and the importance scores the algorithm produces while building its tree structure; for the integrated LSTM, with permutation feature importance. The importance rankings obtained with XGBR were similar across all of these techniques, and the trend and residual parameters were among the most important features; however, it was found that these parameters cause overfitting when not handled carefully. The evaporation parameter stood out in the two time series taken over the runway, while the air temperature parameter stood out in the time series taken over the building. In the importance rankings determined with permutation feature importance for the integrated LSTM, the trend component is among the important features across most of the time series; the precipitation parameter stands out in the time series taken over the runway, while the evaporation parameter stands out in the subsidence and uplift time series taken over the building, and soil temperature stands out for the terminal building specifically. Based on these results, it can be said that the prediction outcomes vary according to the structure type, the structure material, and the method used. While overfitting can be prevented with the dropout layer in the LSTM architecture, the lack of such a mechanism in the regression-based algorithms can lead to misleading results. Future prediction of the time series was performed using the conventional ARIMA and FFT methods together with the XGBR and integrated LSTM methods, and the results showed that FFT and the integrated LSTM offer a similar approach. This thesis comprises comprehensive analyses performed on Istanbul Airport and is expected to contribute to structural health monitoring studies of large and critical transportation infrastructures such as Istanbul Airport.
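A compact sketch of the decomposition-plus-XGBR setup described above is given below, with a synthetic series standing in for the InSAR time series and a one-step-ahead target so that the decomposed components of the current epoch do not leak into the prediction, the overfitting pitfall noted above. In a real run, the ERA5-Land parameters would be appended as additional feature columns; all names and numbers here are illustrative, and the xgboost package is assumed to be installed.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from xgboost import XGBRegressor  # XGBR, as in the thesis

# Synthetic 6-day displacement series (mm): settlement trend + annual thermal cycle
t = pd.date_range("2018-11-01", periods=240, freq="6D")
rng = np.random.default_rng(0)
y = (-0.02 * np.arange(240)
     + 1.5 * np.sin(2 * np.pi * np.arange(240) * 6 / 365.25)
     + rng.normal(0.0, 0.3, 240))
series = pd.Series(y, index=t)

# Decompose into trend/seasonal/residual (period ~ one year at 6-day sampling)
dec = seasonal_decompose(series, period=61, extrapolate_trend="freq")

# One-step-ahead design: features at epoch k-1 predict displacement at epoch k
X = pd.DataFrame({"trend": dec.trend, "seasonal": dec.seasonal,
                  "resid": dec.resid, "disp": series}).shift(1).dropna()
target = series.loc[X.index]

split = int(len(X) * 0.8)   # chronological train/test split
model = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X.iloc[:split], target.iloc[:split])
pred = model.predict(X.iloc[split:])
print("test RMSE (mm):",
      float(np.sqrt(np.mean((pred - target.iloc[split:]) ** 2))))
```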