Browsing by Sustainable Development Goal "Goal 9: Industry, Innovation and Infrastructure"
-
Item: Proceedings Book of the 10th International Fiber and Polymer Research Symposium (10th ULPAS): 13-14 May 2022, Istanbul, Turkey: Smart and technical textiles / editors, Prof. Dr. Yusuf Ulcay, Prof. Dr. Ali Demir, Assoc. Prof. Dr. Ali Kılıç, Dr. Gülçin Baysal, Merve Nur Sağırlı (İstanbul Teknik Üniversitesi, 2022) ; Textile Engineering ; Ulcay, Yusuf ; Demir, Ali ; Kılıç, Ali ; Baysal, Gülçin ; Sağırlı, Merve Nur. The main mission of the symposium is to prepare professionals and researchers in the field of fibers and polymers to contribute positively and continuously to their organizations and communities. The International Fiber and Polymer Research Symposium ULPAS (IF&PRC) aims to provide qualified, project-based symposium programs in a supportive environment, so as to prepare well-educated, independent researchers ready for conditions of continuous growth. It also aims to establish cooperation and coordination among the members of the Fiber and Polymer Research Institute.
-
Item: The revival (ihya) of historic mosques in 19th-century Istanbul: an assessment and study through examples and archival documents (Lisansüstü Eğitim Enstitüsü, 2022-02-25) Çiçek Ünal, Özlem ; Mazlum, Deniz ; 502082207 ; Restorasyon ; Restoration. In 19th-century Istanbul, a large number of historic mosques and masjids became unusable due to fires, the 1894 earthquake, neglect, and urban development activities, and were rebuilt (revived). The empire's developing relations with the West, changing architectural tastes, financial difficulties, and the changes and transformations Istanbul was undergoing affected the scale and quality of these reconstruction activities. Istanbul's physical growth, in proportion to its increasing population, brought urban development with it; arrangements such as new transport networks, quays, and squares gained pace. Successive fires, besides causing many losses, offered an opportunity, through the new regulations introduced afterwards, to attain a Western urban appearance. The simultaneous damage to many buildings after the fires and the 1894 earthquake made it difficult to find resources for the necessary repairs and construction, and in some cases the interventions needed to keep buildings standing were delayed. Registers held in the Ottoman archives, prepared after fires to record damaged buildings and the financial condition of the foundations (waqfs) to which they belonged, reveal these problems. Historic mosques and masjids, as waqf buildings, were deprived of the allocations required for their regular maintenance and repair as a result of deterioration and abuses in waqf administration; besides sudden damage such as fire and earthquake, they sometimes became unable to meet even the repair costs of damage that had developed over time. Although administrative steps were taken to gather waqf administrations and budgets under a single roof to prevent this, overcoming the financial problems was not easy.
As a result, the city's historic mosques and masjids could not be kept standing, owing both to the problems of their waqfs and to the disasters and changes in the city around them, and were revived. The research carried out within the thesis, which excludes the sultanic (selâtin) mosques, has shown that 153 mosques and masjids in Istanbul were partially or wholly rebuilt for various reasons between 1780 and 1920. In these revivals, the buildings' waqf identities, not their historic identities, came first. In general, the aim of a revival was to obtain a sound building capable of fulfilling the endowed function for a long time. The concepts of monument and conservation that developed in the West in the 19th century began to be discussed in the Ottoman Empire belatedly, at the end of that century and the beginning of the 20th. The initial interest in ancient works, formed under Western influence, shifted over time to works of later periods. Repairs of old monuments, most of them waqf buildings, were also carried out in the 19th century because of disasters; in line with the general practice of the period, foreign architects or architects trained abroad were predominantly commissioned for important works. Legal regulations sought to ensure that repairs were carried out by experts and under supervision. Ottoman archival documents provide information on the reasons for the revivals, the taking of revival decisions, the institutions and persons who had buildings revived, the determination and financing of revival costs, the process followed in revival works, and the new architectural styles used in them. These matters could vary according to each building's own circumstances and state of damage. Sometimes a building was kept partially standing and revived with its usable existing material; sometimes, because the building to be revived had disappeared completely, an entirely new structure was built.
Examples illustrating such variable situations in the revival of buildings are presented in detail in the thesis. The word "ihya", meaning "revitalization" and "resurrection", corresponds in conservation science to the act of reconstruction. The aim of the revivals carried out in the 19th century was to keep alive the endowed function and the founder's name rather than the building itself. For this reason, even if a building changed completely, its name and function did not. Since mosques and masjids were rebuilt on their own plots, their locations remained fixed, and they have thus come down to us as unchanging points in the city's history. Although style, materials, technique, and additional functions could change according to the needs and tendencies of the time, the building's name, function, and location were preserved, and the waqf service was revived and continued. Reconstruction has been a practice for which limits and rules have been sought ever since it began to be discussed in the conservation field. This practice, which often exceeds its purpose, is defined and circumscribed as one that may be resorted to for the continuity and healing of social memory, especially in situations such as war and disaster that cause sudden losses of monuments. Today, reconstruction, which has almost become a problem for conservation, can be applied, beyond reasons such as social healing and cultural continuity, with motivations such as the economic gain brought by the monument-tourism relationship, the opportunity to build in historic settlements subject to construction restrictions, and the political interests served by symbolic buildings; it is even regarded as an intervention capable of compensating for the loss of cultural assets. It is hoped that this thesis will contribute to understanding the true place in our conservation history of "ihya", today a fashionable term and practice, and will shed light on possible interventions to the examples it studies.
-
Item: 2-step indoor localization for "smart AGVs" (Graduate School, 2022-06-17) Yılmaz, Abdurrahman ; Temeltaş, Hakan ; 504142101 ; Control and Automation Engineering. With the fourth industrial revolution, in other words Industry 4.0 (I4.0), the transition from traditional mass production to personalized production started in factories. One of the components of the next-generation factories compatible with I4.0 is cyber-physical systems (CPSs). Smart manufacturing islands, smart warehouses, and smart material-handling vehicles are examples of CPSs. The material-handling vehicles employed in today's factories, such as automated guided vehicles (AGVs), are not ready for use in smart factories, as the digital transformation has not been completed and the vehicles are not equipped with software for fully autonomous operation. In smart factories, next-generation AGVs are expected to do all the planning themselves while performing a given task. Thus, smart AGVs will be able to use the whole free space in the factory instead of being restricted to routes reserved for them. With this development, it will be possible to increase flexibility and efficiency in production. There may be no physical difference between traditional and smart AGVs, but thanks to the capabilities of the embedded software, smart AGVs will be able to operate autonomously. One challenging problem to be overcome for smart AGVs to effectively carry out an assigned logistics task is localization. Although localization is an extensively studied topic for both indoor and outdoor environments, there are still open problems. Considering the logistics problem, the localization problem can be divided into three in the general sense. The first is global localization, which means determining where the smart AGV is in the environment at the moment the vehicle wakes up.
The second problem is position tracking, which means updating the pose information based on the movements of the robot while the instantaneous pose of the robot is known. The third and last problem is the kidnapped robot problem, which occurs when the robot is moved from one place to another without being informed. Cases that reduce the reliability of the calculated pose, such as instantaneous skidding, slipping, or crashing into an object, can also be addressed under this problem. The localization approach to be used in smart factories is supposed to overcome all three sub-problems. There are two main tasks in a logistics operation. The first is the docking stage, which covers taking a load onto the smart AGV or dropping the smart AGV's load. At this stage, the aim is to reach the target (destination) where the load will be picked up or left to industrial standards. With I4.0, reaching the target with sub-centimeter precision has become a goal. Therefore, the docking localization algorithm is expected to estimate the pose with high accuracy and precision. The second is the delivery stage, which covers carrying the load to the destination in the fastest and safest way in the parts outside the docking region. It is not essential to follow the planned route exactly in this stage, so rather than high accuracy, showing similar positioning performance across the entire operating field is more important. Within the scope of this thesis, different localization algorithms are proposed for the delivery and docking stages. In addition, a probabilistic decision mechanism that determines the boundary between the delivery and docking stages is designed. A variant of the particle filter-based Monte Carlo Localization (MCL) approach, Self-Adaptive MCL (SA-MCL), is taken as the base localization method for the delivery stage.
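The particle-filter idea behind MCL, which SA-MCL extends, can be sketched minimally as a predict-weight-resample loop. The landmark position, noise levels, and range-only measurement model below are illustrative assumptions for the sketch, not details taken from the thesis:

```python
import math
import random

LANDMARK = (5.0, 5.0)  # hypothetical beacon used by the measurement model

def predict(particles, dist, dtheta, noise=0.05):
    """Motion update: propagate each (x, y, theta, weight) particle by
    odometry (forward distance, heading change) plus Gaussian noise."""
    out = []
    for x, y, th, w in particles:
        th = th + dtheta + random.gauss(0.0, noise)
        d = dist + random.gauss(0.0, noise)
        out.append((x + d * math.cos(th), y + d * math.sin(th), th, w))
    return out

def reweight(particles, measured_range, sigma=0.3):
    """Measurement update: weight each particle by the Gaussian likelihood
    of the observed range to the landmark, then normalize the weights."""
    out = []
    for x, y, th, _ in particles:
        expected = math.hypot(LANDMARK[0] - x, LANDMARK[1] - y)
        w = math.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
        out.append((x, y, th, w))
    total = sum(p[3] for p in out) or 1.0
    return [(x, y, th, w / total) for x, y, th, w in out]

def resample(particles):
    """Draw a new equal-weight particle set proportional to the weights."""
    n = len(particles)
    picks = random.choices(particles, weights=[p[3] for p in particles], k=n)
    return [(x, y, th, 1.0 / n) for x, y, th, _ in picks]
```

Repeating these three steps concentrates the particles around poses consistent with the measurements, which is how MCL solves global localization and position tracking in one framework.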
The main reason for choosing SA-MCL is that it can solve all the aforementioned sub-problems of localization. While performing the global localization task, traditional SA-MCL uses energy maps and assumes in energy-map generation that all range sensors are uniformly placed on the robot. However, this assumption is not valid for many real applications, such as AGVs with two-dimensional (2D) laser scanners at the front and rear. Moreover, three-dimensional (3D) sensing technology is developing day by day with the widespread use of autonomous vehicle technology. With the ellipse-based energy model proposed in this thesis, the energy map-generating part of traditional SA-MCL has been updated to overcome both of these constraints. The pose estimation accuracy of the SA-MCL approach is more or less the same across the entire environment, making it suitable for the delivery stage. However, since the pose estimation accuracy is proportional to the grid dimensions of the occupancy map, it may not be possible to reach the expected sub-centimeter precision within the docking region in large areas such as factories. Therefore, it was decided to use a scan matching-based precise localization algorithm in the docking region, and for this purpose the affine iterative closest point (ICP) algorithm was adapted to the localization problem. To make the developed method robust against factors such as noise, disturbances, and/or outliers, the correntropy criterion was utilized while constructing the cost function of affine ICP. As a result, an updated SA-MCL method with an ellipse-based energy model is proposed for the solution of the global localization, position tracking, and kidnapped robot problems in the delivery stage. On the other hand, an affine ICP-based precise localization approach is presented for position tracking in the docking stage. However, the boundary between the delivery stage and the docking stage may not be clear.
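The robustness of the correntropy criterion mentioned above can be illustrated with a small similarity measure over matched point pairs; the kernel width and the 2-D points below are illustrative choices, not the thesis's implementation:

```python
import math

def correntropy_similarity(pairs, sigma=1.0):
    """Average Gaussian-kernel similarity between matched 2-D point pairs.
    A residual much larger than sigma contributes almost nothing, so a few
    gross outliers barely move the score, unlike a squared-error cost
    where a single outlier can dominate."""
    if not pairs:
        return 0.0
    total = 0.0
    for (px, py), (qx, qy) in pairs:
        r2 = (px - qx) ** 2 + (py - qy) ** 2
        total += math.exp(-r2 / (2.0 * sigma * sigma))
    return total / len(pairs)
```

Maximizing such a kernel-based score instead of minimizing a squared-error sum is what makes a correntropy-based ICP cost tolerant of outliers and sensor noise.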
For example, limiting the docking stage to a zone very close to the target may require extra maneuvers to tolerate positioning errors made during the delivery stage, due to the physical constraints of smart AGVs. If a larger area is specified as the docking stage, expectations may not be met, since the performance of the precise localization approach may decrease farther away from the target. For this reason, there is a need for a switching mechanism that can be adapted to the specific application and decides whether to switch from the delivery stage to the docking stage. Since the pose estimation performance of the SA-MCL-based localization approach is roughly similar across the entire map, the deciding factor in the transition to the docking stage is the performance of the precise localization method used in the docking stage. In the literature, it is emphasized that the amount of overlap between matched point sets should be above 50% for scan matching-based methods to yield successful results. Within the scope of the thesis, a correntropy-based similarity rate definition, which gives better results than the overlap ratio calculation methods in the literature, is presented and utilized as the decision parameter of the switching approach. To avoid instabilities, a gap is left between the two switching thresholds, following hysteresis-curve behavior, when switching from the delivery stage to the docking stage or vice versa. Within the scope of the thesis, the two-stage localization method developed for the next-generation AGVs to be used in smart factories has been experimentally tested on a differential drive mobile robot. First, the ellipse-based energy model added to the SA-MCL method has been verified by field tests, and its superiority in global localization has been demonstrated.
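The hysteresis-style stage switching described above can be sketched as a two-threshold state machine; the threshold values below are illustrative, not the values tuned in the thesis:

```python
class StageSwitch:
    """Delivery/docking switch with hysteresis: enter docking only when the
    similarity rate rises to `high`, fall back to delivery only when it
    drops to `low` (low < high). The gap between the two thresholds keeps
    measurement noise near a single threshold from causing rapid toggling."""

    def __init__(self, low=0.45, high=0.60):
        assert low < high
        self.low, self.high = low, high
        self.docking = False  # start in the delivery stage

    def update(self, similarity):
        """Feed in the latest similarity rate; returns True while docking."""
        if not self.docking and similarity >= self.high:
            self.docking = True
        elif self.docking and similarity <= self.low:
            self.docking = False
        return self.docking
```

A value oscillating between the two thresholds leaves the current stage unchanged, which is exactly the stability the text asks of the switching mechanism.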
Then, the affine ICP-based localization method used in the docking stage has been tested over nine separate real-world scenarios, and it has been shown that it is possible to compute the pose with sub-centimeter precision and reach the target to industrial standards. In addition, an affine ICP method not previously available in the literature was proposed, and its point set matching performance was demonstrated on synthetic point sets. After its performance in point set registration was validated, it was also used for precise localization. Finally, the whole system was tested together. The delivery was carried out with the improved SA-MCL, and the switching point from the delivery to the docking stage was determined by the decision mechanism. As seen in three different scenarios, it is possible to complete the localization tasks in the delivery and docking stages of smart factories by using the proposed methods.
-
Item: 3-D velocity structure of the Gulf of İzmir (Western Turkey) by using traveltime tomography (Graduate School, 2022-11-07) Sağlam Altan, Zehra ; Gökaşan Ocakoğlu, Neslihan ; 505142401 ; Geophysical Engineering. The Gulf of İzmir and its surroundings in western Anatolia are under the influence of active continental extension characterized by crustal thinning, intense seismic activity, high heat flow associated with volcanism, and geothermal activity. These features make the region attractive for both geothermal and hydrocarbon exploration. The study area and its surroundings are well investigated by crustal-scale tomography studies; however, only a few moderate-scale tomography studies exist that aim to understand its velocity structure and stratigraphic architecture, even though there are basins with proven hydrocarbon and geothermal sources across western Anatolia. Structural and stratigraphic interpretations in previous studies were performed on 2-D time-migrated seismic sections, which are far from depicting the depth domain and lack reliable velocity information. These conventional velocity estimation methods are based on the Dix inversion, which assumes a flat-layered earth model with no lateral velocity variation and small source-receiver offsets. However, the study area is far more complex than these assumptions allow. Therefore, the inversion of traveltimes of the reflected events in the seismic data is adopted as the velocity estimation method. In this study, the first 3-D Neogene velocity-depth model of the Gulf of İzmir is obtained by using traveltime tomography. Pre-processing steps such as trace editing, muting out unwanted signals, filtering undesired frequency content, and gaining to remove the effects of wavefront divergence are applied to the raw shot gathers in order to delineate reflection events on the pre-stack data and make the picking phase more accurate by removing excessive background noise.
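The Dix inversion referred to above converts RMS (stacking) velocities at two reflectors into an interval velocity, and is valid only under the flat-layer, small-offset assumptions the text criticizes. A minimal sketch of the formula, with illustrative inputs in the usage check:

```python
import math

def dix_interval_velocity(t1, v_rms1, t2, v_rms2):
    """Dix interval velocity (m/s) for the layer between two reflectors,
    from their two-way traveltimes t1 < t2 (s) and RMS velocities (m/s):
    V_int = sqrt((t2*Vrms2^2 - t1*Vrms1^2) / (t2 - t1))."""
    if t2 <= t1:
        raise ValueError("t2 must exceed t1")
    return math.sqrt((t2 * v_rms2 ** 2 - t1 * v_rms1 ** 2) / (t2 - t1))
```

Because the formula ignores lateral velocity variation and ray bending, traveltime tomography, which the thesis adopts instead, is better suited to a structurally complex basin.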
This more pickable dataset contains 401352 seismic trace recordings from eleven multi-channel seismic lines collected in the NNW-SSE oriented outer Gulf of İzmir between offshore Foça and Karaburun. The resulting grids of traveltimes were then correlated with each other at the tie-points. Additionally, the conventionally processed data was re-interpreted in detail. Three main seismic stratigraphic units (SSU1-SSU3) were interpreted on the time sections. Three subunits (SSU1a, b, and c) are also distinguished within the SSU1 seismic unit. These units are bounded above and/or below by five horizons (H1-H5). Two unconformity surfaces, between the Upper Miocene-Pliocene and the Pliocene-Quaternary sediments, are marked. For the 3-D tomography analysis, the initial velocity for the water column is set to 1500 m/s. The velocity constraints for the following layers are chosen as 1500-1780 m/s for SSU1a, 1500-2000 m/s for SSU1b, 1500-2400 m/s for SSU1c, and 1500-2800 m/s for SSU2, based on the conventional velocity analyses conducted in previous studies. The initial depth values for the reflectors H1, H2, H3, H4, and H5 are chosen as 100, 150, 300, 550, and 900 m, respectively. The principle of minimum time, which uses the analytical solution of Snell's law through an iterative procedure, is used to compute the synthetic traveltimes and ray paths within a model. The velocity fields between horizons and the depths of the horizons are updated sequentially. An iterative optimization method called the Simultaneous Iterative Reconstruction Technique (SIRT) is used to update the velocity fields by traveltime inversion. The principle of minimum dispersion of the estimated reflection points is used to update the depth and shape of the interfaces. The final tomographic inversion is carried out by using staggered grids. This final high-resolution tomographic image has provided the 3-D stratigraphic architecture and velocity distribution of the Gulf of İzmir in the depth domain.
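The SIRT update mentioned above can be sketched in a simple averaged-Kaczmarz form over a cell-based slowness model. The ray-length matrix layout and the damping-free update here are simplifying assumptions for illustration, not the thesis's actual implementation:

```python
def sirt_step(L, t_obs, s):
    """One SIRT iteration for t = L * s, where L[i][j] is the length (m) of
    ray i inside cell j, t_obs[i] the observed traveltime (s), and s[j] the
    current slowness (s/m) of cell j. Each ray's traveltime residual is
    back-projected along its path, and the per-cell corrections are
    averaged over all rays crossing that cell (the 'simultaneous' part)."""
    n_rays, n_cells = len(L), len(s)
    corr = [0.0] * n_cells
    hits = [0] * n_cells
    for i in range(n_rays):
        norm2 = sum(l * l for l in L[i])
        if norm2 == 0.0:
            continue  # ray touches no cells
        resid = t_obs[i] - sum(L[i][j] * s[j] for j in range(n_cells))
        for j in range(n_cells):
            if L[i][j] != 0.0:
                corr[j] += resid * L[i][j] / norm2
                hits[j] += 1
    return [s[j] + (corr[j] / hits[j] if hits[j] else 0.0)
            for j in range(n_cells)]
```

In practice the step is iterated, with the ray paths retraced through the updated model between iterations, until the traveltime residuals stop decreasing.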
Five seismic stratigraphic units/subunits (SSU3, SSU2, SSU1a, SSU1b, and SSU1c) are traced along the study area. These seismic units and unit boundaries are calibrated by the Foça-1 well (drilled by Turkish Petroleum) on the Pre-Stack Depth Migration (PreSDM) section of Line-25. As a result of this calibration, the acoustic basement is associated with SSU3, consisting of tuffs, sandstones, limestones, and volcanics of the Lower-Middle Miocene Yuntdağ Volcanics. They terminate onto the north-dipping horizon H5 along the southern side of the basin, which displays highly variable topography with several depressions and highs. H5 is marked as a major unconformity separating Miocene and older rocks from the younger Pliocene-Quaternary deposits, with a depth of ~200 m in the southern sector; it deepens to ~900 m in the mid-central sector, constituting a basin. It then rises to 420 m, forming ridges offshore both Foça and the Karaburun Peninsula between 15 and 20 km in the central sector. These volcanic ridges bound the basin, unlike the rest of the western Anatolian grabens, which are bounded by normal faults. The depth of horizon H5 increases considerably in the northern part of the basin, ranging from 900 m (western flank) to 1400 m (eastern flank). The basin deposits have accumulated asymmetrically across the study area, following the northwest-dipping Miocene-Pliocene unconformity surface (H5). The basin consists of two asymmetric depressions developed in the northeastern and mid-central sectors. The thickest depocenter is in the northeast (up to ~1400 m), thinning through the mid-central (~850 m) and southern (~140 m) sectors, respectively. SSU2 lies on top of the acoustic basement and corresponds to the sandstones, limestones, volcanics, and shales of the Bozköy Formation and the limestones of the Ularca Formation, dating from the Late Miocene to the Pliocene.
The deposition of this unit is mostly concentrated in the northeastern and mid-central sectors of the basin, where acoustic basement highs create small depressions. SSU2 comprises a 20-70 m sediment thickness in the SE, offshore Uzun Island. The local depression zone in the mid-central part of the basin has ~580 m thickness, whereas SSU2 has ~40 m and ~260 m thicknesses in the eastern and western flanks of the central sector. SSU2 thickens rapidly in the northern sector. The maximum thickness of ~790 m appears in the eastern flank of the northern sector, whereas the thickness gradually decreases westward down to 20 m. SSU2 is separated from the overlying unit SSU1 by horizon H4. This boundary defines the base of the Quaternary. The depth of H4 ranges from ~200 m to 480 m from southeast to northwest. It then deepens dramatically to ~860 m in the northern sector (through the outer gulf). It constitutes a small basin with a depth of ~520 m in the mid-central sector around Foça. H4 is overlain by the Plio-Quaternary sediments. From the Pliocene to the Quaternary (SSU1), the depression in the mid-central sector shifted gradually towards the eastern flank of the central sector (~440 m), while the depression in the northeastern sector expanded northwestward (~620 m). These two depression areas are separated by ridges of horizon H4 that mimic the basement high rising in the east-west direction. By contrast, the total thickness of the Plio-Quaternary sedimentary succession thins abruptly to ~180 m towards the western flank of the southern and central sectors of the basin, following the rising basement. SSU1 comprises three seismic subunits (SSU1c, SSU1b, and SSU1a). The inclination of the subunits decreases from north to south and from bottom to top. A member of the Bayramiç Formation, SSU1c, dated as Quaternary and consisting of conglomerates at the base overlain by sandstones and shales, is deposited on top of SSU2.
The thickness of SSU1c is 20 m at the southern termination of the basin and gradually increases northward up to ~280 m. Above it lie two other members of the Bayramiç Formation, SSU1b and SSU1a, separated by horizons H3 and H2. SSU1b also consists of a similar sequence of conglomerates, sandstones, and shales. The thickness of SSU1b varies between ~100-300 m. SSU1b accumulates up to ~280 m in the eastern flank of the central sector, where the underlying horizon deepens. Horizon H2 represents the upper surface of seismic unit SSU1b. SSU1a consists of Quaternary sandstones. It has a 40 m thickness in the southern sector, ~60 m in the central sector, and ~140 m in the northern sector. Finally, H1 is located on top of seismic unit SSU1a and represents the seafloor. The seafloor deepens smoothly from south to north, from ~45 m to ~125 m. Strike-slip faulting with a generally compressional character (the Karaburun Fault Zone and Urla Fault Zone) is the main cause of the recent deformation of both the basement morphology and the overlying sedimentary succession. Overall, the Gulf of İzmir is quite different from the surrounding grabens (such as the Gediz and Bakırçay grabens) in terms of structural and stratigraphic configuration. Our results also provide the first 3-D velocity model reconstructed from the reflected arrivals of the sedimentary sequence boundaries for the whole outer Gulf of İzmir. The model is presented in a set of horizontal depth slices at different depths and vertical cross-sections displaying velocity variations through the study area. Significant low-velocity zones (LVZs) (1650 ≤ Vp ≤ 1850 m/s) are seen in the horizontal depth slices and vertical cross-sections in the eastern flank down to ~500 m and along the northwestern part of the basin down to ~1 km.
Another feature observed in the vertical sections is the presence of high-velocity zones (HVZs) (2150 ≤ Vp ≤ 2350 m/s) between low-velocity zones in the mid-central and north-central sectors of the basin. This observation is supported by the P-wave velocity perturbation, which defines the velocity deviations from the initial velocity obtained using the tomography results. The velocity variation seen in the eastern flank of the study area overlaps a lenticular structure that appears on both the time-migrated and PreSDM sections within the Bayramiç Formation. It is ~500 m long and ~90 m wide and is bounded by steeply dipping faults on either side. Amplitude anomalies appear to be present at both the upper and lower surfaces of the structure. AVO (amplitude versus offset) analysis, modeling, and seismic attributes showed the presence of possible Direct Hydrocarbon Indicators (DHIs) from the top and base reflectors of the established lenticular-shaped structure, interpreted as bright and flat spots, respectively. Our observations suggest the presence of a reservoir within the Quaternary-aged Bayramiç Formation, which consists of conglomerates, sandstones, and shales. It is sealed by shales of the Bayramiç Formation and bounded by an unconformity at the base, together with the strike-slip faults on both sides. Therefore, it is concluded that this is a combined structural-stratigraphic trap. The bounding strike-slip faults allow the migration of hydrocarbons from greater depths into the local reservoir. The presence of another LVZ on top of the reservoir, along the strike-slip faults, indicates leakage breaching up to the seafloor. The fault-controlled LVZs in the Plio-Quaternary sediments of the Gulf of İzmir are interpreted as indications of gas/fluid flow and heat transfer from a deeper source to the shallow subsurface.
The depth information provided by this thesis will further increase our understanding of the link between the 3-D stratigraphic architecture and the dominant tectonic forces, and it provides a solid foundation for future numerical simulation studies on possible fluid/heat transport mechanisms. The P-wave velocity characteristics provided in this thesis can be used to detect lateral and vertical velocity variations, including dramatic ones that indicate links between faults (tectonics), fluid escape, and gas occurrences (hydrothermal processes), and to discuss potential geohazard risk beneath the Gulf of İzmir.
-
Item: 4-channel configurable constant-current/voltage mode biphasic implantable neurostimulator ASIC with channel-centric active charge balancer (Graduate School, 2022-03-02) Cakalı, Anıl ; Karalar, Tufan Coşkun ; 504161229 ; Electronics Engineering. Electrical stimulation is a technique that inhibits or excites neuron activity by injecting charge into a target tissue. Neural stimulators are used as a treatment method for diseases and for the restoration of dysfunctional organs. Application fields of electrical/neural stimulation include Sacral Nerve Stimulation, used for the treatment of bladder and urinary dysfunction; Deep Brain Stimulation (DBS), used for the treatment of diseases such as Parkinson's disease, epilepsy, tremor, depression, and obsessive-compulsive disorder; Spinal Cord Stimulation, used for the treatment of chronic pain syndrome; Retinal Stimulation, used for recovering visual function; and Cochlear Stimulation, used for recovering hearing function. Considering these application fields, most neurostimulator/neuromodulation devices are implanted in the human body. These are battery-powered devices that must have a long battery life; because of that, an Application Specific Integrated Circuit (ASIC) is needed for implantable applications, considering application specifications such as the target nerve, power consumption, and output properties. Neurostimulators interface with target neurons through electrodes. Charge accumulation at the electrode-tissue interface may cause pH variation of the electrolyte, creation of a toxic surface at the electrode-tissue interface, and variation of the electrode-tissue impedance. Most importantly, it may cause permanent nerve damage. Using biphasic stimulation together with an active charge balancer structure is the preferred method to achieve ideally zero net charge on the target tissue.
Constant-current stimulation, constant-voltage stimulation, and constant-charge stimulation methods are presented in the literature. Constant-current stimulation is the safest stimulation method. Ideally, zero net charge on the tissue may be achieved by controlling the anodic and cathodic current amplitudes and durations in a biphasic manner. For constant-voltage stimulation, the amplitude of the current that flows through the electrode-tissue interface is determined by the impedance of that interface. For that reason, it is not easy to control the charge transferred to the tissue. Constant-charge stimulation is a useful method to achieve charge balancing by using switched-capacitor structures. The disadvantage of constant-charge stimulation is that it needs large capacitors, which cause difficulties for on-chip implementation. In the literature, neurostimulator ASICs are designed for only constant-current mode stimulation or only constant-voltage mode stimulation. Similarly, most charge balancer circuits are designed for just one of the two modes. In this work, a novel active charge balancing scheme that works with both constant-current mode and constant-voltage mode for monopolar/bipolar/tripolar/quadripolar electrode polarities is proposed. Furthermore, a novel channel circuit and novel channel-centric active charge balancer circuit topologies that support both constant-current and constant-voltage stimulation modes in the same structure are developed. Constant-voltage mode stimulation has long been considered the standard technique for DBS applications. On the other hand, constant-current mode stimulation is emerging as an alternative solution for DBS. Supporting both modes with active charge balancing makes this work appropriate for DBS applications.
The purpose of this work is to increase the flexibility and safety of neurostimulators, because it allows switching the stimulation mode after surgery and supplies active charge balancing for both stimulation modes. The neurostimulator ASIC consists of 4 channels. Each channel consists of an N-Block, a P-Block, and a Channel-Centric Active Charge Balancer. In constant-current stimulation mode, each channel is configurable to supply ground, 10 V, a 0-1 mA configurable sink current, or a 0-1 mA configurable source current. In constant-voltage stimulation mode, each channel is configurable to supply ground, 10 V, a 1-5 V configurable low voltage, or a 5-9 V configurable high voltage. The N-Block circuit is designed to supply ground, a 0-1 mA configurable sink current, or a 1-5 V configurable low voltage. The P-Block circuit is designed to supply 10 V (as VDD), a 0-1 mA configurable source current, or a 5-9 V configurable high voltage. The stimulation period, anodic phase time, and interphase delay time are configurable parameters. The cathodic phase duration is not configurable, because it is controlled asynchronously using the outputs of the Channel-Centric Active Charge Balancer. The N-Block and P-Block circuits are similar, complementary structures. The supply voltage of the stimulator circuit was chosen as 10 V to prevent headroom problems. Considering the high-voltage supply requirements, the Taiwan Semiconductor Manufacturing Company (TSMC) 0.18 um Bipolar-CMOS-DMOS (BCD) technology process was chosen. Relatively high biasing currents and enable/disable circuits were used in the analog blocks to achieve higher performance with lower power consumption. The actual channel current is estimated from the differences of internal currents, which are mirrored to the channel-centric active charge balancer circuit and used for charge balancing. The timing setting resolution was chosen as 1 us.
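The idea of terminating the cathodic phase asynchronously, once the injected charge is balanced, rather than by a preset duration, can be sketched at the stated 1 us timing resolution. This is a behavioral sketch of the concept, not the ASIC's circuit; the current and duration values used in the check are illustrative:

```python
def run_biphasic(i_anodic_ua, t_anodic_us, i_cathodic_ua, tick_us=1):
    """Simulate one biphasic pulse. The anodic phase injects
    Q = I * t (uA * us = pC); the cathodic phase then runs tick by tick
    until a charge monitor sees the net charge reach (or cross) zero,
    mimicking asynchronous cathodic-phase control by a charge balancer.
    Returns (cathodic duration in us, residual charge in pC, <= 0)."""
    q_pc = i_anodic_ua * t_anodic_us  # net charge after the anodic phase
    t_cathodic = 0
    while q_pc > 0:
        q_pc -= i_cathodic_ua * tick_us
        t_cathodic += tick_us
    return t_cathodic, q_pc
```

When the anodic charge is not an exact multiple of the per-tick cathodic charge, a small residual remains, which is why practical designs pair biphasic stimulation with an active balancer that removes the leftover charge.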
All analog blocks used in the N-Block and P-Block were designed in Cadence Virtuoso considering timing, voltage and process constraints. DC, AC, transient and stability simulations were run with Cadence Spectre to verify the analog subblocks. Transient simulations were run to verify the constant-current and constant-voltage stimulation mode behaviors of the N-Block and P-Block. Maximum current error results for constant-current stimulation, maximum voltage error results for constant-voltage stimulation and channel current estimation error results for both stimulation modes are given as simulation results. The channel centric active charge balancer was designed in the Cadence environment. Transient simulations covering the stimulation duration and current amplitude boundaries were run with Cadence Spectre to verify functionality and determine performance; charge errors are presented as simulation results. The Register Transfer Level (RTL) design of the stimulator controller was implemented in the Verilog Hardware Description Language (HDL). Synchronous state machines implement the stimulator controller, while asynchronous digital circuits handle the outputs of the active charge balancer circuits. The stimulator controller was synthesized with the Cadence Genus tool, and place and route was performed with the Innovus tool. The digital blocks were integrated with the analog blocks in the Cadence environment, and Analog-Mixed Signal (AMS) simulations with random test vectors verified the behavior of the neurostimulator ASIC for both constant-current and constant-voltage stimulation modes. In conclusion, a 4-channel configurable constant-current/voltage mode biphasic implantable neurostimulator ASIC with a channel centric active charge balancer was verified by AMS simulations for both stimulation modes. 
AMS simulation results show that the ASIC functions correctly, and the proposed channel centric active charge balancing scheme is verified for both stimulation modes.
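The charge-balance reasoning above can be illustrated with a minimal sketch: for a rectangular biphasic pulse, the net charge is the difference of the anodic and cathodic amplitude-duration products, and zero net charge follows from matching them. The function names and the 500 uA / 100 us numbers below are illustrative examples, not values from the ASIC.

```python
# Illustrative sketch (not the ASIC's circuit): net charge of a
# rectangular biphasic current pulse, the quantity a charge balancer
# tries to drive to zero. All names and numbers are hypothetical.

def net_charge(i_anodic_ua, t_anodic_us, i_cathodic_ua, t_cathodic_us):
    """Net charge (pC) delivered by one biphasic pulse.

    Positive anodic and cathodic magnitudes are assumed; the cathodic
    phase carries charge of the opposite sign (uA * us = pC).
    """
    return i_anodic_ua * t_anodic_us - i_cathodic_ua * t_cathodic_us

def balance_cathodic_duration(i_anodic_ua, t_anodic_us, i_cathodic_ua):
    """Cathodic phase duration (us) that zeroes the net charge."""
    return i_anodic_ua * t_anodic_us / i_cathodic_ua

# A 500 uA, 100 us anodic phase balanced by a 250 uA cathodic phase:
t_c = balance_cathodic_duration(500, 100, 250)   # 200 us
residual = net_charge(500, 100, 250, t_c)        # 0 pC
```

In the ASIC itself the cathodic phase is terminated asynchronously by the charge balancer rather than computed in advance; this sketch only shows the balance condition being enforced.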
-
ÖgeA new multi-band frequency selective surface design effective over a wide range of incidence angles at 4.5G frequencies(Lisansüstü Eğitim Enstitüsü, 2022-07-18) Balta, Şakir ; Kartal, Mesut ; 504132311 ; Telekomünikasyon MühendisliğiWith the world's growing population and advancing technology, the use of cellular wireless systems keeps increasing; the intensive use of a limited set of frequency bands raises interference between signals, which can affect the operation of many sensitive electronic devices. Moreover, since no system exists to block these frequencies, people are exposed to them everywhere in daily life, at home and at the office, and this exposure can degrade their health and quality of life. As a remedy for such problems, the work presented in this thesis develops frequency selective surface (FSS) coating products that are easy to manufacture, low in cost and broadly applicable, which is important for limiting the potential harm of technology to human health and improving quality of life. Today, the mobile communication system known worldwide as IMT Advanced and in Turkey simply as 4.5G is in use, covering the 800, 900, 1800, 2100 and 2600 MHz frequency bands. The main aim of the thesis is to block these frequency bands. Blocking them reduces the effect of radio waves on human health, allows the surface to be used as a blocker in places where mobile communication is undesired, and also prevents interference between signals arriving at different frequencies. A further aim is to block these bands while not blocking the license-free bands that fall within the same range but see heavy everyday use, such as 2.4 GHz wireless networks. 
Considering that wireless networks will be used far more intensively with the Internet of Things in the near future, the importance of this work, which blocks only the targeted frequencies while leaving wireless networks untouched, will keep growing. Within the scope of the thesis, the goal was therefore to design a structural surface material that acts as a band-stop filter, blocking radio waves in the 4.5G frequency bands without blocking any other band, and to minimize the signal interference effects arising in these bands. In addition, the designed FSSs were intended to remain effective over as wide a range of incidence angles as possible. The material was required to have a transmission coefficient (S_21) of at most -10 dB in the stop band and close to 0 dB in the pass band, and to maintain the intended frequency characteristics for different incidence angles and polarizations of the electromagnetic wave. Since the frequency characteristic of an FSS depends on the periodic element geometries forming the surface, a wide variety of element geometries were examined in the literature. Similarly, the dielectric layers onto which the periodic element geometries are printed also affect the frequency characteristic of the surface, and these effects have been studied in detail in the literature. The analysis methods for FSSs were also reviewed in the thesis. Analytical solutions of the wave equation exist only for a few simple FSS geometries; for all other geometries, solutions can be obtained only by numerical methods. With the rapid progress of computer technology, numerical analysis methods have found wide application in this field. 
The literature shows that numerical methods such as the Finite Difference Time Domain (FDTD) method, the Finite Element Method (FEM) and the Method of Moments (MoM) are used to analyze FSS geometries, and that Equivalent Circuit Models are also used for FSS analysis. Among these methods, the FSSs specified in the design phase were analyzed with the Finite Element Method, and the transmission and reflection coefficients were computed over the frequency range of interest. The Ansoft HFSS program can analyze such structures with the Finite Element Method. Optimization of the FSSs was carried out in HFSS, guided by the equivalent circuit approach and using the program's parametric analysis feature; the program was used actively throughout the thesis. Three different designs with multiple frequency characteristics were developed, keeping the number of resonances as small as possible to reduce interference between frequency bands. In addition, to avoid blocking the operating frequencies outside the targeted bands, effort was spent on sharp-edged band-stop filters with the narrowest possible stop bands. Throughout this work, the designed FSSs were intended to remain effective at different incidence angles and polarizations; for this purpose, symmetric geometries with dimensions much smaller than the wavelength were used. One of the biggest problems encountered in multi-band FSS designs was the mutual interference between the different geometries designed for each frequency band, so many geometries were investigated and different approaches were developed to solve this problem. 
To develop a low-cost product, the designs were realized on a single-layer FR4 substrate of 1 mm thickness with a dielectric constant of 4.54 and a loss tangent of 0.02; although the response of FR4 at radio frequencies is poor, the intended goals were still achieved. The analyses and optimization studies were performed in Ansoft HFSS, and surface current density plots were produced to demonstrate the effectiveness of the geometries in each frequency band. Measurements of the designs realized on the FR4 substrates were taken and compared with the simulation results to validate the designs. An extensive literature survey found no comparable multi-resonant study effective over the 4.5G frequencies, so this study contributes a first result of its kind to the literature.
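The band-stop behavior reasoned about above can be sketched with a common first-order equivalent-circuit model (not the thesis's HFSS analysis): a single-resonance FSS modeled as a series LC branch shunted across free space, where the branch impedance vanishes at resonance and shorts out transmission. The L and C values below are hypothetical, chosen only to place the resonance near 2.1 GHz.

```python
import math

Z0 = 377.0                 # free-space wave impedance, ohms
L, C = 100e-9, 5.74e-14    # hypothetical equivalent-circuit values

def s21_db(f_hz):
    """Transmission (dB) through a matched line with a shunt series-LC branch."""
    w = 2 * math.pi * f_hz
    z = 1j * (w * L - 1.0 / (w * C))   # branch impedance, zero at resonance
    s21 = 2 * z / (2 * z + Z0)         # shunt element on a matched line
    return 20 * math.log10(abs(s21))

f_res = 1.0 / (2 * math.pi * math.sqrt(L * C))   # resonance, about 2.1 GHz
# Deep attenuation just off resonance, near-transparent at 2.4 GHz:
deep, passband = s21_db(2.09e9), s21_db(2.4e9)
```

A larger L/C ratio steepens the reactance slope and narrows the stop band, which is the lumped-model counterpart of the sharp-edged, narrow stop bands sought in the thesis; the actual multi-band response is computed with FEM in HFSS.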
-
ÖgeIntegrated air-conditioning design for 4x4 military vehicles(Lisansüstü Eğitim Enstitüsü, 2022-06-16) Zengin, Barbaros Bahadır ; Böke, Yakup Erhan ; 503181102 ; Isı-AkışkanThis thesis addresses the design of an air-conditioning system for 4x4 wheeled MRAP-class vehicles. In such military vehicles, since the crew spends a significant part of their time inside, in-vehicle thermal comfort conditions have an important effect on crew mobility and on in-vehicle equipment, making the air-conditioning system one of the key systems of military vehicles. Moreover, the volume occupied by the air-conditioning system matters both for crew movement space and for the placement of other equipment. Automotive air-conditioning systems typically consist of two separate systems for heating and cooling, which together occupy considerable volume inside the vehicle. Using a single integrated system for cooling and heating in a 4x4 military vehicle, instead of two separate systems, yields a significant volume saving.
-
ÖgeA Dutch disease approach into the premature deindustrialization(Graduate School, 2022-08-18) Çakır, Muhammet Sait ; Aydemir, Resul ; 412142006 ; EconomicsWe explore the main causes and consequences of the premature deindustrialization phenomenon. We argue that local currency overvaluations, mainly associated with a surge in capital inflows into emerging market economies following the deregulation of their capital accounts, severely hurt the output share of the manufacturing industry. First, we empirically establish a causal link from capital flows to local overvaluations. According to the two-way error component model, which controls for the full set of country and time fixed effects, a surge in capital flows by one standard deviation is associated with an overvaluation of 1.67 percent. To address the possible endogeneity between capital flows and the real exchange rate, we run a bivariate first-order panel vector autoregressive model, since feedback effects from overvaluation to net financial inflows might bias the fixed effects estimation. When we isolate the effect of a positive capital inflow shock of one standard deviation by the Cholesky decomposition, we find that it is associated with a statistically significant immediate overvaluation in real terms at the 95 percent confidence level. Then we construct our baseline regression model. Applying second-generation estimators allowing for cross-section dependency (Augmented Mean Group and Common Correlated Effects Mean Group), we run a panel data regression model based on a sample of 39 developing countries in Latin America, Sub-Saharan Africa, East Asia, North America, and Europe from 1960 to 2017. We find that an overvaluation of 50 percent, which corresponds approximately to one and a half standard deviations, is associated with a contraction of the manufacturing output share as high as 1.25 percent over a five-year period. 
With the turn of the new century, the developing countries also experienced a massive deindustrialization, shedding manufacturing value added as large as 1.24% of national income. Moreover, the evidence suggests that the relationship between real exchange rate misalignments and the manufacturing share in output might be nonlinear, so that manufacturing competencies eroded by real local currency overvaluations cannot simply be brought back during undervaluation periods. We also show that the baseline regression results are robust to different data sets, alternative real exchange rate/deindustrialization measurements, and dynamic model specifications that allow us to treat the real exchange rate as an endogenous variable, addressing any potential concern about simultaneity bias. As a further robustness check on our findings, we empirically examine the effects of supply chain disruptions, inequality shocks, and institutional innovations on the path of industrialization in developing countries by running a panel vector autoregressive model. We find that deterioration in income distribution unequivocally harms developing countries' bid for industrialization, while better institutions, proxied by an improvement in regulatory quality, invariably foster it. On the other hand, the effects of supply chain disruptions on the pace of industrialization follow a nonlinear path, showing the great resilience of local industries in absorbing imported input bottlenecks through intermediate input import substitution. We also provide evidence that backward participation in GVCs and regulatory quality do not mutually Granger-cause each other, and suggest that the well-established link from better governance to GVCs may be missing in the developing country case. 
Based on these empirical findings, the main policy implication of our work is the need for a comprehensive industrial policy, combined with a firm use of capital controls and macroprudential measures within a robust institutional framework; these implications are duly discussed in light of recent developments in the literature.
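The two-way error component estimation described above can be sketched on synthetic data: for a balanced panel, demeaning by country and then by year implements the two-way within transformation, and OLS on the transformed data recovers the slope net of country and time fixed effects. All data below are simulated and the "true" coefficient of 1.67 is borrowed from the abstract purely as an illustration, not a replication of the study.

```python
import numpy as np

# Synthetic balanced panel: 8 "countries" observed over 20 "years",
# with country and time fixed effects and a known slope of 1.67.
rng = np.random.default_rng(0)
n_countries, n_years = 8, 20
country = np.repeat(np.arange(n_countries), n_years)
year = np.tile(np.arange(n_years), n_countries)

x = rng.normal(size=country.size)
alpha = rng.normal(size=n_countries)[country]   # country fixed effects
gamma = rng.normal(size=n_years)[year]          # time fixed effects
y = 1.67 * x + alpha + gamma + 0.1 * rng.normal(size=country.size)

def demean(v, groups):
    """Subtract each observation's group mean of v."""
    sums = np.bincount(groups, weights=v)
    counts = np.bincount(groups)
    return v - (sums / counts)[groups]

# Two-way within transformation (exact for balanced panels),
# then OLS slope on the transformed data.
x_w = demean(demean(x, country), year)
y_w = demean(demean(y, country), year)
beta = (x_w @ y_w) / (x_w @ x_w)
```

Sequential demeaning by country and then year equals the joint two-way transformation only for balanced panels; the study's actual estimators (AMG, CCEMG) additionally handle cross-section dependence, which this sketch does not.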
-
ÖgeA generalized deep reinforcement learning based controller for heading keeping in waves(Graduate School, 2022-06-21) Beyazit, Afşin Baran ; Kınacı, Ömer ; 508191229 ; Offshore EngineeringReinforcement Learning (RL) is a machine learning method in which a learner (the agent) tries to maximize a reward by learning how to act under different environmental circumstances. The agent looks at the state of its environment (through the state vector), takes an action, and then receives a reward and the next state of its environment. The agent improves its action-taking strategy (policy) with every action it tries. RL methods have been applied to many decision-making problems, including control problems, with promising results. Unlike many traditional control methods, a model-free RL method doesn't need a model of the environment dynamics to operate, which is especially beneficial for problems where the dynamics are nonlinear or not well known. However, classical controllers are still the most widely used method of control in maritime applications. Heading keeping is a maritime control problem in which the controller's objective is to keep the heading (yaw) angle of a vehicle constant. Generally speaking, the industry standard for this problem is a traditional feedback controller such as PID. This study focuses on designing a generalized RL controller for the heading-keeping problem in waves. The study compares the designed RL controller to a traditional controller in terms of yaw error and rudder usage, and observes that the RL-based controller performs better than the traditional one. The first iterations of the RL agent had many issues. Unlike traditional controllers, RL agents don't inherently recognize that, in an idealized environment, waves coming from 0 and 180 degrees can be handled with almost zero rudder usage. On top of that, the first few agents had problems with excessive rudder usage, steady-state error, and overshooting behavior. 
All of these problems were solved in the final iteration of the RL agent. Instead of presenting only the final agent, the thesis starts off with a weak RL agent and explains how it can be improved iteratively; in this way, it shows how one might approach the problem of developing an RL-based controller. The first section gives a rough summary of RL and the problem case, explains the purpose of the thesis, and reviews previous work on marine motion control in the literature; some detailed information about the tools and simulation environment used is also given here. The second section introduces LQR controllers and designs an LQR controller for the heading-keeping problem. The third section explains RL in depth to lay the foundation for the following sections. The fourth section starts with a naively designed simple RL agent and improves it iteratively: in each development iteration, the agent is compared to the designed LQR controller, its weaknesses are analyzed, and the improvements for the next iteration are determined. The fifth section summarizes the previous sections, explains the contributions of the thesis, and discusses possible future work.
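The control loop being learned can be sketched with a toy simulation. A first-order Nomoto yaw model and a reward penalizing both squared yaw error and squared rudder usage are standard choices for this kind of setup, but the gains, weights, and the proportional-derivative stand-in policy below are illustrative assumptions, not the thesis's agent or ship model.

```python
# Toy heading-keeping loop: first-order Nomoto yaw dynamics plus an
# RL-style reward. K, T, the reward weight, and the PD stand-in policy
# are hypothetical values chosen for illustration only.
K, T, DT = 0.2, 10.0, 0.1   # Nomoto gain, time constant, time step (s)

def step(psi, r, rudder):
    """Advance heading psi (rad) and yaw rate r one Euler step."""
    r += (K * rudder - r) / T * DT
    psi += r * DT
    return psi, r

def reward(psi, psi_ref, rudder, w_rudder=0.1):
    """Penalize squared heading error and squared rudder usage."""
    return -((psi - psi_ref) ** 2 + w_rudder * rudder ** 2)

def simulate(psi0=0.5, psi_ref=0.0, steps=2000):
    psi, r, total = psi0, 0.0, 0.0
    for _ in range(steps):
        # PD stand-in for the learned policy, rudder saturated at +/-0.6 rad
        rudder = max(-0.6, min(0.6, -4.0 * (psi - psi_ref) - 30.0 * r))
        psi, r = step(psi, r, rudder)
        total += reward(psi, psi_ref, rudder)
    return psi, total

final_psi, total_reward = simulate()
```

An RL agent would replace the PD expression with a learned policy mapping the state (yaw error, yaw rate, wave information) to a rudder command, trained to maximize the accumulated reward; the rudder term in the reward is what discourages the excessive rudder usage mentioned above.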
-
ÖgeA hardware based gunshot sound detection system(Institute of Science and Technology, 2020) Akçocuk, Mustafa Koray ; Güneş, Ece Olcay ; 637005 ; Elektronik ve Haberleşme Mühendisliği Anabilim DalıWith the development of semiconductor technology, the computational capacity of embedded systems increases day by day. In this way, small devices are able to perform complex tasks, and people take advantage of embedded systems in a wide variety of areas to enhance quality of life. As a result of these technological developments, the use of tools that assist law enforcement officers in crime detection is also increasing. Available gunshot detection systems mainly focus on preventing illegal hunting, decreasing crime rates in public spaces, and detecting gunshot direction on battlefields. A review of the literature shows that machine learning methods are used in gunshot sound detection studies; however, the number of hardware-based systems for gunshot sound detection is quite small, and mostly simple methods such as cross-correlation thresholding and edge detection are implemented. This work aims to realize a gunshot sound detection system in hardware. In this context, the goal is to select a system that exploits the advantages of machine learning methods and is the most suitable for implementation in hardware. The literature indicates that mel coefficients, signal energy, and zero-crossing features perform well in detecting gunshot sounds. For this reason, these features were extracted from the audio signal and used in k-nearest neighbors (k-NN) and support vector machine (SVM) classification algorithms. An accuracy rate of 96.1538% was obtained with the k-NN classifier and 91.3462% with the SVM classifier.
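The feature-plus-classifier pipeline above can be sketched in a few lines: frame energy and zero-crossing rate (two of the three feature families mentioned) computed from a signal frame, classified by a tiny k-NN. The synthetic "noise" and "impulsive burst" frames, the class labels, and k = 3 are illustrative placeholders, not the thesis's dataset or full feature set (mel coefficients are omitted here).

```python
import numpy as np

def frame_features(x):
    """Short-term energy and zero-crossing rate of one audio frame."""
    energy = float(np.mean(x ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(x))) > 0))
    return np.array([energy, zcr])

def knn_predict(train_x, train_y, query, k=3):
    """Majority label among the k nearest training points."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

rng = np.random.default_rng(1)
# Class 0: low-amplitude background noise; class 1: impulsive burst
frames = [rng.normal(0, 0.1, 256) for _ in range(20)]
frames += [np.concatenate([rng.normal(0, 2.0, 64), np.zeros(192)])
           for _ in range(20)]
X = np.stack([frame_features(f) for f in frames])
y = np.array([0] * 20 + [1] * 20)

test_frame = np.concatenate([rng.normal(0, 2.0, 64), np.zeros(192)])
label = knn_predict(X, y, frame_features(test_frame))
```

The appeal of k-NN for a hardware target is that inference reduces to distance computations and a majority vote, with no training-time model to store beyond the reference feature vectors.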
-
ÖgeA holistic data analytics approach to ship inspection reporting(Graduate School, 2023-08-08) Biçen, Samet ; Çelik, Metin ; 512192016 ; Maritime Transportation EngineeringMaritime inspection analysis has become an emerging topic in recent years, as practical solutions are sought to improve pre- and post-inspection analysis in shipping operations such as The Oil Companies International Marine Forum (OCIMF) Ship Inspection Report Programme (SIRE), RightShip, The Tanker Management Self-Assessment (TMSA), and the Chemical Distribution Institute (CDI), and there is a growing demand for effective methodologies. The objective of this research is to contribute to this field by analyzing reported observations with a combination of natural language processing (NLP) and machine learning (ML) techniques; additionally, a statistical algorithm model is used to conduct analysis on demographic data. To achieve the objectives of the study, a robust methodology was developed that leverages the American Bureau of Shipping Maritime Root Cause Analysis Tool (ABS-MARCAT). This tool enables the systematic construction of a potential-causes database, incorporating a substantial set of 2383 observations. By employing ABS-MARCAT, the study provides a comprehensive foundation for analyzing and understanding the causes behind reported observations and for determining corrective and preventive action tips to eliminate these causes. One of the key contributions of this research is the development of an NLP-based ML algorithm. 
This algorithm plays an important role in predicting the causes of new entries and determining corrective and preventive action tips for the inspection report's observations. The algorithm's performance demonstrates high accuracy, with results varying between 0.90 and 0.98 across different causation categories. Such accuracy is promising, as it allows effective identification and classification of causes, providing valuable insights for decision-making in the maritime industry. A second contribution of this research is a statistical algorithm model that produces frequencies of causes based on independent variables such as ship name, inspector name, oil major company name, and port name. The model provides predictions about the areas to be considered according to the information available before an inspection; by presenting the frequencies of the cause categories according to the independent variables, it serves as a decision support system for predicting which inspection areas deserve attention beforehand. A third contribution is the suggestion of corrective and preventive action tips to eliminate the causes of the observations once the causes are identified. These tips, determined by maritime experts, add a further dimension to decision-making processes by providing solution suggestions after the analysis of the inspection reports. The pre- and post-inspection analysis model developed in this study holds great potential for enhancing fleet safety and efficiency. By providing maritime executives with an accurate tool to analyze inspection data, it enables them to make informed decisions and take proactive measures against potential issues. The model serves as a third-party solution for the shipping industry, offering an independent and reliable means of analyzing and assessing inspection data. 
Looking ahead, future studies are planned to further refine and expand this model. The aim is to conceptualize it as a platform-as-a-service (PaaS) offering, enabling wider access and utilization by stakeholders in the maritime industry. As a PaaS, it has the potential to become a valuable resource for multiple organizations, facilitating improved fleet safety, operational efficiency, and informed decision-making. In conclusion, this study addresses the emerging field of maritime inspection analysis by developing a robust pre- and post-inspection analysis model. Through the integration of the statistical algorithm model, NLP, ML, and the MARCAT tool, the study offers a holistic approach to analyzing reported observations and statistical data. With its high accuracy, the model has the potential to make a significant contribution to the improvement of fleet safety and efficiency, and its conceptualization as a platform as a service paves the way for wider adoption and application within the shipping industry.
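The core NLP task above, mapping an observation's free text to a cause category, can be sketched with a tiny multinomial Naive Bayes classifier. The observation texts, the two cause categories, and the classifier choice below are invented stand-ins, not SIRE/MARCAT data or the study's actual (more accurate) algorithm.

```python
from collections import Counter, defaultdict
import math

# Hypothetical training observations labeled with invented cause categories
train = [
    ("oil leak found on hydraulic pump", "maintenance"),
    ("crew unfamiliar with emergency procedure", "training"),
    ("safety valve not tested as per schedule", "maintenance"),
    ("officer unable to demonstrate drill", "training"),
]

word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    class_counts[label] += 1
    vocab.update(words)

def predict(text):
    """Multinomial Naive Bayes with Laplace smoothing, in log space."""
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

predicted = predict("pump leak not repaired")
```

A production pipeline would add tokenization, TF-IDF or embedding features, and a stronger classifier, but the structure, text in, cause category out, with corrective-action tips then looked up per category, is the same.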
-
ÖgeA molecular dynamics study of the prion protein(Graduate School, 2023-05-12) Tavşanlı, Ayşenaz ; Balta, Bülent ; 521152101 ; Molecular Biology-Genetics and BiotechnologyTransmissible spongiform encephalopathies are caused by the conversion of the cellular prion protein PrPC into a misfolded form, PrPSc. In sheep populations there is a polymorphism at positions 136 (alanine/valine), 154 (arginine/histidine) and 171 (arginine/glutamine). While the A136-R154-R171 (ARR) variant confers the highest resistance to scrapie, the V136-R154-Q171 (VRQ) variant leads to the highest scrapie susceptibility. The A136-R154-Q171 (ARQ) variant, with intermediate resistance, is considered the wild type. To identify important conformational rearrangements at the initial steps of misfolding, microseconds-long restrained and unrestrained molecular dynamics simulations were performed at neutral pH, at 310 K and 330 K, on naturally occurring prion variants. The unfolding potentials of all three helices of the prion protein structure were also examined at different temperatures with the help of replica exchange molecular dynamics simulations. Moreover, the unfolding potential of helix 1 and the interactions of helix 1 with some other sequences were examined under different pH conditions. Susceptibility to the disease might be related to the hydrophobic side chain of the valine at position 136, which seemed to ease the unfolding process, whereas the arginine at position 171 acted as a clamp keeping helix 2 and helix 3 of the cellular prion protein structure together. This might be why VRQ is the most susceptible variant while ARR is the most resistant. On the other hand, the unfolding of helix 1 played the most critical role, since it was the most stable helical structure in all conducted simulations. Inter- and/or intramolecular salt bridges of helix 1 were important for keeping helix 1 stable in both the helical and the unfolded structure. 
Energy calculations showed that little energy was needed to unwind helix 1. This helical structure of the hydrophilic H1 might be broken by another hydrophilic sequence of the same prion protein, and its unwinding might be the key step that catalyzes the complete unfolding of the protein.
-
ÖgeA multi-disciplinary design approach for conceptual sizing of advanced rotor blades(Lisansüstü Eğitim Enstitüsü, 2022-07-19) İbaçoğlu, Hasan ; Arıkoğlu, Aytaç ; 511072102 ; Aeronautics and Astronautics EngineeringRotorcraft are versatile vehicles thanks to their unique hovering capability; however, their forward flight speed limitations and high noise levels restrict their use in wider areas. The rotorcraft industry is therefore increasingly pursuing development projects on advanced rotorcraft, called compound rotorcraft, to overcome these problems. The conceptual design phase is the beginning of a development project, and the most critical decisions are taken at this stage, so vehicle-level optimization algorithms are needed to support decision-making and lead the project correctly. On the other hand, because there are too many design parameters, simplified low-fidelity approaches must be used during conceptual design optimization to avoid impractical solution times. Furthermore, rotorcraft with advanced rotors require advanced design approaches to obtain superior performance, structural, and noise-level characteristics; advanced conceptual design approaches are therefore needed to resolve this contradiction. The rotor is the most critical component of a rotorcraft and the source of most of its problems, such as lack of performance and noise, so rotor blade optimization is the main issue in the conceptual design phase at the beginning of a project. A multidisciplinary rigid rotor blade design optimization approach suitable for the conceptual design, sizing, and evaluation stages of helicopter development processes is suggested. Performance, structural strength of the blade, and noise-level predictions are considered in the objective function. 
The blade outer surface and structure are represented by a geometrical model in which the chord, thickness ratio, camber ratio, and twist distributions along the blade radial stations can be defined as linear or nonlinear functions; the distribution of the number of layers for both skin and spar is also defined parametrically in the presented model. Low-fidelity but sufficiently accurate analysis methods were chosen to reduce computing time. Performance analysis and sizing of the vehicle were obtained with ROTAP, an in-house helicopter sizing code based on Blade Element Momentum Theory (BEMT). A trim algorithm for compound helicopters that may have additional lifting surfaces and thrust components is suggested. Airfoil characteristics are calculated by the well-known panel method code Xfoil. Both of these codes were modified and embedded in the code developed for this study. Structural analysis uses a 1D FEM approach: cross-sectional properties of the composite beam are calculated by VABS, and displacements under load are calculated by GEBT. Reduced Ffowcs Williams-Hawkings equations are used to estimate loading, thickness, and high-speed impulsive noise levels. A hybrid optimization algorithm is suggested to obtain optimal results: Sequential Quadratic Programming (SQP) finds local optimal points, and the global optimum is then searched by the Response Surface Method (RSM) over the local optima iteratively. An RSM-based surrogate modeling, evaluation, and optimization tool was also developed for manual inspection of the design space. As a case study, a multi-objective aerodynamic performance optimization of an aircraft propeller is performed.
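The hybrid local/global idea above, run a local optimizer from several starting points and then pick the best local optimum, can be sketched on a toy problem. The double-well objective and plain gradient descent below are illustrative stand-ins for the rotor objective and SQP; the RSM layer over the local optima is omitted.

```python
import random

def objective(x):
    """Illustrative double-well stand-in for the rotor objective.

    Two local minima near x = +1 and x = -1; the 0.2*x tilt makes
    the left well the global one."""
    return (x * x - 1.0) ** 2 + 0.2 * x

def gradient(x):
    return 4.0 * x ** 3 - 4.0 * x + 0.2

def local_descent(x, lr=0.01, steps=500):
    """Gradient descent as a stand-in for an SQP local search."""
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

def multistart(n_starts=8, seed=0):
    """Best local optimum found from several random starting points."""
    rng = random.Random(seed)
    candidates = [local_descent(rng.uniform(-2.0, 2.0))
                  for _ in range(n_starts)]
    return min(candidates, key=objective)

best = multistart()
```

A single local search started in the right-hand basin would settle on the inferior local optimum near x = +1; the multi-start wrapper is what recovers the global one, which is the role the iterative RSM search plays over the SQP results in the thesis.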
-
ÖgeA new approach in studying the engineering behavior and mechanical properties of artificial bonded soils in the laboratory(Graduate School, 2022-01-31) Ricardo, Richard Vall Ngangu ; Lav, Musaffa Ayşen ; 501142303 ; Soils Mechanics and Geotechnical EngineeringThe construction of structures on structured soils, or the exploitation of such materials for construction purposes such as road pavement projects, has gained importance over time, and in some parts of the world their study has become a necessity. Such soils, like residual soils, are widely encountered in tropical and subtropical regions. Even though their names may vary with local culture or morphology, they all have bond structures in common, and this property is a key parameter of these soils. To study their behavior more effectively, the use of artificial bonded samples in the laboratory has been adopted, offering an effective simulation. In the present study, the behavior of residual-soil-like material has been investigated under undrained conditions in triaxial equipment using a large number of artificial samples made in the laboratory. The artificial bonded and unbonded samples were made from a mixture of sand, kaolin, and water; a thermal process was applied to the bonded specimens, whereas the unbonded samples were not fired. A preliminary investigation was carried out on four different particle size distribution curves. In these gradation curves, the dry kaolin/sand ratio and the kaolin particle size distribution were kept the same; only the sand grain size distribution was varied. The study was conducted on the chosen best-fitted sand-kaolin gradation curve. Besides the triaxial tests, a direct shear box apparatus was also used for comparative purposes. For every type of tested material, three different initial effective confining pressures or normal stresses were applied, and throughout this process five different bonding levels were used. 
Several properties of such soils were examined, among them the stress-strain behavior, the pore water pressure evolution, the stress ratio, and other strength parameters. The equivalent artificial specimens in an unbonded state were used to gain a better understanding of the mechanical characteristics. A novel approach was investigated and established, based on a new parameter called the bonding index (B_i). This parameter was derived from the bounding surface, one of the most important features of bonded soils studied under triaxial tests, and the proposed method was evaluated as effective and practical. The strength parameters of the bonded soils, such as the cohesion intercept, the angle of internal friction, the peak strength, and the stress ratio, were found to be directly related to B_i, which captured well the enhancement of bonding. Furthermore, B_i can be used to define the confining stress level: a B_i close to zero implies the highest stress level for the artificial bonded soils, while, independent of the stress level, all unbonded soils display a B_i equal to zero. The coupled effect of B_i and the confining pressure was grouped into three main stages: a first stage at lower confining stresses, where a remarkably high value of B_i is recorded; a second stage of moderate stress; and a third stage where the smallest B_i value was observed. Each stage was associated with a particular behavior of these soils according to the bonding level present. It is worth pointing out that a soil sample with higher B_i was found to be less ductile. The suggested method was observed to be an appropriate alternative means for the geotechnical evaluation and analysis of the behavior of structured soil materials. 
Comparison of the results of the CIU tests and the DSTs revealed good agreement for weakly bonded and unbonded samples, particularly for the strength parameters, the cohesion intercept, and the angle of internal friction. However, for highly bonded materials, an important divergence was observed, with an overestimation by the DST results. A study of the debonding process was carried out through a new approach. This method was constructed from the deviatoric stress increment (∆q) against axial strain (ε_a) curves, drawn on a natural scale. Six important feature points were found to be typical of bonded soils, while only two of them were observed for unbonded samples. The first yield was identified at the initial point, after which the slope of ∆q decreased significantly, coupled with the maximum pore water pressure increment, d∆u_max. This point marks the starting point of the debonding process. The second point, at ∆q_max, is the second yield, a point of major loss of strength. The third and fourth points were at d∆u = 0 and ∆q = 0 (q_max), respectively, while the fifth point was identified where ∆q reaches its minimum, ∆q_min. The last point was at the critical state or the equivalent state. Every point represents a particular behavior state of bonded soils. Throughout the study, it was observed that the confining pressure considerably influences the response of bonded soils. For example, the aforementioned six features, specific to bonded soils, were found to be reduced to only two points, particularly for weakly and moderately bonded materials, with the increase of σ_3 from 30 kPa to 700 kPa. Furthermore, higher values of the bonding index were achieved at lower confining stresses. Therefore, for a better understanding of the behavior of bonded soil materials, it is recommended to conduct such investigations at lower initial effective stress, especially for the analysis of the debonding process.
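The cohesion intercept and angle of internal friction compared above are conventionally obtained by fitting the Mohr-Coulomb envelope τ = c + σ_n·tan(φ) to peak strengths measured at several normal stresses. The sketch below illustrates that fit with a simple least-squares line; the stress values are hypothetical and are not taken from the thesis.

```python
import math

def mohr_coulomb_fit(sigma_n, tau):
    """Least-squares fit of tau = c + sigma_n * tan(phi).

    Returns the cohesion intercept c (same units as tau) and the
    angle of internal friction phi in degrees.
    """
    n = len(sigma_n)
    mean_s = sum(sigma_n) / n
    mean_t = sum(tau) / n
    slope = sum((s - mean_s) * (t - mean_t) for s, t in zip(sigma_n, tau)) / \
            sum((s - mean_s) ** 2 for s in sigma_n)
    c = mean_t - slope * mean_s
    phi = math.degrees(math.atan(slope))
    return c, phi

# Hypothetical peak direct-shear results: (normal stress, shear stress) in kPa.
c, phi = mohr_coulomb_fit([30.0, 100.0, 300.0], [45.0, 85.0, 200.0])
print(f"c = {c:.1f} kPa, phi = {phi:.1f} deg")
```

In practice each point comes from one test at a fixed normal stress (or initial effective confining pressure), which is why three stress levels per material are used in the study.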
-
ÖgeA new numerical approach for the Sauter mean diameter in high speed diesel engines(Institute of Science and Technology, 1993) Buğdanoğlu, Selim ; Sağ, Osman Kamil ; 39429 ; Naval Architecture and Marine EngineeringThis thesis is concerned with ship propulsion bearings and a computer-aided calculation and operation program for them. The main motivation of the study is the lack of studies and knowledge concerning ship propulsion bearings. Therefore, the arrangements, preliminary design parameters, performance, routine maintenance tasks, and troubleshooting procedures of ship propulsion bearings are discussed. The thesis also attempts to provide guidance on these subjects through the "Computer Aided Calculation and Operation Program for Propulsion Bearings", which was written entirely by the author. This computer program, written in the MATLAB programming language, is capable of calculating preliminary design parameters and performance, and provides the user with routine maintenance tasks and troubleshooting procedures for ship propulsion bearings. Its key features are a flexible menu configuration with visual menu buttons and easy solutions through the use of graphics. Furthermore, water-lubricated rubber bearings are discussed in detail, previous experiments and statistical values are evaluated, and good results are obtained. It is believed that this study will be a useful guide on ship propulsion bearings for designers, operators, and researchers.
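One preliminary design parameter conventionally computed for journal-type bearings of this kind is the Sommerfeld number. The sketch below shows the textbook formula only; the function and the input values are hypothetical and are not taken from the author's MATLAB program.

```python
def sommerfeld_number(r, c, mu, N, P):
    """Sommerfeld number S = (r/c)^2 * mu * N / P for a journal bearing.

    r:  journal radius (m)
    c:  radial clearance (m)
    mu: lubricant dynamic viscosity (Pa*s)
    N:  rotational speed (rev/s)
    P:  unit load, i.e. load per projected bearing area (Pa)
    A higher S generally indicates a thicker hydrodynamic film.
    """
    return (r / c) ** 2 * mu * N / P

# Illustrative stern-tube-scale values (hypothetical, for demonstration only).
S = sommerfeld_number(r=0.25, c=0.0005, mu=0.03, N=2.0, P=4.0e5)
print(f"S = {S:.4f}")
```

Dimensionless groups like this are what allow a single program to cover bearings of widely different sizes from the same design charts.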
-
ÖgeA new public key algorithm and complexity analysis(Graduate School, 2023-06-23) Çağlar, Selin ; Özdemir, Enver ; 707201029 ; Cybersecurity Engineering and CryptographyWith the development of technology, many processes have begun to digitize. As a result of this digitalization, digital communication has become inevitable in our lives. Digital communication is faster and easier to access than traditional communication methods. Especially with the Covid-19 pandemic, the contribution of digitalized processes to our daily life has been visibly felt. As a result of digitization, a lot of data belonging to different data classes has been transferred to the digital environment. The transfer of information to digital media has brought about a change in the methods of storing and using data. At this point, the importance of issues such as data privacy and security has increased and the concept of secure digital communication has come to the fore. Secure digital communication deals with the provision of cornerstones of security such as confidentiality, integrity, and authentication while transferring data over digital channels. Confidentiality is the process of preventing unauthorized parties from viewing sensitive data and ensuring that only those who have been given permission can do so. This can be achieved through data encryption, access controls, and secure channels. Integrity refers to the assurance that data remains unaltered and uncorrupted during transmission, storage, and processing, ensuring that the data can be trusted and relied upon. Techniques such as digital signatures and hash functions can be used to verify the integrity of data. Verifying a user's or a device's identity when they want to access data or services is referred to as authentication. This is typically achieved through the use of digital signatures, which are cryptographic techniques that provide a way to verify the authenticity of data by verifying the identity of the sender. 
Together, these three principles form the foundation of secure communication. When sharing data in a public environment, the data to be transferred must be protected. In other words, there is a need to ensure the principle of confidentiality, which is the main starting point of this study. Cryptography, which enables encryption structures, is used to ensure confidentiality. Symmetric key cryptography, which is more efficient in terms of key length and cryptographic operations and uses the same key for encryption and decryption, is widely used in encryption processes. In symmetric key cryptography, the parties that encrypt and decrypt the data must use the same cryptographic key, and this key must be shared securely between them. Asymmetric key cryptography is used for sharing the symmetric key, especially in processes established in a public environment where the parties have no opportunity to exchange keys physically. Asymmetric key cryptography is based on the use of a key pair consisting of a public and a private key. The public key can be shared openly and is used by other parties to send encrypted data. The private key, on the other hand, is used to decrypt the received encrypted data and must be kept securely by the owner of the key pair. Asymmetric key cryptography provides both confidentiality and authentication. The fact that it can also provide authentication increases security in key exchange processes. After the parties verify each other cryptographically during the key exchange, asymmetric key cryptography provides an environment for sharing the symmetric key that will be used to secure the communication. The RSA algorithm is one of the oldest and most widely used asymmetric key algorithms. The security of the algorithm is based on the difficulty of factoring integers. 
In the RSA algorithm, the public key modulus is the product of two large prime numbers of the same size. Revealing these two prime numbers is enough to break the algorithm. At the same time, there is the possibility of recovering the message from the encrypted data without factoring; this is called the RSA problem. Research has shown that there may be an easier way to recover a message from encrypted data than factoring. If an effective method is developed for the RSA problem, the security of many RSA-based systems will be under threat. In this thesis, a new public key algorithm is proposed as an alternative to RSA in the event that the RSA problem is solved. This algorithm is based on the use of nodal curves, and its group structure is different from that of RSA. In the proposed algorithm, the discrete logarithm problem is thought to be harder, since the group structure in which the algorithm works is based on polynomial arithmetic and is also inspired by elliptic/hyperelliptic curves. At this point, it is assumed that the proposed algorithm may be more resistant to the problem observed in RSA. In addition, a new group operation (addition) algorithm is presented, obtained by modifying the Mumford representation and the Cantor algorithm in order to perform the group operation on nodal curves. A performance comparison between the presented group operation on nodal curves and the Cantor algorithm has been made, and the new group operation was found to be more efficient. Moreover, the proposed algorithm behaves probabilistically: even if the data to be encrypted does not change, the resulting ciphertext can differ between encryptions. The RSA algorithm, by contrast, is deterministic; additional padding is needed to produce different encrypted results from the same data. 
Since the proposed public key algorithm is based on polynomial arithmetic, it has no performance advantage over the RSA algorithm; there is a trade-off between security and performance. In order to show the practical applicability of the presented solution, a performance comparison with the RSA algorithm has also been made. The performance problem is caused by the exponential increase in the secret key size with the increase in the degree of the nodal curve used. In other words, the proposed algorithm is slower than RSA in the decryption phase. However, since the decryption process in asymmetric key cryptography is generally not performed by individual users, it is thought that powerful servers will not be affected by this performance problem. During the tests, the SageMath library and the Python programming language were used.
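The determinism of RSA contrasted above is easy to see with textbook (unpadded) RSA: encrypting the same message twice always yields the same ciphertext. The toy sketch below uses classic small primes for illustration only; it is far too small to be secure and is unrelated to the thesis's nodal-curve scheme.

```python
# Textbook RSA with toy parameters -- illustration only, not secure.
p, q = 61, 53
n = p * q                      # public modulus, the product of two primes
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    """Textbook RSA encryption: c = m^e mod n (no padding)."""
    return pow(m, e, n)

def decrypt(c):
    """Textbook RSA decryption: m = c^d mod n."""
    return pow(c, d, n)

m = 42
c1, c2 = encrypt(m), encrypt(m)
assert c1 == c2                # deterministic: same plaintext, same ciphertext
assert decrypt(c1) == m
```

Randomized padding such as OAEP is what real deployments add to break this determinism, which is exactly the property the proposed probabilistic scheme provides natively.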
-
ÖgeA novel antenna configuration for microwave hyperthermia(Graduate School, 2022-11-28) Altıntaş Yıldız, Gülşah ; Akduman, İbrahim ; Abdulsabeh Yılmaz, Tuğba ; 504182309 ; Telecommunications EngineeringBreast cancer affects approximately 2.5 million women each year, and the consequences can be fatal. When treated correctly, however, the survival rates are very high. Surgical operations such as lumpectomy or mastectomy are invasive techniques that remove part of, or the whole, breast. For early-diagnosed cancers and post-surgical patients, the most used therapy techniques are radiotherapy, chemotherapy, and other anti-cancer agents. The economic and psychological repercussions may be minimized by increasing the efficiency of these treatments. It has been shown that artificial hyperthermia, i.e., elevated temperature levels in the cancerous regions, increases the effectiveness of these modalities. Microwave breast hyperthermia (MH) aims to raise the temperature at the tumor location above its normal level. During the procedure, unwanted heated regions called hotspots can occur. The main aim of MH is to prevent hotspots while obtaining the necessary temperature at the tumor. The absorbed power per kilogram of breast tissue, the specific absorption rate (SAR), needs to be adjusted for controlled MH. The choice of the MH applicator design is important for a superior energy focus on the target. Although hyperthermia treatment planning (HTP) changes for every patient, the MH applicator is required to be effective for different breast models and tumor types. In the first part of the thesis, linear antenna arrays are implemented as MH applicators. We presented focusing maps as an application guide for MH focusing by adjusting the antenna phase values. Furthermore, these focusing maps put forward the basic principle of focusing the energy inside the breast. 
By sub-grouping the antennas, we obtained two main phase parameters that control the horizontal and vertical focus change. By adjusting these two phase values, we could focus the energy onto the target locations, and we showed that with this simple structure there is no need for optimization methods. However, the linear applicator performance was not successful for some target points, especially when the target is far away from both of the arrays. In the second part of the thesis, we improved the linear MH applicator. We concluded that the low performance of the linear applicator is mainly due to the non-symmetrical geometry of the applicator and the resulting poor coverage. We proposed to radially re-adjust the position of the linear applicator for better focusing ability while keeping the breast phantom fixed. This generates multiple different applicator schemes without actually changing the applicator design. The particle swarm optimization (PSO) method is used for antenna excitation parameter selection. For the two examined targets, the 135° rotated linear applicator gave 35-84% higher TBR_S and 21-28% higher TBR_T values than the fixed linear applicator, where TBR_S stands for the target-to-breast SAR ratio and TBR_T for the target-to-breast temperature ratio. Not only did the rotated linear applicator give higher performance, but the circular array was also rotated, and the results improved for one target. One of the main results of this study is that, for one target, the rotated linear applicator gave better results than the circular array, which is the state of the art. For the deep-seated target, the 135° rotated linear applicator has 80% higher TBR_S and 59% higher TBR_T than the circular applicator with the same number of antennas. For the other target, the results of the linear and circular applicators were comparable. However, the results obtained with the PSO were not robust. 
With different (random, in our study) initial values, the results differed considerably from each other, so we performed 10 repetitions and took the best-performing results. In the third part of the thesis, we presented a deep-learning-based antenna excitation parameter selection method. This method utilizes the learning ability of convolutional neural networks (CNNs), rather than searching the solution space from random initial values as PSO does. The data set for CNN training was collected by superposing the electric fields obtained from the individual antenna elements. We implemented a realistic breast phantom with and without a tumor inclusion. We used linear and circular applicators to validate the method. CNNs were trained offline with data sets created first for the phases and then for the amplitudes of the antennas. A mask of 1s and 0s is used to define the target region to be focused. This mask is given as the input to the CNN models, and the corresponding phase and amplitude values are calculated within seconds by the CNN models. The proposed approach outperforms the look-up table results, as the phase-only optimization and the phase-power-combined optimization show a 27% and 4% lower hotspot-to-target energy ratio, respectively, than the look-up table results for the linear MH applicator.
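The superposition principle behind the focusing maps and the CNN training data can be illustrated with a toy conjugate-phase example: choosing each antenna's phase to cancel its propagation phase at the target makes the individual fields add constructively there. The point-source field model, geometry, and wavelength below are hypothetical and are not the thesis's applicator.

```python
import cmath
import math

# Toy phase-focusing illustration with three point-source antennas in 2D.
k = 2 * math.pi / 0.05                             # wavenumber for a 5 cm wavelength
antennas = [(-0.1, 0.0), (0.0, 0.1), (0.1, 0.0)]   # antenna positions (m)
target = (0.0, 0.02)                               # desired focus point (m)

def field(src, pt):
    """Scalar field of one antenna at a point (cylindrical-wave toy model)."""
    r = math.hypot(pt[0] - src[0], pt[1] - src[1])
    return cmath.exp(1j * k * r) / math.sqrt(r)

# Conjugate-phase selection: cancel each antenna's propagation phase at the
# target so that the individual contributions add constructively there.
phases = [-cmath.phase(field(a, target)) for a in antennas]
focused = sum(cmath.exp(1j * p) * field(a, target) for p, a in zip(phases, antennas))
unfocused = sum(field(a, target) for a in antennas)  # all excitation phases zero

print(abs(focused), abs(unfocused))  # the focused amplitude is never smaller
```

The SAR at a point scales with |E|², so summing per-antenna fields with candidate phases, as done when building the CNN data set, avoids re-running the full electromagnetic simulation for each excitation.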
-
ÖgeA novel risk assessment approach for data center structures(Institute of Science and Technology, 2020) Çiçek, Kubilay ; Sarı, Ali ; 634545 ; Department of Civil EngineeringStructural safety includes the evaluation of both structural and nonstructural components of buildings. Although structural design is completed considering only the structural elements of buildings, nonstructural components are crucial in an earthquake event. Post-earthquake observations show that structural safety may not be ensured even when the load-bearing system is undamaged. Failure of nonstructural components has resulted in enormous economic losses and loss of life in past earthquakes. Therefore, nonstructural components should also be included in the seismic safety evaluation of structures. Research shows that the cost of nonstructural components ranges from 70% to 90% of the total cost of buildings. Therefore, nonstructural component failure in structures with high-tech equipment, laboratories, and data centers can damage the economy significantly. In addition to the economic losses from the downtime of these structures, the repair or replacement of the equipment inside increases the cost dramatically. Apart from the economic losses, damaged nonstructural components can cause deaths directly by falling onto people and blocking pathways. During and after an earthquake, damaged nonstructural components can prevent the escape of people inside and the entry of medical staff. Moreover, operational failures caused by nonstructural components in critical facilities, such as hospitals and fire stations, can lead to a higher number of deaths after an earthquake. Nonstructural components do not participate in the load-bearing systems of structures; however, they are still subjected to external loads together with the load-bearing system. Therefore, it is crucial to design structures by considering the nonstructural systems inside. 
Nonstructural components can be classified into three groups by their functions: (i) architectural components, such as partition walls and lighting systems; (ii) mechanical-electrical components, such as piping systems and generators; and (iii) building equipment, such as computers and file cabinets. Research shows that some nonstructural components are sensitive to acceleration, whereas the rest are sensitive to the floor displacement ratio. According to the function of the structure, the design should be completed to limit the governing response. This study aims to propose a new method and generate risk curves for the structural design and evaluation of data centers in high seismic risk regions. A sample structure with a base isolation system is selected from the current literature, in accordance with standards for data centers. The structural properties are also selected in accordance with standards. After the structural model is generated, a probabilistic seismic hazard assessment is completed for the selected site, the main campus area of Istanbul Technical University in Maslak, Istanbul. Source-to-site distances are determined using the online map on the General Directorate of Mineral Research and Exploration website. The closest point of the main line of the Western North Anatolian Fault is approximately 28 km away from the ITU campus, and the longest effective distance is selected as 65 km on the Western NAF. The probability of rupture distance is taken as uniform, and 6 different distances between 28 km and 65 km are used in the ground motion prediction equations (GMPEs). The characteristic earthquake method is considered, and the characteristic magnitude is taken as 7.2 in the GMPEs. A probabilistic study is conducted on this structure using Monte Carlo simulations with the selected structural parameters. The probabilistic distributions for the different parameters are taken from various studies in the literature. 
Random samples are generated for each parameter according to its probabilistic distribution. For comparison purposes, the structure is also analyzed as a fixed-base structure, and the same procedures are repeated. The failure of nonstructural components is investigated in two different ways. The first failure criterion is the overturning-sliding behavior of server racks; FEMA P58 and ASCE 7-16 are used to calculate the acceleration limits for anchored nonstructural components. The second failure criterion is the acceleration limits of the servers given by producers and researchers. A dedicated MATLAB script was written to run the Monte Carlo simulations on the OpenSees platform. Fragility curves are generated according to the predefined failure criteria, and risk curves are created for both structures using the site-specific annual hazard curve and the generated fragility curves. The results show that base isolation systems reduce the accelerations in the upper floors significantly compared to fixed-base structures. Another outcome is that the isolation systems are highly sensitive to earthquake characteristics rather than structural variables in terms of accelerations. It was also found that the critical failure mode in data centers is the overturning-sliding behavior rather than the vibration failure of the servers.
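A fragility curve of the kind described above gives the failure probability as a function of ground motion intensity, estimated by counting the Monte Carlo runs in which the demand exceeds a capacity limit. The sketch below uses a stand-in lognormal demand model and purely illustrative numbers in place of the actual OpenSees simulations; the capacity value and dispersion are hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical acceleration capacity of a server rack before overturning/sliding (g).
CAPACITY_G = 0.4

def simulate_peak_accel(pga):
    """Stand-in for one structural analysis run: peak floor acceleration for a
    given PGA, with lognormal scatter representing record-to-record and
    structural-parameter variability (illustrative values only)."""
    return pga * math.exp(random.gauss(0.3, 0.4))

def fragility_point(pga, n_runs=2000):
    """Monte Carlo estimate of P(demand > capacity) at one intensity level."""
    failures = sum(simulate_peak_accel(pga) > CAPACITY_G for _ in range(n_runs))
    return failures / n_runs

# One fragility-curve ordinate per intensity level; probability rises with PGA.
curve = {pga: fragility_point(pga) for pga in (0.1, 0.2, 0.4, 0.8)}
print(curve)
```

Convolving such a curve with the site-specific annual hazard curve is what yields the annual risk of failure reported for the data center.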
-
ÖgeA numerical approach for plasma based flow control(Graduate School, 2023-04-05) Ata, Reşit Kayhan ; Şahin, Mehmet ; 511132114 ; Aeronautics and Astronautics EngineeringIn the present study, a novel numerical method has been developed to solve incompressible magnetohydrodynamics (MHD) and electrohydrodynamics (EHD) flow problems in a parallel monolithic (fully coupled) approach. To solve the fluid flow, the incompressible Navier-Stokes equations are discretized using a face/edge-centered unstructured finite volume method (FVM). The same formulation is used for the magnetic transport equation to model the magnetic effects. The side-centered approach, where the velocity and magnetic field components are placed at the center of each cell face while the pressure and Lagrange variables are placed at the center of the control volume, provides a stable numerical algorithm without the need for pressure-velocity coupling modifications. The discretization of both the MHD and EHD equations described above results in a saddle point problem in fully coupled (monolithic) form. In order to solve this problem, an upper triangular right preconditioner is used, and a restricted additive Schwarz preconditioner with the FGMRES algorithm is employed to solve the system. Domain decomposition is handled by the METIS library, and the PETSc software package is used for these numerical algorithms. For the solution of incompressible MHD flow problems, the continuity, incompressible Navier-Stokes, and magnetic induction equations are solved along with the divergence-free condition of the magnetic field. Due to the interaction between the magnetic field and conducting fluids, a Lorentz force term is added to the fluid momentum equation. For numerical stability, a Lagrange multiplier term is used in the magnetic induction equation, which has no physical meaning and no effect on the solution. The original approach satisfies mass conservation within each element, but mass conservation is not necessarily satisfied in the momentum control volume. 
Two modifications are proposed as a remedy. First, the convective fluxes are computed over the two neighbouring elements, which results in improved mass conservation over the momentum control volume and increased stability. The second modification applies only to two-dimensional MHD flows. The Lorentz force term in the momentum equation is replaced with $\sigma [\textbf{E} + \textbf{u} \times \textbf{B}] \times \textbf{B}$. Neglecting $\textbf{E}$ makes this term similar to a mass matrix if $\textbf{B}$ is taken from the previous time step; therefore, this modification improves the preconditioning of the monolithic approach. The developed solver is first validated for the two-dimensional Hartmann flow, for which the analytical solution is known. Then, the lid-driven cavity and backward-facing step problems are investigated under an external magnetic field, both in 2D and 3D, with insulating walls. Three-dimensional MHD flow in ducts is another case for which analytic solutions exist; both conducting and insulating wall boundary conditions are employed and validated. Finally, two-dimensional flow over a circular cylinder and a NACA 0012 profile is investigated for vertical/horizontal external magnetic fields and insulating/conducting boundaries. The electrohydrodynamics (EHD) flow problems involve the interaction between the electric field and charged particles inside the fluid. In the present study, the effect of plasma on the flow over lifting bodies is investigated, and the working fluid is air, which is neutral at standard conditions. Therefore, a device called a dielectric barrier discharge (DBD) actuator is used to ionize the air in a small volume near the surface. A DBD consists of two electrodes separated by a dielectric layer; when a voltage is applied to the electrodes, ionization takes place. In order to simulate this phenomenon, the Suzen \& Huang model is employed, in which Poisson equations are solved separately for the electric potential and the charge density. 
Once the potential and charge density are known, the Coulomb force can be calculated and added as a body force term to the incompressible Navier-Stokes equations. The side-centered approach is used for the velocity components, and the pressure is placed at the element center for the momentum and continuity equations. For the solution of the Poisson equations, the charge density and electric potential are placed at the element center, while their gradients are defined at the edge centers. The solver is first applied to an EHD flow in quiescent air and compared with both experimental and numerical solutions. Then, two electrodes are placed at the bottom wall of a 2D cavity with a moving lid to investigate the effect of the electric field on the classical cavity problem. Finally, EHD flow over a NACA 0012 airfoil at angles of attack up to $\alpha=7^\circ$ is investigated in terms of flow structure and lift and drag coefficients.
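The two-dimensional Hartmann flow used for validation has a closed-form velocity profile between insulating plates, u(y) proportional to cosh(Ha) - cosh(Ha*y), which flattens in the core and develops thin wall layers as the Hartmann number Ha grows. A minimal numerical check of this analytic profile (normalized so the centerline value is 1, plates at y = ±1):

```python
import math

def hartmann_profile(y, Ha):
    """Normalized Hartmann-flow velocity between plates at y = -1 and y = +1.

    u(y) = (cosh(Ha) - cosh(Ha*y)) / (cosh(Ha) - 1); a larger Hartmann number
    Ha flattens the core profile and thins the wall (Hartmann) layers.
    """
    return (math.cosh(Ha) - math.cosh(Ha * y)) / (math.cosh(Ha) - 1.0)

# No slip at the walls; unit value at the centerline by normalization.
assert abs(hartmann_profile(1.0, 5.0)) < 1e-12
assert hartmann_profile(0.0, 5.0) == 1.0

# Flattening: at y = 0.5 the high-Ha profile stays closer to the centerline value.
assert hartmann_profile(0.5, 20.0) > hartmann_profile(0.5, 1.0)
```

In the limit Ha to 0 the expression reduces to the parabolic Poiseuille profile 1 - y², which is a convenient secondary check for a solver validation.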
-
ÖgeA variational graph autoencoder for manipulation action recognition and prediction(Graduate School, 2022-06-23) Akyol, Gamze ; Sarıel, Sanem ; Aksoy, Eren Erdal ; 504181561 ; Computer EngineeringDespite decades of research, understanding human manipulation actions has remained one of the most appealing and demanding study problems in computer vision and robotics. Recognition and prediction of observed human manipulation activities have their roots in, for instance, human-robot interaction and robot learning from demonstration applications. The current research trend relies heavily on advanced convolutional neural networks to process structured Euclidean data, such as RGB camera images. However, in order to process high-dimensional raw input, these networks must be immensely computationally complex, and a huge amount of time and data is needed for training them. Unlike previous research, in the context of this thesis, a deep graph autoencoder is used to simultaneously learn recognition and prediction of manipulation tasks from symbolic scene graphs, rather than from structured Euclidean data. The deep graph autoencoder model developed in this thesis needs less time and data for training. The network features a two-branch variational autoencoder structure, one branch for recognizing the input graph type and the other for predicting future graphs. The proposed network takes as input a set of semantic graphs that represent the spatial relationships between subjects and objects in a scene. The reason for using scene graphs is their flexible structure and their capability to model the environment. A label set reflecting the detected and predicted class types is produced by the network. Two separate datasets are used for the experiments: MANIAC and MSRC-9. The MANIAC dataset consists of 8 different manipulation action classes (e.g., pushing, stirring) from 15 different demonstrations. 
MSRC-9 consists of 9 different hand-crafted classes (e.g., cow, bike) for 240 real-world images. The reason for using two such distinct datasets is to measure the generalizability of the proposed network. On these datasets, the proposed model is compared to various state-of-the-art methods, and it is shown that it can achieve higher performance. The source code is released at https://github.com/gamzeakyol/GNet.
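A symbolic scene graph of the kind used as network input can be reduced to an adjacency structure over the objects in the scene, which is the typical input representation for graph networks. The minimal sketch below is illustrative only; the object and relation names are hypothetical, not the MANIAC labels, and the real network also consumes edge (relation) features.

```python
# A toy symbolic scene graph: nodes are tracked objects, edges carry spatial
# relations between them (e.g., "touching"). Names here are illustrative.
objects = ["hand", "knife", "cucumber"]
relations = {("hand", "knife"): "touching", ("knife", "cucumber"): "touching"}

def adjacency_matrix(nodes, edges):
    """Binary undirected adjacency matrix for a scene graph."""
    index = {name: i for i, name in enumerate(nodes)}
    A = [[0] * len(nodes) for _ in nodes]
    for (a, b) in edges:                      # iterate over relation pairs
        A[index[a]][index[b]] = A[index[b]][index[a]] = 1
    return A

A = adjacency_matrix(objects, relations)
print(A)  # [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

As the manipulation unfolds, the relations change (e.g., the knife starts touching the cucumber), producing the sequence of graphs whose future states the prediction branch learns to generate.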