Theses
-
Synthesis of phthalocyanines bearing 4-(trifluoromethoxy)phenoxy groups and investigation of their usability in medicine, biology, and advanced technology (Graduate School, 2021) Farajzadeh, Nazlı ; Koçak, Makbule ; 675427 ; Chemistry

Phthalocyanines (Pcs) and their metallated derivatives have attracted great interest in recent years owing to their unique optical, electronic, catalytic, and structural properties. Traditionally, phthalocyanines have been used as dyes and pigments, but they have recently found wide application in diverse scientific and technological fields such as catalysis, liquid crystals, chemical sensors, photodynamic therapy, solar energy conversion, optical data storage, semiconductors, and nonlinear optics. Metal phthalocyanines are also widely used as functional materials in various electrochemical technologies such as electrocatalysis, electrochromic displays, and sensor applications. The redox richness of metal phthalocyanines is the key factor behind their functionality in these applications, and these properties can easily be tuned by varying the metal centers and the substituents. In recent years, researchers have also examined the antioxidant, antimicrobial, and enzyme-inhibition activities of Pc molecules. Reactive oxygen species (ROS) are produced in the metabolism of living organisms; they readily react with most cellular biomolecules and weaken an organism's normal defense mechanisms. Because the widespread use of commercial antibiotic drugs has led to multidrug resistance, the synthesis of new Pcs with antimicrobial activity has become important. Owing to π-stacking (aggregation) between the planar macrocycles, unsubstituted phthalocyanines are insoluble or only sparingly soluble in organic solvents and in water, which has limited their applications.
Peripheral or axial substitution of the macrocycles increases the distance between the 18π-electron conjugated systems and reduces aggregation, thereby increasing solubility in various solvents and improving the optical properties. Fluorine atoms are strongly electron-withdrawing and endow fluorinated Pcs with exceptional electron-transfer, magnetic, and photosensitivity properties. Moreover, because multi-fluoro-substituted phthalocyanines exhibit unique properties such as excellent thermal stability, chemical resistance, and polar/apolar solubility, the number of studies focusing on the development of fluorine-containing Pcs for producing new materials has grown substantially in recent years. Within the scope of this thesis, 27 phthalocyanine compounds, two of them asymmetric and 26 of them new, were synthesized in order to investigate their potential use in photodynamic therapy, biology, and high technology. To this end, in the first stage of the thesis, 4-nitrophthalonitrile or 3-nitrophthalonitrile was reacted with 4-(trifluoromethoxy)phenol in dry dimethylformamide (DMF) in the presence of potassium carbonate under a nitrogen atmosphere at 45 °C to give 4-(4-(trifluoromethoxy)phenoxy)phthalonitrile (1) and 3-(4-(trifluoromethoxy)phenoxy)phthalonitrile (2), respectively. 4,5-Dichlorophthalonitrile was reacted with 4-(trifluoromethoxy)phenol in dry dimethyl sulfoxide (DMSO) in the presence of potassium carbonate under a nitrogen atmosphere at 80 °C to prepare 4,5-bis(4-(trifluoromethoxy)phenoxy)phthalonitrile (3).
Cyclotetramerization of phthalonitriles 1, 2, and 3 with the corresponding metal salts (no metal salt for the metal-free phthalocyanine) in suitable solvents (pentanol with a catalytic amount of DBU, dimethylaminoethanol, and quinoline) under a nitrogen atmosphere at suitable temperatures (135 °C, 165 °C, 170 °C) yielded the targeted symmetric mono-metal phthalocyanines (1-Zn, 2-Zn, 3-Zn, 1-Co, 2-Co, 3-Co, 1-Cu, 2-Cu, 3-Cu, 1-Pd, 3-Pd, 1-Ga, 2-Ga, 3-Ga, 1-In, 2-In, 3-In, 1-Lu, 2-Lu, 3-Lu), the metal-free phthalocyanines (1-H2, 2-H2), and the sandwich-type lutetium phthalocyanines (1'-Lu, 2'-Lu, 3'-Lu). At this stage, new push-pull A3B-type asymmetric zinc phthalocyanines bearing peripheral carbon-carbon triple bonds were also synthesized by the statistical condensation method from the electron-donating ligands 4-(4-(trifluoromethoxy)phenoxy)phthalonitrile (1) and 4,5-bis(4-(trifluoromethoxy)phenoxy)phthalonitrile (3) together with the electron-withdrawing ligand 4-((4-nitrophenyl)ethynyl)phthalonitrile (4). The reaction was carried out in DMAE at 135 °C under nitrogen. The purity of all synthesized molecules was monitored by thin-layer chromatography, and their structures were elucidated by spectral techniques such as FT-IR, 1H NMR (except for the paramagnetic phthalocyanines), 13C NMR (except for the paramagnetic phthalocyanines), mass spectrometry, and UV-Vis. Electron spin resonance (ESR) measurements were also used to elucidate the structures of the sandwich-type lutetium phthalocyanines (1'-Lu to 3'-Lu). The electronic spectra of the synthesized phthalocyanines were measured by UV-Vis spectrometry in different solvents and at different concentrations to investigate the effect of solvent type and concentration on their spectroscopic and aggregation properties. In the second stage of the thesis, to investigate the potential of the synthesized phthalocyanines for use in biology, the antioxidant activities and tyrosinase-inhibition properties of compounds 1, 1-H2, 1-Cu, and 1-Pd and the antimicrobial and antioxidant activities of compounds 3, 3-Zn, 3-Cu, and 3-Co were examined.
In addition, the antioxidant and tyrosinase-inhibition activities of the metal-free and copper phthalocyanines (1-H2, 1-Cu, 2-H2, 2-Cu, 1S-H2, 1S-Cu, 2S-H2, 2S-Cu), prepared from phthalonitriles bearing 4-(trifluoromethoxy)phenoxy (1, 2) or 4-(trifluoromethoxy)thiophenoxy groups (1S, 2S) at the 4- or 3-positions, were investigated to examine the effect of the type and position of the bridging atom on the biological properties of these compounds. In the third stage of the thesis, to investigate their potential for use in photodynamic therapy, the fluorescence quantum yields, singlet oxygen quantum yields, and photodegradation quantum yields of compounds 1-Zn, 1-InCl, 1-Ga, 1-Lu, 2-Zn, 2-InCl, 2-Ga, 2-Lu, 3-Ga, and 3-Lu were determined using UV-Vis and fluorescence spectroscopy. Moreover, within the scope of this thesis, the photophysical and photochemical properties of the mono-lutetium phthalocyanines (1-Lu, 2-Lu, 3-Lu) were examined for the first time in order to investigate their potential for use in SPDT. In the fourth stage of the thesis, the electrochemical and in-situ spectroelectrochemical properties of compounds 1-Zn, 1-Co, 1-In, 2-Zn, 2-Co, 2-In, 3-Zn, 3-Co, and 3-In were examined to investigate their potential use in electrocatalysis, electrosensing, imaging, and optoelectronics. In the final stage of the thesis, the NLO and optical-limiting properties of the symmetric and A3B-type asymmetric zinc phthalocyanines (1-Zn, 3-Zn, 1-AZn, 3-AZn) prepared from phthalonitriles 1 and 3 were examined using the open-aperture Z-scan technique to investigate their potential as nonlinear optical materials.
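For reference, fluorescence and singlet oxygen quantum yields of the kind determined above are commonly obtained by the comparative method against a reference compound; a generic form of these relations (the specific standards, solvents, and quenchers used in the thesis are not restated here) is:

```latex
% Fluorescence quantum yield by the comparative method:
%   F: integrated fluorescence intensity, A: absorbance at the
%   excitation wavelength, n: solvent refractive index;
%   "ref" marks the reference compound.
\Phi_F \;=\; \Phi_F^{\mathrm{ref}}\,
  \frac{F \, A^{\mathrm{ref}} \, n^{2}}
       {F^{\mathrm{ref}} \, A \, n_{\mathrm{ref}}^{2}}

% Singlet oxygen quantum yield from the photobleaching rate R of a
% chemical quencher (e.g. DPBF), with I_abs the absorbed light intensity:
\Phi_\Delta \;=\; \Phi_\Delta^{\mathrm{ref}}\,
  \frac{R \; I_{\mathrm{abs}}^{\mathrm{ref}}}
       {R^{\mathrm{ref}} \; I_{\mathrm{abs}}}
```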
-
Determination of the seismic performance of a 40-story asymmetric reinforced concrete building by the nonlinear time-history analysis method (Graduate School of Science, Engineering and Technology, 2020) Aksoylu, Taner ; Gündüz, Abdullah Necmettin ; 629385 ; Structural Engineering

In our country, which lies on a seismic belt, dozens of earthquakes, large and small, occur every day. Although most of them are of small magnitude, some occur at large magnitudes and cause loss of life and property; the 17 August 1999 Gölcük, 12 November 1999 Düzce, and 23 October 2011 Van earthquakes are the most prominent examples. The magnitude-5.8 Marmara Sea earthquake of 26 September 2019 and the magnitude-6.8 Elazığ earthquake of 21 January 2020 have also brought the assessment of the seismic performance of existing structures back onto the agenda. Chapter 15 of the Turkish Building Earthquake Code, published on 18 March 2018 and in force since 1 January 2019, gives the rules for the assessment of existing building systems under earthquake effects. Within the scope of this study, within the framework of these rules and calculation principles, the seismic performance of a 40-story reinforced concrete building with an asymmetric floor plan was examined using the nonlinear time-history analysis method. The first chapter gives information about the aim and scope of the thesis. The second chapter examines nonlinear behavior and its modeling, and provides information on nonlinear analysis methods. The third chapter discusses the performance concept and the approach taken in the Turkish Building Earthquake Code. The fourth chapter examines the calculation principles for buildings under earthquake effects. The fifth chapter provides information on selecting or generating, and scaling, the acceleration records to be used in time-history analysis. In the sixth chapter, the performance analysis of the existing building that is the subject of the thesis was carried out with the aid of a computer program.
The nonlinear behavior of the concrete and steel materials was modeled, and the cross-sections were built from these materials. Plastic hinges were assigned at the member ends of the beams and columns according to the lumped plasticity model, while a distributed plasticity model was used for the shear walls. For the nonlinear time-history analysis, eleven real earthquake records were selected and scaled. The analysis was performed by applying the earthquake records to the structure in two perpendicular directions, and the analyses were repeated with the record pairs rotated by ninety degrees. From the analysis results, the plastic strain and plastic rotation values of the column, beam, and shear-wall members were examined to obtain the number of members in each damage zone. In the seventh chapter of the thesis, the results of the analyses in the previous chapter were evaluated and the conclusions drawn from them were summarized. Plastic deformations in the columns and shear walls were very limited; only some beams passed into the collapse zone, and the proportion of such members did not exceed the limits of the Collapse Prevention performance level. For the eleven selected earthquake records, the structural performance was found to be at the Collapse Prevention performance level, which was set as the normal performance target.
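The nonlinear time-history procedure summarized above can be illustrated at toy scale on a single-degree-of-freedom elastic-perfectly-plastic oscillator. This is a hedged sketch, not the thesis's 40-story model: the mass, stiffness, yield force, and the sine-pulse "record" below are invented for illustration.

```python
import numpy as np

def elastoplastic_history(ag, dt, m=1.0, k=400.0, fy=1.5, zeta=0.05):
    """Semi-implicit Euler time-history integration of an
    elastic-perfectly-plastic SDOF oscillator driven by ground
    acceleration ag (m/s^2): m*u'' + c*u' + fs(u) = -m*ag."""
    c = 2.0 * zeta * np.sqrt(k * m)          # viscous damping coefficient
    n = len(ag)
    u = np.zeros(n)
    v = np.zeros(n)
    fs = 0.0                                  # current restoring force
    up = 0.0                                  # accumulated plastic displacement
    for i in range(n - 1):
        a = (-m * ag[i] - c * v[i] - fs) / m  # acceleration from eq. of motion
        v[i + 1] = v[i] + a * dt
        u[i + 1] = u[i] + v[i + 1] * dt
        fs = k * (u[i + 1] - up)              # trial elastic restoring force
        if fs > fy:                           # positive yielding
            up = u[i + 1] - fy / k
            fs = fy
        elif fs < -fy:                        # negative yielding
            up = u[i + 1] + fy / k
            fs = -fy
    return u

# usage: a short 1 Hz, 0.5 g sine pulse as a stand-in for a scaled record
dt = 0.01
t = np.arange(0.0, 5.0, dt)
ag = 0.5 * 9.81 * np.sin(2.0 * np.pi * t) * (t < 1.0)
u = elastoplastic_history(ag, dt)             # displacement history (m)
```

Once yielding occurs, the oscillator accumulates permanent (plastic) drift, which is the quantity damage assessments such as the one above are built on.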
-
A high-order finite-volume solver for supersonic flows (Graduate School, 2022) Spinelli, Gregoria Gerardo ; Çelik, Bayram ; 721738 ; Aeronautics and Astronautics Engineering

Nowadays, Computational Fluid Dynamics (CFD) is a powerful engineering tool used in various industries such as automotive, aerospace, and nuclear power. More than ever, the growing computational power of modern computer systems allows for realistic modeling of the physics. Most open-source codes, however, offer a second-order approximation of the physical model in both space and time. The goal of this thesis is to extend this order of approximation to what is defined as high-order discretization in both space and time by developing a two-dimensional finite-volume solver. This is especially challenging when modeling supersonic flows, which are the focus of this study. To tackle this task, we employed the numerical methods described in the following. Curvilinear meshes are utilized, since an accurate representation of the domain and its boundaries, i.e. the object under investigation, is required. High-order approximation in space is guaranteed by a Central Essentially Non-Oscillatory (CENO) scheme, which combines a piece-wise linear reconstruction and a k-exact reconstruction in regions with and without discontinuities, respectively. The use of multi-step methods such as Runge-Kutta methods allows for a high-order approximation in time. The algorithm for evaluating the convective fluxes is based on the family of Advection Upstream Splitting Method (AUSM) schemes, which use an upwind reconstruction; a central stencil is used to evaluate the viscous fluxes instead. When using high-order schemes, discontinuities induce numerical problems, such as oscillations in the solution. To avoid these oscillations, the CENO scheme reverts to a piece-wise linear reconstruction in regions with discontinuities. However, this introduces a loss of accuracy.
The CENO algorithm is capable of confining this loss of accuracy to the cells closest to the discontinuity. To reduce the loss further, Adaptive Mesh Refinement (AMR) is used: the mesh is refined near the discontinuity, confining the loss of accuracy to a smaller portion of the domain. In this study, a combination of the CENO scheme and the AUSM schemes is used to model several problems in different compressibility regimes, with a focus on supersonic flows. The scope of this thesis is to analyze the capabilities and the limitations of the proposed combination. In comparison to traditional implementations found in the literature, our implementation does not impose a limit on the refinement ratio of neighboring cells while utilizing AMR. Due to the high computational expense of a high-order scheme in conjunction with AMR, our solver benefits from shared-memory parallelization. Another advantage over traditional implementations is that our solver requires one fewer layer of ghost cells for the transfer of information between adjacent blocks. The validation of the solver is performed in several steps. We assess the order of accuracy of the CENO scheme by interpolating a smooth function, in this case the spherical cosine function. Then we validate the algorithm that computes the inviscid fluxes by modeling a Sod shock tube. Next, the boundary conditions (BCs) for the inviscid solver and its order of accuracy are validated by modeling a vortex convected in a supersonic uniform flow. The curvilinear mesh is validated by modeling the flow around a NACA0012 airfoil. The computation of the viscous fluxes is validated by modeling a viscous boundary layer developing on a flat plate. The BCs for viscous flows and the curvilinear implementation are validated by modeling the flow around a cylinder and a NACA0012 airfoil.
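The order-of-accuracy assessment mentioned above is conventionally done by measuring an error norm on two successively refined meshes and extracting the observed convergence rate; a small sketch (the error values below are invented for illustration):

```python
import math

def observed_order(e_coarse, e_fine, r=2.0):
    """Observed order of accuracy p from discretization errors measured
    on two meshes whose spacings differ by the refinement ratio r.
    From e ~ C * h**p it follows that p = log(e_coarse/e_fine) / log(r)."""
    return math.log(e_coarse / e_fine) / math.log(r)

# hypothetical error norms: a 16x error drop when halving h indicates p = 4
p = observed_order(1.6e-4, 1.0e-5)
```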
The AUSM schemes are tested for shock robustness by modeling an inviscid hypersonic cylinder at a Mach number of 20 and a viscous hypersonic cylinder at a Mach number of 8.03. Then we validate our AMR implementation by modeling a two-dimensional Riemann problem. All validation results agree well with the numerical or experimental results available in the literature. The performance of the code is assessed in terms of the computational time required by the different orders of approximation and the parallel efficiency. For the former, a supersonic vortex convection served as an example, while the latter used a two-dimensional Riemann problem. We obtained a linear speed-up for up to 12 cores; the highest speed-up obtained is 20 with 32 cores. Furthermore, the solver is used to model three different supersonic applications: the interaction between a vortex and a normal shock, the double Mach reflection, and the diffraction of a shock over a wedge. The first application resembles a strong interaction between a vortex and a steady shock wave for two different vortex strengths. In both cases our results perfectly match those obtained by a Weighted Essentially Non-Oscillatory (WENO) scheme documented in the literature, with both schemes approximating the solution with the same order of accuracy in time and space. The second application, the double Mach reflection, is a challenging problem for high-order solvers because the shock and its reflections interact strongly. For this application, all AUSM schemes under investigation fail to obtain a stable result; the main form of instability encountered is the carbuncle phenomenon. Our implementation overcomes this problem by combining the AUSM+M scheme with the speed-of-sound formulation of the AUSM+up scheme. This combination is capable of modeling the problem without instabilities, and our results are in agreement with those obtained with a WENO scheme.
Both the reference solutions and our results use the same order of accuracy in time and space. Finally, the third example is the diffraction of a shock past a delta wedge. In this configuration the shock is diffracted and forms three main structures: two triple points, a vortex at the trailing edge of the wedge, and a reflected shock traveling upwards. Our results agree well with both the numerical and the experimental results available in the literature. Here, the formation of a vortexlet is observed along the vortex slip-line. This vorticity generation under inviscid flow conditions is studied, and we conclude that the stretching of vorticity due to compressibility is the reason. The same formation is observed when the angle of attack of the wedge is increased in the range of 0-30 degrees. In general, the AUSM+up2 scheme performed best in terms of accuracy for all problems tested here. However, for configurations in which the carbuncle phenomenon may appear, the combination of the AUSM+M scheme with the speed-of-sound formula of the AUSM+up scheme is preferable for stability reasons. During our computations, we observe a small undershoot right behind shocks on curved boundaries. This is attributable to the curvilinear approximation of the boundaries, which is only second-order accurate. Our experience shows that the smoothness indicator formula in its original version fails to label uniform flow regions as smooth. We solve the issue by introducing a threshold for the numerator of the formula: when the numerator is lower than the threshold, the cell is labeled as smooth. A value higher than 10^-7 for the threshold might force the solver to apply high-order reconstruction across shocks, so the piece-wise linear reconstruction which prevents oscillations would not be applied. We observe that the CENO scheme might cause unphysical states in both the inviscid and the viscous regime.
By reconstructing the conservative variables instead of the primitive ones, we are able to prevent unphysical states for inviscid flows. For viscous flows, temporarily reverting to first-order reconstruction in the cells where the temperature is computed as negative prevents unphysical states. This technique is only required during the first iterations of the solver, when the flow is started impulsively. In this study the CENO, AUSM, and AMR methods are combined and applied successfully to supersonic problems. When modeling supersonic flow with high-order accuracy in space, one should prefer the combination of the AUSM schemes and the CENO scheme. While the CENO scheme is simpler than the WENO scheme used for comparison, we show that it yields results of comparable accuracy. Although it was beyond the scope of this study, the AUSM family can be extended to real-gas modeling, which constitutes another advantage of this approach.
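The thresholding fix for the smoothness indicator described above can be sketched as follows. This is a hedged illustration: the variable names, the cutoff value, and the indicator's exact numerator and denominator are stand-ins following the CENO literature, not the thesis's exact formula.

```python
def is_smooth(numerator, denominator, s_cutoff=1000.0, num_threshold=1e-7):
    """Decide whether a cell may keep the high-order k-exact
    reconstruction. The CENO smoothness indicator is a ratio; in
    uniform-flow regions both its numerator and denominator are tiny,
    so the ratio becomes unreliable. The fix: flag such cells as
    smooth outright whenever the numerator falls below a small
    threshold (values above ~1e-7 risk keeping high-order
    reconstruction across shocks)."""
    if numerator < num_threshold:          # near-uniform flow: smooth
        return True
    s = numerator / max(denominator, 1e-300)
    return s > s_cutoff                    # large indicator => smooth cell
```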
-
A mathematical model and two-stage heuristic for the container stowage planning problem with stability parameters (Graduate School, 2021) Bilican, Mevlüt Savaş ; Evren, Ramazan ; 668833 ; Industrial Engineering

Over the past two decades, there has been a continuous increase in demand for cost-efficient containerized transportation. To meet this demand, shipping companies have deployed larger container vessels, which can nowadays transport more than 20,000 TEUs (Twenty-foot Equivalent Units). These vessels sail from port to port, loading and unloading thousands of containers. As vessel size increases, the loading sequence of containers onto the vessels presents an important challenge for planners, since liner companies try to shorten their stay at ports in order to improve their profits. An efficient stowage plan, which delineates the location of each container, is required to keep the vessel's time at port to a minimum. Because containers must be stacked on top of each other, over-stowage forces a container to be unloaded and reloaded at the same port. Over-stows arise either when planners want to unload containers destined for the current port that lie beneath containers destined for subsequent ports, or when planners want to reorder the sequence of containers to prevent more over-stows in the future; the former is usually called necessary shifting and the latter voluntary shifting. Shifting containers is a time-consuming and costly activity. Therefore, the arrangement of containers on board is crucial for achieving effective operations by reducing the number of over-stows. The task of determining this arrangement is called stowage planning. On the other hand, while keeping the number of over-stows at the minimum level, the stowage plan must comply with the stability requirements for the ship to sail safely.
Failure to meet basic stability constraints may lead to catastrophic consequences in terms of both ship and cargo safety. Moreover, container ships with loading plans that do not meet the stability requirements are not allowed to sail by the port authorities. Therefore, over-stow instances and stability parameters play a crucial role in the efficiency of the loading plan. In this study, the container stowage planning problem with stability constraints (e.g. shear force, bending moment, trim) is considered, and a mixed-integer linear programming (MILP) formulation is developed that generates load plans by minimizing the total cost associated with over-stows and trimming moments. The study adopts a holistic perspective which encompasses several real-world features such as different container specifications, a round-robin tour of multiple ports, and technical limitations related to stack weight, stress, and ballast tanks. A two-stage heuristic solution methodology that employs an integer programming (IP) formulation is proposed along with a swapping heuristic (SH) algorithm. This approach first obtains a lower bound on the total over-stow cost with the IP model, thereby creating an initial bay plan; it then applies the SH algorithm to this initial bay plan to minimize the cost resulting from trimming moments. The efficiency of the MILP formulation and the heuristic algorithm is investigated through numerical examples. The results show that the heuristic greatly improves the solution times as well as the size of the solvable problems compared to the MILP formulation. In particular, the two-stage heuristic can solve problem instances of all sizes within an average optimality gap of 0-25% in less than 8 minutes, whereas the MILP can only achieve an approximate optimality gap of 55-80% in 2 hours.
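The stage-two idea can be illustrated as follows. This is a hedged sketch, not the thesis's SH algorithm: the weights, lever arms, and ports below are invented, and only one simple move (swapping the slots of two same-destination containers, which leaves the over-stow structure fixed in stage one unchanged) is considered.

```python
from itertools import combinations

def trim_moment(plan):
    """Longitudinal moment about midship: sum of weight * lever arm."""
    return sum(w * x for w, x, _ in plan)

def swap_heuristic(plan):
    """Greedy improvement: keep swapping the slots of two containers
    bound for the same destination port whenever the swap strictly
    reduces |trim moment|; stop when no such swap remains."""
    plan = list(plan)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(plan)), 2):
            wi, xi, di = plan[i]
            wj, xj, dj = plan[j]
            if di != dj:
                continue                      # would change over-stows: skip
            cur = abs(trim_moment(plan))
            plan[i], plan[j] = (wi, xj, di), (wj, xi, dj)
            if abs(trim_moment(plan)) < cur - 1e-9:
                improved = True               # keep the beneficial swap
            else:
                plan[i], plan[j] = (wi, xi, di), (wj, xj, dj)  # revert
    return plan

# containers as (weight, lever arm from midship [m], destination port)
plan = [(30, 40.0, 1), (10, -20.0, 1), (20, 0.0, 2)]
plan = swap_heuristic(plan)   # |moment| drops from 1000 to 200
```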
-
A modified ANFIS system for aerial vehicle control (Graduate School, 2022) Öztürk, Muhammet ; Özkol, İbrahim ; 713564 ; Aeronautics and Astronautics Engineering

This thesis presents fuzzy logic systems (FLS) and their control applications in aerial vehicles. In this context, first type-1 and then type-2 fuzzy logic systems are examined. Adaptive Neuro-Fuzzy Inference System (ANFIS) training models are examined, and new type-1 and type-2 models are developed and tested. The new approaches are applied to control problems such as quadrotor control. A fuzzy logic system is a human-like structure that does not define any case precisely as 1 or 0; instead, it defines the case with membership functions. In the literature, there are many fuzzy logic applications, such as data processing, estimation, control, and modeling. Different Fuzzy Inference Systems (FIS) have been proposed, such as Sugeno, Mamdani, Tsukamoto, and Şen; the Sugeno and Mamdani FIS are the most widely used. The Mamdani antecedent and consequent parameters are composed of membership functions; because of that, the Mamdani FIS needs a defuzzification step to produce a crisp output. The Sugeno antecedent parameters are membership functions, but its consequent parameters are linear or constant, so the Sugeno FIS does not need a defuzzification step. The Sugeno FIS requires less computational load and is simpler than the Mamdani FIS, and so it is more widely used; training the Mamdani parameters is also more complicated and needs more calculation than for the Sugeno FIS. The Mamdani ANFIS approaches in the literature are examined, and a new Mamdani ANFIS model (MANFIS) is proposed. The training performance of the proposed MANFIS model is tested on a nonlinear function, and its control performance is tested on a DC motor dynamic. In addition, the Şen FIS, which was used for the estimation of sunshine duration in 1998, is examined.
The antecedent and consequent parameters of the Şen FIS are membership functions, as in the Mamdani FIS, and it needs a defuzzification step. However, because of the structure of the Şen defuzzification, the Şen FIS can be calculated with less computational load, and therefore a Şen ANFIS training model has been created. These three approaches are trained on a nonlinear function and used for online control. In this study, the neuro-fuzzy controller is used as an online controller. Neuro-fuzzy controllers consist of the simultaneous operation of two functions, named fuzzy logic and ANFIS. The fuzzy logic function generates the control signal according to the controller inputs; the ANFIS function trains the parameters of the fuzzy logic function. Neuro-fuzzy controllers are intelligent controllers, independent of the model, that constantly adapt their parameters; for this reason, these controllers' parameter values change constantly according to the changes in the system. There are studies on different neuro-fuzzy control systems in the literature. Each approach is tested on a DC motor model, which is a single-input single-output system, and the neuro-fuzzy controllers' advantages and performances are examined. In this way, the approaches in the literature and the approaches added within the scope of the thesis are compared with each other. Selected neuro-fuzzy controllers are then used in quadrotor control. Quadrotors have a two-stage controller structure: in the first stage, position control is performed and the position control results are defined as angles; in the second stage, attitude control is performed over the calculated angle values. In this thesis, the neuro-fuzzy controller is shown to work very well in single-layer control structures, i.e., there was no overshoot and the settling time was very short.
However, the quadrotor control results show that the neuro-fuzzy controller cannot deliver the desired performance in the two-layered control structure. Therefore, the feedback error learning control system, in which the fuzzy controller works together with conventional controllers, is examined. Fundamentally, in the feedback error learning structure there is an inverse dynamic model parallel to a classical controller; the inverse dynamic model aims to increase performance by influencing the classical controller signal. In the literature there are many papers on the structure of feedback error learning control, and different approaches have been proposed. In the structure used in this work, the fuzzy logic parameters are trained using ANFIS with the error as input, and the fuzzy logic control signal obtained as a result of training is added to the conventional controller signal. This approach has been tested on models such as a DC motor and a quadrotor; it is seen that feedback error learning control with ANFIS increases the control performance. The antecedent and consequent parameters of type-1 fuzzy logic systems consist of crisp membership functions. Type-2 FLSs were proposed to better represent uncertainties; accordingly, type-2 fuzzy inference membership functions are designed to include uncertainty. The type-2 FLS is operationally difficult because of these uncertainties. To simplify type-2 FLS operations, the interval type-2 FLS has been proposed in the literature as a special case of the generalized type-2 FLS. Interval type-2 membership functions are designed as a two-dimensional projection of general type-2 membership functions and represent the area between two type-1 membership functions; this area is called the Footprint of Uncertainty (FOU). The uncertainty also occurs in the weight values obtained from the antecedent membership functions.
The consequent membership functions are also type-2, and it is not possible to perform the defuzzification step directly because of the uncertainty. Therefore, type reduction methods have been developed to reduce the type-2 FLS to a type-1 FLS. Type reduction methods try to find the highest and lowest values of the fuzzy logic model; to do so, a switch point must be determined between the weights obtained from the antecedent membership functions. Type reduction methods find these switch points by iteration, and this process causes a large computational load, so many different methods have been proposed to reduce it. In 2018, an iteration-free method called the Direct Approach (DA) was proposed, which performs the type reduction faster than the iterative methods. In the literature, studies on training the parameters of the type-2 FLS with techniques such as neural networks and genetic algorithms still continue, and these are also used in interval type-2 fuzzy logic control systems. Interval type-2 ANFIS structures have been proposed in the literature, but they are not effective because of the uncertainties of the interval type-2 membership functions. FLS parameters for ANFIS training should not contain uncertainties, yet the type-2 FLS inherently contains uncertainty. For this reason, the Karnik-Mendel algorithm, which is one of the type reduction methods, is modified to apply ANFIS to the interval type-2 FLS. The modified Karnik-Mendel algorithm gives the same results as the Karnik-Mendel algorithm, and it also gives exact parameter values for use in ANFIS. One can see that the ANFIS training of the interval type-2 FLS has been developed successfully and has been used for system control.
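The switch-point search at the heart of Karnik-Mendel type reduction can be sketched as follows. This is a minimal illustration under stated assumptions (sorted rule consequents, a brute-force search over switch points instead of the KM iteration, which converges to the same endpoints); the example numbers are invented.

```python
def km_endpoints(y, w_lo, w_hi):
    """Type-reduced interval [y_l, y_r] of an interval type-2 FLS.
    y must be sorted ascending; [w_lo[i], w_hi[i]] is the firing
    interval of rule i. The endpoints are extremized over the switch
    point k: for y_l, upper weights go on the small consequents; for
    y_r, upper weights go on the large consequents."""
    n = len(y)

    def centroid(weights):
        return sum(w * yi for w, yi in zip(weights, y)) / sum(weights)

    y_l = min(centroid([w_hi[i] if i <= k else w_lo[i] for i in range(n)])
              for k in range(n))
    y_r = max(centroid([w_lo[i] if i <= k else w_hi[i] for i in range(n)])
              for k in range(n))
    return y_l, y_r

# three rules with consequents 0, 1, 2 and identical firing intervals
y_l, y_r = km_endpoints([0.0, 1.0, 2.0], [0.2, 0.2, 0.2], [0.8, 0.8, 0.8])
y_crisp = 0.5 * (y_l + y_r)   # type-reduced crisp output
```

For this symmetric example the interval is centered on the middle consequent, so the crisp output is 1.0; the width of [y_l, y_r] reflects the FOU.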
-
A multi-objective optimization framework for trade-off between pedestrian delays and vehicular emissions at signal-controlled intersections (Graduate School, 2021-12-14) Akyol, Görkem ; Çelikoğlu, Hilmi Berk ; 501181409 ; Transportation Engineering

Traffic congestion has numerous negative effects on urban life, among them increased travel time and vehicular emissions. On the one hand, the transportation sector is a leading contributor to climate change and air pollution, accounting for 29% of greenhouse gas emissions. On the other hand, pedestrian traffic management requires extreme caution, especially in central business districts. In classic traffic signal control applications, the pedestrian green time is mostly held at its minimum value. However, at crowded intersections located in city centers, the number of pedestrians that need to be served can be excessive for a number of reasons (gatherings, touristic attractions, sporting events, etc.). In this study, an integrated methodology is developed for optimizing traffic signal control considering both pedestrian delay and vehicular emissions. VISSIM is used as the microscopic traffic simulator, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is adopted to solve the multi-objective optimization problem at hand, and MOVES3 is used to calculate vehicular emissions on a microscopic scale. To modify the traffic signal control settings, the COM interface of VISSIM is used in conjunction with MATLAB. Through the COM interface, one can change the signal control settings, vehicle and pedestrian inputs, routes of vehicles, and many other features that can be read and changed during simulations. To illustrate the trade-off between pedestrian delay and vehicular emissions, two objective functions are formulated, whose inputs are obtained from VISSIM via the COM interface.
Since the objective functions conflict with each other (one effectively maximizes pedestrian green time while the other maximizes vehicle green time), a trade-off is observed between the objectives. In addition, a case study is conducted at Kadıköy, Istanbul to evaluate the proposed approach. Data are retrieved from camera recordings; the collected data involve vehicle and pedestrian counts and average pedestrian crossing times. Calibration of the simulation model is done using the GEH statistic. After calibration, two main scenarios are designed. The first main scenario involves a gradual change in the vehicles loaded onto the network. The second main scenario is produced to test different prioritization approaches under changing vehicle demand; three sub-scenarios are generated in this manner. The first sub-scenario prioritizes pedestrian movement by allocating more green time to pedestrians than to vehicles. The second sub-scenario aims for a balance between pedestrian and vehicle green times. The third sub-scenario prioritizes vehicles over pedestrians. In the second scenario, all signal timings are chosen from the Pareto front set acquired from the multi-objective optimization solved with MATLAB. Results acquired from the simulations confirm a trade-off between pedestrian delay and vehicular emissions. In conclusion, a novel method is proposed in this study to assess signal control settings through the trade-off between pedestrian delay and vehicular emissions. Although an optimization problem is solved in the thesis, a unique global solution is not acquired: because more than one objective is considered, multiple solutions are obtained after the optimization process. The multi-objective optimization problem is handled with an a posteriori approach, which provides intuition about the problem and its Pareto-optimal solutions.
By using this unique feature, scenarios are designed to test the solutions. In future research, the proposed framework can be applied to a variety of networks and traffic conditions. Safety measures can be added to the multi-objective optimization framework. 3-D Pareto fronts can be acquired for pedestrian delay, emissions, and safety in an optimization framework.
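The non-dominated set from which the signal timings are chosen can be extracted with a few lines. The sketch below assumes both objectives (pedestrian delay, emissions) are minimised and uses made-up objective pairs, not values from the study:

```python
def pareto_front(points):
    """Return the non-dominated subset of (pedestrian_delay, emissions)
    pairs. A point is dominated if some other point is no worse in both
    objectives, since both objectives are minimised."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]
```

NSGA-II maintains such fronts generation by generation; the a posteriori approach then picks one trade-off point from the final front.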
-
ÖgeA novel energy-saving device for ships- gate rudder system(Lisansüstü Eğitim Enstitüsü, 2022) İlter Tacar, Zeynep ; Korkut, Emin ; 723963 ; Gemi İnşaatı ve Gemi Makineleri MühendisliğiIntelligent use of energy is one of the most important issues today. The increasing need for energy and the depletion of traditional energy resources showed us long ago that the use of renewable clean energy sources is essential. At the level of countries, it is obvious that those that dominate energy have a higher potential to exist and preserve their power in the future than other countries. On the other hand, we have only one planet on which we can live for now, and it has already signalled global climate change. Considering all this, the importance of managing clean energy resources efficiently can be understood. However, it is still not possible to rely entirely on renewable energy in most areas; ship transportation, for example, still runs on fossil fuels. In this case, our duty as engineers should be to use energy as efficiently as possible in the systems we design, whether the energy source is fossil or renewable. Reducing fuel consumption on ships is possible through various methods. These can be summarized as optimizing the hull design, selecting the main and auxiliary machinery in line with technological developments, planning routine maintenance and repair works correctly and without disruption, optimizing routes, and installing systems to improve ship propulsion efficiency. The use of systems to improve ship propulsion efficiency, the subject of this thesis (energy-saving devices, ESDs), has attracted considerable interest in recent years due to the rules restricting international emissions and the rising cost of fuel.
In addition, the fact that the Energy Efficiency Existing Ship Index (EEXI) of the International Maritime Organization (IMO) will enter into force in 2023 has made retrofit applications of energy-saving systems highly topical. Energy-saving devices are mostly static appendages, positioned in front of the ship's propeller, in the same plane as or on the propeller, or behind the propeller. According to their working principles, they can be grouped as systems that: prevent flow separation and improve wake field quality; reduce or compensate rotational losses; or reduce hub vortex losses. In this thesis, three different ESDs on two different ships were investigated. The first ship is a 7000 DWT chemical tanker that was studied in the STREAMLINE (European Union) project. For this ship (λ=16.5), a duct positioned in front of the propeller to improve the inflow to the propeller, and a stator positioned at the same location to reduce rotational losses, were studied separately. The parametrically investigated duct was generated from the MARIN19A geometry, with location, diameter, and chord length as the investigated parameters. Due to the restrictions imposed by the ship's stern form, it was decided that the duct should be positioned 0.3Dp (Dp: propeller diameter) ahead of the propeller plane. Nine different ducts were obtained by setting the duct diameter to 0.7, 0.8 and 0.9Dp and the chord length to 0.3, 0.4 and 0.5Dp. Numerical studies were carried out in StarCCM+ using the Computational Fluid Dynamics (CFD) method. In the calculations, the free-surface effect is ignored and the computational cost is minimised by using the double-body method. In the CFD study, the RANS equations are solved using the SST k-ω turbulence model. Open-water propeller analyses and bare-hull resistance analyses were validated against the test results, and then the propulsion analyses with and without ducts were performed using the MRF method.
As a result of the study, the duct with a diameter of 0.9Dp and a chord length of 0.4Dp increased the overall propulsive efficiency the most compared to the case without a duct. The second ESD investigated on the same tanker is the pre-swirl stator (PSS). The analyses were carried out using the RANS method and the SST k-ω turbulence model, without taking free-surface effects into account. A stator with a diameter of 0.9Dp, a chord length of 0.25Dp and a NACA0012 cross-section, likewise positioned 0.3Dp forward of the propeller, was developed. In its initial state the stator has four blades, named port upper, port central, port lower and starboard central, with angular positions of 315°, 270°, 225° and 90°, respectively (when viewed forward from the stern, 0° represents the upper blade tip of the propeller). Positions 2 and 3 are obtained by rotating 15° and 30° clockwise from this starting position, and Position 4 by rotating 15° counterclockwise. First, the stator was investigated in Position 1 with four blades, without the starboard blade, with a half-length starboard blade, and without the port upper blade; the design without the starboard blade gave the best overall propulsive efficiency, based on ηD, compared to the no-stator case. The study continued with the stator design without the starboard blade, and analyses were also carried out for the other angular blade positions (Positions 2, 3 and 4). After it was seen that Positions 1 and 2 gave the best results, work continued with Position 1, the initial position, and this time stator designs obtained by changing the blade pitch angles from 0° to 4°, -4° and -8° were examined. As a result, the stator with a pitch angle of -8° was found to give the best efficiency compared to the case without a stator. The second ship type is a 2400 GT cargo ship.
The full-scale vessel exists and is in service in Japanese inland waters. A new energy-saving system called the "Gate Rudder" system has been studied on this ship. The Gate Rudder System (GRS) is a propulsion unit consisting of twin rudder blades located on either side of the propeller. In this system, the rudder blades regulate the flow to the propeller, acting like a large nozzle covering the propeller, while providing additional thrust on top of that produced by the propeller. In addition, the rudder blades can be controlled separately, which increases the manoeuvrability of the ship. In this study, the GRS ship was compared with its sister ship equipped with a conventional rudder system (CRS). The two vessels operate on similar routes in Japan and sea trial measurement results are available for both; the results of the numerical and experimental studies were compared with these trials. The scale effect is a phenomenon that must be considered when determining the performance of ESDs: efficiency and power values obtained from model-scale tests or analyses may differ on the full-scale ship. For this reason, two different model scales (λ1=50.95 and λ2=21.75) and the full-scale ship were studied. The resistance and propulsion tests of the model with λ1=50.95 were carried out in Japan. However, this model is small, and a larger model was needed to examine the performance of the GRS and to investigate the scale effect. Therefore, the model with λ2=21.75 was produced, and the resistance, nominal wake, propulsion and flow visualisation experiments were carried out in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University. Resistance tests were carried out with the bare-hull model and the model with the conventional rudder. The self-propulsion experiments were carried out with both the GRS and the CRS, using the same model propeller for both rudder systems.
In addition to these two model scales, numerical analyses of the full-scale ship were also carried out. In the CFD studies, the RANS equations are solved taking free-surface effects into account and using the SST k-ω turbulence model. Since the influence of the scale effect on GRS performance was to be examined, the same mesh structure was used at all three scales. Because providing y+<5 on the full-scale ship would require a very large number of cells and increase the cost of the solution, the mesh was generated with y+>30 for all three scales. At the model scales, resistance analyses were performed for both y+ values, but self-propulsion analyses were performed only for y+>30. Propeller open-water curves are used to calculate propulsive efficiencies and hence power requirements. Experimental open-water curves are used for the CRS, while results from CFD are used for the GRS: the GRS was analysed as an open-water propulsor, and its efficiency and power values were calculated with the help of the curves obtained from this analysis. To compare the results at different scales and to examine the effect of scale on the results, the model-scale results were converted to full scale using the 1978 ITTC performance prediction procedure. Corrections were made in the extrapolation to full scale by considering factors such as the relatively thicker boundary layer at model scale, differences in frictional resistance, and surface roughness. The full-scale results obtained were compared with the sea trial measurements of the ships, and the λ2=21.75 model test results and the full-scale CFD results were found to be compatible.
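The open-water curves mentioned above reduce, for each operating point, to the standard non-dimensional propeller relations. The sketch below only restates those textbook definitions, with illustrative numbers rather than data from the thesis:

```python
import math

def advance_coefficient(Va, n, D):
    """Advance coefficient J = Va / (n * D): advance speed Va [m/s],
    rate of revolution n [1/s], propeller diameter D [m]."""
    return Va / (n * D)

def open_water_efficiency(J, KT, KQ):
    """Open-water efficiency eta_0 = (J / (2*pi)) * (KT / KQ), where
    KT = T / (rho * n^2 * D^4) and KQ = Q / (rho * n^2 * D^5) are the
    thrust and torque coefficients read off the open-water curves."""
    return (J / (2.0 * math.pi)) * (KT / KQ)
```

For the GRS, the same relations are applied to the CFD-derived curves of the whole propulsor rather than to experimental propeller curves.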
-
ÖgeA peak current controlled dimmable sepic led driver with low flicker(Graduate School, 2022-01-18) Örüklü, Kerim ; Yıldırım, Deniz ; 504181056 ; Electrical Engineering ; Elektrik MühendisliğiNowadays, a considerable part of the world's energy consumption comes from lighting used in buildings, industry, transportation, and commercial applications. Yet traditional energy resources are rapidly decreasing; therefore, energy-efficient lighting systems could be part of the solution to the global energy problem. Light-emitting diodes (LEDs) have attracted much attention lately and are expected to replace classical lamps due to characteristics such as high efficiency, long lifetime, environmental friendliness, robustness, and small size. However, a driver circuit is required to operate LEDs, and constant-current drivers can improve LED performance. Hence, studies on LED driver circuits and their control methods have recently increased both in industry and in academia. In some applications, it is desirable to control LED brightness. This can be done by a current-control method that adjusts the current flowing through the LEDs. However, IEEE Std 1789-2015 gives recommended practices for modulating current in high-brightness LEDs to mitigate health risks to viewers. Most driver circuits have been put on the market without any flicker measurements or checks against these recommended practices for percent flicker and flicker index. All light sources may exhibit flicker at various levels, but in LED lighting flicker generally arises when AC-to-DC conversion is present: because of the full-wave bridge rectification in AC-DC LED drivers, LED lamps exhibit a peak-to-peak current ripple at twice the line frequency (100 Hz or 120 Hz). Hence, for LED lighting, flicker depends mainly on the driver circuit.
Health risks and biological effects of flicker on viewers, such as headache, eyestrain, and seizures, cannot be ignored and should be taken into consideration when designing an LED driver. A flicker-free LED driver can improve visual performance and offer human-health-friendly lighting. In this thesis, a peak-current control method is proposed for a 30-Watt Single-Ended Primary Inductor Converter (SEPIC) LED driver with adjustable output current. The proposed control strategy is based on measuring the MOSFET peak current with a shunt resistor: when the sensed voltage reaches a peak threshold value, the controller turns the switch off, and the output current is adjusted to the desired level by changing this threshold. Both simulation and implementation of the driver have been carried out. A 220 V rms, 50 Hz AC mains supply is used as the driver input. Pulse Width Modulation (PWM) signals are generated using UC3842 and TL3845 integrated circuits (ICs). Flicker measurements are taken from the output current waveform. To validate the proposed peak current control method, a 33.6 Watt, 112 V / 0.3 A SEPIC LED driver prototype was constructed and tested. Analyses and measurements have been carried out for different output current levels. Peak efficiency is 88.4% at nominal output current. Furthermore, percent flicker values of 5.806% and 6.540% have been obtained at 300 mA and 100 mA, respectively. The proposed peak-current-mode-controlled SEPIC LED driver is found to offer LED brightness control for consumer comfort, a highly efficient system for energy savings, and a low-risk level of flicker for human health.
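The two flicker metrics of IEEE Std 1789-2015 reported above, percent flicker and flicker index, can be computed directly from one period of the sampled output waveform. The sample values in the test are illustrative, not the prototype's measurements:

```python
def percent_flicker(samples):
    """Percent flicker = 100 * (max - min) / (max + min) over one period
    of the light output (here approximated by the LED current samples)."""
    hi, lo = max(samples), min(samples)
    return 100.0 * (hi - lo) / (hi + lo)

def flicker_index(samples):
    """Flicker index = area above the mean level / total area under the
    waveform; with uniform sampling, sums stand in for the integrals."""
    mean = sum(samples) / len(samples)
    area_above = sum(s - mean for s in samples if s > mean)
    return area_above / sum(samples)
```

Both metrics go to zero for a ripple-free (DC) output current, which is why the driver design focuses on limiting the 100 Hz ripple.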
-
ÖgeA quantitative approach on human factor analysis in maritime operations(Lisansüstü Eğitim Enstitüsü, 2021) Erdem, Pelin ; Akyüz, Emre ; 686914 ; Deniz Ulaştırma MühendisliğiThe maritime authorities and international organizations have taken the pivotal role of the human element and its contribution to the safety of ship operations very seriously due to growing global concern over maritime disasters. The finding that at least 80 per cent of shipping casualties are related to the human element is underpinned by studies and investigation reports published by organizations such as the IMO (International Maritime Organization) and the ILO (International Labour Organization) and by experts in the field. Despite economic and technological improvements, tragic events have caused some of the worst environmental disasters in recent years, and the human element has never been so crucial to the safe operation of ships. However, although human contribution to unsafe shipboard operations needs to be a focal point of research, no qualified novel study has yet filled this gap in the maritime transportation industry. The purpose of this thesis is to develop a novel quantitative approach to evaluate human error probabilities (HEPs) and to analyse the operational risks that increase due to human errors. In this context, a hybrid approach incorporating Fault Tree Analysis (FTA) and an interval type-2 fuzzy-based Success Likelihood Index Method (SLIM) is developed. The approach, which also contributes to current human error probability assessment methods in the academic literature, is applicable to all shipboard operations regardless of vessel type. The study in this thesis is expected to provide supportive guidance that enables shipping companies to detect unsafe cargo operations early, before they get out of control.
With the risk assessment concentrated on human-related operational failure through the hybrid approach, system vulnerabilities that could result in an undesired event can be detected to a considerable extent, and awareness in shipping safety management is increased. The study is also expected to reach solid targets by providing both qualitative and quantitative data for maritime container transportation safety, as well as insight into what measures may be necessary to reduce future losses. The original aspects of the thesis include: a hybrid approach that differs from traditional HEP assessment; suitable customization to the containership platform; a methodology that involves key risk and performance shaping factors (PSFs) based on the literature, industry standards, the technical knowledge of marine experts, and analysis of marine accident investigation reports; increased consistency in expert judgements; and analysis of the root causes of major risks to operational safety. Implementing human error probability analysis integrated with risk analysis will provide a consistent tool for the maritime industry. As a result, the study, which offers proactive solutions to unsafe shipboard operations closely related to both the economic and environmental aspects of the maritime transportation industry, will provide tangible contributions to enhancing safety.
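The SLIM core used inside the hybrid approach follows the standard two-step calculation: a weighted Success Likelihood Index over the PSF ratings, then a log-linear calibration to a human error probability. The weights, ratings, and calibration constants below are illustrative placeholders, not the thesis's expert-elicited values:

```python
def success_likelihood_index(weights, ratings):
    """SLI = sum(w_i * r_i): weights are normalised PSF importance
    weights (summing to 1), ratings are the PSF ratings for the task."""
    assert abs(sum(weights) - 1.0) < 1e-9, "PSF weights must sum to 1"
    return sum(w * r for w, r in zip(weights, ratings))

def hep_from_sli(sli, a, b):
    """SLIM calibration log10(HEP) = a * SLI + b; the constants a and b
    are fixed from two anchor tasks with known error probabilities."""
    return 10.0 ** (a * sli + b)
```

In the interval type-2 fuzzy variant, the crisp ratings above are replaced by defuzzified values aggregated from the experts' fuzzy judgements.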
-
ÖgeA risk management framework for smart distribution systems(Graduate School, 2021-03-08) Soykan Üstündağ , Elif ; Bağrıyanık, Mustafa ; 702052002 ; Computational Science and Engineering ; Hesaplamalı Bilim ve MühendislikSmart grid enables an intelligent, effective, and reliable way of delivering energy by using information and communication technologies (ICT). It addresses environmental requirements through the integration of green energy resources and paves the way for new consumption areas such as electric vehicles. The increased adoption of ICT, on the other hand, makes smart grid assets a prime target for cyber threats. Therefore, having a proper cybersecurity strategy with defined risk management processes has become crucial for power distribution operators. Additionally, assessing security from the customer perspective introduces random behavior, which requires numerous computational simulations to represent. Smart grid distribution systems employ demand response programs to manage consumer demand by encouraging consumers to adjust their demand in a timely manner. Demand response programs enable distribution operators to balance the power grid load through planned and implemented methodologies. To achieve this, operator-consumer cooperation is essential, so that utilities can guide consumers to change their consumption tendencies by adopting price-based or incentive-based programs. Incentive-based programs attract consumers via contract-based or notification-based incentives; e.g., when a peak load occurs, the operator can send an SMS message to inform the consumer about the demand response event. Although SMS notifications are a very common and effective way to reach the consumer, they open a new attack surface. The first part of this study concentrates on the risk assessment of demand response for smart grid distribution systems.
A new domain-specific risk assessment methodology, combining the Smart Grid Information Security (SGIS) risk toolbox and the Open Web Application Security Project (OWASP) methodology, is proposed to identify threats and their impacts. In the proposed approach, the deficiency of the SGIS risk methodology is complemented by the OWASP methodology, since SGIS does not directly provide a method for likelihood analysis; a five-scale likelihood method is developed to accomplish the likelihood analysis in a broader sense. Based on the proposed risk assessment, a new threat that disturbs power grid reliability using SMiShing (SMS phishing) is explored. It is revealed that SMiShing attacks can damage the power grid through customer behavior, by victimizing customers, even if the attacker has no access to the power grid communication domain. In the second part, the newly identified attack is simulated for the defined demand response use case, Demand Response for Residential Customers, and the attack simulation is extended with a second use case, Demand Response for Electric Vehicle (EV) Charging, to analyze the impacts on the power grid. This is the first implementation in the smart distribution domain that focuses on the impacts of SMiShing attacks via use case realization. In the first use case, residential customers enrolled in an incentive-based demand response program are the target of the SMiShing attack. The implementation is simulated on a test system to analyze the reaction of the system under attack; the European Low Voltage Feeder Test System provided by IEEE is utilized for deterministic and randomized attack scenarios. For the second use case, the security requirements and threats for the EV ecosystem originating from different interfaces are first investigated.
Then the attack targeting the EV charging behavior of EV owners is simulated on the test system, taking the stochastic EV charging characteristics into account. In both use cases, the simulations are performed such that the attacker launches SMiShing attacks with fake incentives, aiming to change residential customers'/EV owners' behavior so as to create a high residential/EV charging load that leads to power grid disruptions. To measure how the attack scenarios affect the power grid, the open-source GridLAB-D power system simulator is used with the load profiles produced by the attack scenarios. The power flow solutions are evaluated using voltage, current, and power outputs to observe whether any voltage imbalance, line failure, or transformer overloading occurs. For both use cases, the analyses show that the attacks can severely affect the grid when the voltage and current values cannot stay within tolerable limits. These consequences affect the delivery of power, the distribution operator's business and reputation, and the consumer's service quality. Based on these outcomes and our findings, mitigation strategies beneficial to both operators and customers are proposed. To mitigate the aforementioned consequences and prevent possible attacks, countermeasures are provided for both attack scenarios, from both the operator and customer perspectives. Solutions and discussion are given on how distribution operators should handle the attack, how they should interact with the consumer to prevent attacks, what preventive actions they can take on the power grid to mitigate the attacks, and what customers should do to protect themselves from SMiShing attacks. It is concluded that SMiShing attacks are important because the security vulnerability originates from external sources, not directly from the power grid or smart grid ICT components.
Although the attack target is the power grid, the distribution operator may not recognize the root cause of the anomalies, as the consumer is the decisive actor and can act unintentionally. As load demand increases, the rapid penetration of EVs and controllable smart appliances in particular will place more load and stress on the power grid. Disruptions like voltage collapse, transformer overloading, and line failures may cascade to larger areas and lead to a significant negative impact on the operation of the power grid. Therefore, distribution operators should consider the SMiShing threat delivered via demand response notifications and address the necessary countermeasures.
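The kind of limit check used to flag grid stress in such simulations can be sketched in a few lines. The ±10% per-unit band and the bus voltages below are illustrative assumptions, not the limits or outputs of the GridLAB-D study:

```python
def voltage_violations(voltages_pu, v_min=0.90, v_max=1.10):
    """Return the indices of buses whose per-unit voltage falls outside
    the tolerable band [v_min, v_max] (assumed +/-10% here)."""
    return [i for i, v in enumerate(voltages_pu)
            if not (v_min <= v <= v_max)]
```

An attack scenario that pushes any bus outside the band would be flagged here as a potential disruption worth investigating.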
-
ÖgeA semi-automatic façade generation methodology of architectural heritage from laser point clouds: A case study on Architect Sinan(Lisansüstü Eğitim Enstitüsü, 2021) Kıvılcım, Cemal Özgür ; Duran, Zahide ; 709850 ; Geomatik MühendisliğiTangible cultural assets from different periods and civilizations reinforce historical and cultural memories passed from generation to generation. However, due to natural events, lack of proper maintenance, or wars, heritage structures can be damaged or destroyed over time. To preserve tangible cultural assets for the future, it is crucial that the maintenance, repair, and restoration of these buildings be of high quality. Hence, the preliminary phase of any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. The acquired data and derived models are used for various purposes in engineering and architectural applications, digital modeling and reconstruction, and virtual or augmented reality applications. However, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. With technological advances, laser scanning systems have become a preferred geospatial data acquisition technique in the heritage documentation process. Without any doubt, these systems provide many advantages over conventional measurement techniques, since data acquisition is carried out effectively and in a relatively short time. On the other hand, obtaining final products from point clouds is generally time-consuming and requires data manipulation expertise: the operator, who must know the structure, has to interpret the point cloud, select the key points representing the underlying geometry, and perform the vectorization process over these points. In addition, point data contain systematic and random errors.
The noisy point cloud data and its ambiguities make this process tedious and prone to human error. The purpose of this thesis is to reduce the user's manual workload in obtaining 3D models and products from point cloud data: a semi-automatic, user-guided methodology requiring few interventions is developed to interpret the geometry of architectural elements and establish fundamental semantic relationships from complex, noisy point clouds. First, the conventional workflow and methodologies in cultural heritage documentation were researched, and the bottlenecks of the current workflow were examined. Then, existing methodologies used in point-cloud-based 3D digital building reconstruction were assessed. From this, semi-automatic methods were identified as the more suitable approach for the 3D digital reconstruction of cultural heritage assets, which are more complex than modern buildings. Recently, Building Information Modeling (BIM) applications have gained momentum. BIM systems contribute greatly to project management, from the design to the operation of new modern buildings, and research on BIM applications for existing buildings has increased. In particular, such applications and research in cultural heritage are gathered under the term Heritage/Historic Building Information Modeling (HBIM). In HBIM, dedicated architectural style libraries are generated, and geometric models are produced by associating the geometries of architectural elements with point clouds. Such applications generally target Western architectural elements, whose construction techniques and the geometrical relations of whose architectural rules and orders have been documented with sketches and drawings for centuries. Detailed descriptions and fine sketches pertaining to the rules and styles of Ottoman architecture, by contrast, are limited.
Having been the capital of many civilizations, historic Istanbul is crowned with the many mosques of Architect Sinan, dating from the 16th century, the golden era of the Ottoman Empire. For his innovative structures, Architect Sinan is considered an architectural and engineering genius. Unfortunately, Sinan did not leave sufficient written or visual documentation of his works, and although many aspects of his works have been researched, few studies have addressed the geometry of the façade elements. Previous architectural research examines the ratios and compares the general architectural elements of Sinan's works (comparing the dimensions and locations of the elements). Building on this and on our observations of Sinan's mosques, we designed an object-oriented library of parametric objects for selected architectural façade elements, and introduced some fundamental semantic relations among the library elements. A case study for procedural modeling was then carried out. In the next stage, we determined that an algorithmic approach could be used to obtain parametric architectural elements from noisy point cloud data. We benefited from the Random Sample Consensus (RANSAC) algorithm, which has a wide range of applications in computer vision and robotics. RANSAC aims to estimate the parameters of a given mathematical model; it is a non-deterministic method that repeatedly selects, at random, the minimum number of data points required to instantiate the model and then measures how well the resulting hypothesis fits the entire data set. The method runs for a certain number of iterations and returns the most suitable model parameters, the data subset supporting that model, and the incompatible data. In addition, model-specific criteria and rules based on architectural knowledge were added to the developed methodology to reduce the number of iterations.
All algorithmic code was written in Python, using libraries such as NumPy for arrays and mathematical operations. Visualization was carried out using the Visualization Toolkit (VTK) on top of the Open Graphics Library (OpenGL); in addition, Python modules of the VTK C++ source libraries were compiled using CMake and Microsoft Visual Studio. As the application area of the study, the Şehzade Mosque, one of the most important mosques of Istanbul and Mimar Sinan's first selatin complex, was chosen. Point cloud data acquired with a terrestrial laser scanner for the documentation studies of the mosque was obtained for this study, and different case areas were determined from the point cloud datasets: the windows on the Qibla-direction façade and the domes of the mosque's roof covering were used, respectively. This choice was influenced by the variety of the window elements and by Sinan's use of the dome. In the case applications, the point cloud selected from the window areas was segmented semi-automatically by applying the proposed method recursively at different window levels from the inside to the outside. In the other case study, the algorithm performed the segmentation of the main dome. Following this segmentation, the point groups not included in the model are evaluated once more using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm from Python's scikit-learn and presented to the user as guiding output for identifying architectural elements and deformations. Using the above-mentioned typological relations of Sinan's domes with the mosque's main dome, point clusters were formed for modeling the other dome structures in the mosque.
Finally, as an example, the parametric dome model was converted to Industry Foundation Classes (IFC) format using open-source CAD software. Integrity and accuracy comparisons were made between the outputs of the presented methodology and the CAD drawings produced by the restoration architects from the same data; the results were within acceptable limits for general-scale studies. Additionally, the presented method contributed to the interpretation of the data by saving time for expert users. In summary, a method has been developed for the semi-automatic extraction of architectural parametric models working directly on the 3D point cloud, specific to Classical-era Ottoman mosques, particularly Architect Sinan's works, using a hybrid data- and model-driven 3D building reconstruction approach.
-
ÖgeA stable, energy and time efficient biped locomotion(Lisansüstü Eğitim Enstitüsü, 2021) Yılmaz, Sabri ; Gökaşan, Metin ; 725780 ; Kontrol ve Otomasyon MühendisliğiThis thesis presents two different walking strategies for biped robots while ensuring energy efficiency. The first strategy is a closed-loop walking controller based on the widely used 3-Dimensional (3D) Linear Inverted Pendulum Model (LIPM), from which the Zero Moment Point (ZMP) is calculated approximately. The coefficients of the closed-loop Proportional Integral (PI) controller are tuned by a Genetic Algorithm (GA); the controller is introduced to compensate for the dynamic insufficiency of the 3D LIPM. The key concept is to keep using the 3D LIPM, because of its ease of modeling, together with a closed-loop controller. For this purpose, the biped is modeled with the 3D LIPM, one of the best-known modeling approaches for humanoid robots due to its ease of use and quick computations during trajectory planning. Once the simple model is obtained, Model Predictive Control (MPC) is applied to the 3D LIPM to search for reference trajectories for the biped that satisfy the ZMP criterion. The second strategy is to express the ZMP in a detailed model instead of an approximate one. For this purpose, the biped is modeled with conventional robot modeling methods and a detailed expression of the ZMP is obtained; the problem is then redefined as a Nonlinear MPC problem. The highly complicated biped model is implemented in Matlab using the CasADi library, a symbolic framework suited to large-scale symbolic problems. The optimal control problem is solved with the Interior Point Optimizer (IPOPT), an optimization solver for large systems of equations. Solving the optimal control problem yields reference trajectories for the biped that satisfy the ZMP criterion. 
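The 3D LIPM relation underlying both strategies couples the center of mass (CoM) and the ZMP through xddot = (g / z_c) * (x - p) per horizontal axis. A minimal numerical sketch follows; the gravity and CoM-height values are assumed round numbers, not the thesis's robot parameters:

```python
G, ZC = 9.81, 0.8   # assumed gravity (m/s^2) and constant CoM height (m)

def lipm_step(x, xdot, p, dt):
    """One explicit Euler step of the LIPM (per horizontal axis):
    xddot = (g / z_c) * (x - p), where p is the ZMP position."""
    xddot = (G / ZC) * (x - p)
    return x + dt * xdot, xdot + dt * xddot

def zmp_from_com(x, xddot):
    """Inverse relation used by the ZMP criterion: p = x - (z_c / g) * xddot."""
    return x - (ZC / G) * xddot

# Sanity check: a ZMP directly under the CoM produces no acceleration,
# so a CoM that starts at rest stays put.
x1, xdot1 = lipm_step(x=0.1, xdot=0.0, p=0.1, dt=0.01)
print(x1, xdot1)
```

An MPC scheme, as in the first strategy, would roll `lipm_step` forward over a horizon and pick the input sequence keeping `zmp_from_com` inside the support polygon.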
Both strategies suggested in this thesis are studied and implemented on a legs-only biped robot, that is, a robot without upper-body elements. The main idea is that if the dynamic flaws can be suppressed without any upper-body elements, this study will open a way to work on more modular robots. After obtaining the two walking strategies, an energy-efficient trajectory for the swing leg is searched for, to allow longer working durations in the field. The Big Bang Big Crunch with Local Search (BBBC-LS) global optimization algorithm is used for energy efficiency. The newly defined trajectory reduced energy consumption by nearly 10% compared with the sinusoidal trajectory. To implement the algorithms on the real biped, a new communication library was written to meet the desired communication speed. However, the increased communication speed introduced random packet losses in the feedback from the motors. These packet losses were examined, and it was observed that they may make the system unstable; to suppress their effects, the problem was redefined as a time-delay problem. With this redefinition, the well-known Smith Predictor method is used to overcome the packet losses, and the results show that the instability risk due to packet losses disappears. In short summary, a two-legged robot has been modeled using conventional methods from the literature. First, the dynamic defects of the simple model are eliminated with a conventional controller. Second, a more detailed dynamic model is obtained. Walking planning is done with both methods, and comparisons are made with the method commonly used in the literature. The success of the proposed methods has been demonstrated in both simulations and experimental results. 
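The delay-compensation idea can be sketched with a discrete Smith predictor around a toy first-order plant; the plant model, delay length, and PI gains below are invented for illustration and are not the thesis's identified motor dynamics:

```python
from collections import deque

a, b, d = 0.9, 0.1, 5            # toy plant y+ = a*y + b*u[k-d], input delay d
kp, ki = 1.0, 0.2                # hand-picked PI gains for the sketch

y = 0.0                          # true (delayed) plant output
ym = 0.0                         # delay-free internal model output
u_buf = deque([0.0] * d, maxlen=d)    # inputs in flight toward the plant
ym_buf = deque([0.0] * d, maxlen=d)   # model outputs delayed by d steps
integ, r = 0.0, 1.0              # PI integrator state, setpoint

for _ in range(200):
    # Smith predictor feedback: delay-free prediction plus a correction
    # for plant/model mismatch (zero here, since the model is exact).
    y_fb = ym + (y - ym_buf[0])
    e = r - y_fb
    integ += e
    u = kp * e + ki * integ
    y = a * y + b * u_buf[0]     # plant sees the input d steps late
    ym_buf.append(ym)
    ym = a * ym + b * u          # model sees it immediately
    u_buf.append(u)

print(round(y, 3))               # settles near the setpoint despite the delay
```

With a perfect model the loop effectively closes around the delay-free dynamics, which is why treating predictable packet losses as a fixed delay removes the instability risk described above.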
With the two methods proposed in this thesis, the oscillation problem encountered by one of the most widely used walking models in the literature has been resolved. After obtaining stable walking, energy optimization is studied so that the robot can work longer in outdoor environments, and the trajectory is improved to reduce energy consumption during the robot's movement. Finally, a faster communication library is written to apply the designed algorithms to the real system; to solve the problems caused by communication speed, the problem is redefined with a different approach and the traditional Smith Predictor method is used. Thanks to the communication interfaces prepared for the mechanism, the random packet losses become predictable, and their effects are eliminated with the Smith Predictor. Finally, all these control methods are applied to the system and used in the experimental studies.
-
ÖgeA study on optimization of a wing with fuel sloshing effects(Graduate School, 2022-01-24) Vergün, Tolga ; Doğan, Vedat Ziya ; 511181206 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay MühendisliğiIn general, sloshing is defined as a phenomenon corresponding to the free-surface elevation in multiphase flows: the movement of a liquid inside another object. Sloshing has been studied for centuries; the earliest work in the literature was carried out by Euler in 1761 [17, 48], and Lamb [32] theoretically examined sloshing in 1879. Its importance has grown with the development of technology, and it appears in many different fields such as aviation, automotive, and naval engineering. In the aviation industry, it is considered in fuel tanks. Since the outcomes of sloshing may cause instability or damage to the structure, it is one of the concerns in aircraft design. One of the most popular ways to mitigate its adverse effects is adding baffles into the fuel tank, but this solution comes with a disadvantage: an increase in weight. To minimize the effects of the added weight, designers optimize the structure by changing its shape, thickness, material, etc. In this study, a NACA 4412 airfoil-shaped composite wing is used and optimized in terms of safety factor and weight. To do so, an initial composite layup is determined from current designs and advice from the literature. When the design of the initial system is completed, the system is imported into a transient solver in the Ansys Workbench environment to perform numerical analysis in the time domain. To achieve more realistic cases, the wing with different fuel tank fill levels (25%, 50%, and 75%) is exposed to aerodynamic loads while the aircraft is rolling, yawing, and performing a Dutch roll. The aircraft is assumed to fly at a constant speed of 60 m/s (~120 knots) when applying aerodynamic loads. 
The resultant force for 60 m/s airspeed is applied onto the wing surface by 1-way Fluid-Structure Interaction (1-way FSI) as a distributed pressure. With this method, only fluid loads are transferred to the structural system, and the effect of wing deformation on the fluid flow field is neglected. Once gravity effects and aerodynamic loads are applied to the wing structure, a rotation of 20 deg/s for 3 seconds is prescribed for each type of movement. The fluid side is described in the Ansys Fluent environment, where the fuel level, fluid properties, computational fluid dynamics (CFD) solver settings, etc., are defined. Once both the structural and fluid systems are ready, system coupling can perform 2-way Fluid-Structure Interaction (2-way FSI). In this method, fluid loads and structural deformations are exchanged at each step: the structural system transfers displacements to the fluid system while the fluid system transfers pressures to the structural system. After nine analyses, the critical case is determined with respect to the safety factor. The critical case, in which the system has the lowest minimum safety factor, is the 75% filled fuel tank during the Dutch roll maneuver. After the determination of the critical case, the optimization process is started. During optimization, 1-way FSI is used, since the computational cost of the 2-way FSI method is approximately 35 times that of 1-way FSI. However, taking less time is not by itself sufficient reason to accept 1-way FSI as the solution method; the deviation between the two methods was also investigated and found to be about 1% in terms of safety factor for this problem. In light of this, 1-way FSI is preferred for applying both sloshing and aerodynamic loads onto the structure to reduce computational time. After the method selection, thickness optimization is started. 
Ansys Workbench creates a design of experiments (DOE) to examine response surface points. Latin Hypercube Sampling Design (LHSD) is preferred as the DOE method since it generates non-collapsing, space-filling points that yield a better response surface. After creating the initial response surface using Genetic Aggregation, the optimization process is started using the Multi-Objective Genetic Algorithm (MOGA). The optimum values are then verified by analyzing the optimum candidates in Ansys Workbench. This verification revealed a notable deviation between the optimized and verified results. To minimize the deviation, refinement points were added to the response surface, and this process was repeated until the deviation fell below 1%. After finding the optimum results, it was noticed that their precision was too high to be manufacturable, so the thicknesses were rounded to a hundredth of a millimeter, and the final thickness values were verified. As a result, the optimum values are found: the weight is decreased from 100.64 kg to 94.35 kg, a 6.3% gain in terms of weight, while the minimum safety factor of the system is only reduced from 1.56 to 1.54. At the end of the study, it is concluded that a 6.3% reduction in weight would translate into energy savings.
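The non-collapsing property of Latin Hypercube Sampling can be demonstrated with SciPy's `qmc` module; the three design variables and their bounds below are hypothetical ply thicknesses, not the thesis's actual design space:

```python
import numpy as np
from scipy.stats import qmc

# Three hypothetical ply-thickness design variables (bounds in mm are made up).
sampler = qmc.LatinHypercube(d=3, seed=7)
unit = sampler.random(n=20)                   # 20 samples in the unit cube
doe = qmc.scale(unit, [0.2, 0.2, 0.2], [2.0, 2.0, 2.0])

# Non-collapsing property: along every axis, the 20 samples occupy 20
# distinct equal-width bins (one per "row" of the hypercube).
bins_per_axis = [len(set(np.floor(col * 20).astype(int))) for col in unit.T]
print(bins_per_axis)
```

Because no two samples share a bin in any dimension, projecting the design onto a single variable never wastes points, which is exactly why LHS builds better response surfaces than plain random sampling at the same budget.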
-
ÖgeA study on static and dynamic buckling analysis of thin walled composite cylindrical shells(Graduate School, 2022-01-24) Özgen, Cansu ; Doğan, Vedat Ziya ; 511171148 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay MühendisliğiThin-walled structures are widely used in many industries; examples include aircraft, spacecraft, and rockets. The reason for their use is their high strength-to-weight ratio. For a cylinder to be defined as thin-walled, the ratio of radius to thickness must be more than 20, and one of the problems encountered in the use of such structures is buckling. Buckling can be defined as a state of instability in a structure under compressive loads. This instability can be seen in the load-displacement graph as the curve following two different paths; the possible behaviors are snap-through and bifurcation. The compressive loading that causes buckling may be an axial load, a torsional load, a bending load, or external pressure, and buckling may also occur due to temperature change. Within the scope of this thesis, the buckling behavior of thin-walled cylinders under axial compression was examined. A cylinder under axial load exhibits some displacement; when the applied load reaches a critical level, the structure moves from one equilibrium state to another. After some point, the structure shows large displacements and loses stiffness. The load that the structure can carry decreases considerably, but the structure continues to carry load; the behavior after this point is called post-buckling behavior. The critical load level for the structure can be determined using the finite element method, and linear eigenvalue analysis can be performed to determine the static buckling load. 
However, it should be noted that eigenvalue-eigenvector analysis can only be used to make an approximate estimate of the buckling load and to feed the resulting buckling shape into nonlinear analyses as a form of imperfection. It may nevertheless be preferred for changing parameters and comparing them, since it is cheaper than other analysis types. Since the buckling load is highly affected by imperfections, nonlinear methods with geometric imperfection should be used to estimate a more precise buckling load. It is not possible to define geometric imperfection in a linear eigenvalue analysis, so a different analysis type must be selected in order to add it. For example, an analysis model that includes imperfection can be established with the Riks method as a nonlinear static analysis type. Unlike the Newton-Raphson method, the Riks method is capable of tracking back-turning equilibrium curves, which makes it suitable for buckling analysis. In a Riks analysis it is recommended to add an imperfection, in contrast to linear eigenvalue analysis, because the imperfection turns the bifurcation problem into a limit-load problem; without it, sharp turns in the equilibrium path can cause the analysis to diverge. Another nonlinear method for static phenomena is quasi-static analysis, which uses a dynamic solver. The important point here is that the inertial effects should be small enough to be neglected; for this purpose, kinetic energy and internal energy should be compared at the end of the analysis, and the kinetic energy should be verified to be negligible compared with the internal energy. Also, if the event is solved over its actual time length, the analysis will be quite expensive, so the time must be scaled. To scale the time correctly, a frequency analysis can be performed first and the analysis time chosen longer than the period corresponding to the first natural frequency. 
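For orientation, the classical buckling stress that a linear eigenvalue analysis approaches for an axially compressed isotropic cylinder is sigma_cr = E*t / (R*sqrt(3*(1 - nu^2))). A quick evaluation with assumed steel-like inputs (not one of the thesis's composite cases):

```python
import math

# Assumed isotropic properties and an illustrative thin-walled geometry.
E, nu = 200e9, 0.3       # Young's modulus (Pa), Poisson's ratio
R, t = 0.5, 0.002        # radius and wall thickness (m); R/t = 250 > 20

sigma_cr = E * t / (R * math.sqrt(3.0 * (1.0 - nu**2)))  # critical stress (Pa)
P_cr = sigma_cr * 2.0 * math.pi * R * t                  # critical axial load (N)
print(round(sigma_cr / 1e6, 1), "MPa,", round(P_cr / 1e3, 1), "kN")
```

Measured shells buckle well below this classical value, which is exactly the imperfection sensitivity that motivates the nonlinear Riks and quasi-static analyses above.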
For the three analysis methods mentioned in this study, validation studies were carried out with examples from the literature. Since each analysis type gave consistent results, the effect of parameters on the static buckling load was examined using the linear eigenvalue method, which is the cheapest and is sufficient for comparison studies. While displacement-controlled analyses were carried out for the static buckling cases, load-controlled analyses were performed for the determination of the dynamic buckling force, and the results were evaluated according to different dynamic buckling criteria. Some of these criteria are the Volmir criterion, the Budiansky-Roth criterion, and the Hoff-Bruce criterion. When the Budiansky-Roth criterion is used, an estimated buckling load is applied to the structure and the displacement-time graph is drawn; if a major change in displacement is observed, the structure can be assumed to have buckled dynamically. For the Hoff-Bruce criterion, the velocity-displacement graph is drawn; if this graph is not focused in a single area but scattered, the structure is considered to have moved into the unstable region. As in the static buckling analyses, the dynamic buckling analyses were first validated against a sample study from the literature. After validating the analysis methods, numerical studies were carried out on the effect of several parameters on the buckling load. First, the effect of the stacking sequence of the composite layers was examined. In this context, a comprehensive study was carried out on both which layer's angle change has the greatest effect and which angle gives the highest buckling load. In addition, some angle combinations were obtained in accordance with the angle-stacking rules found in the literature. 
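A crude version of the Budiansky-Roth check described above, flagging a load level whose displacement history jumps well above the early response, might look like this; the threshold and the synthetic histories are invented for illustration, not the thesis's FE post-processing:

```python
import numpy as np

def budiansky_roth_flag(w, jump_ratio=5.0):
    """Flag dynamic buckling when the peak displacement of a history w(t)
    greatly exceeds the early-time (pre-buckling) response."""
    early_peak = np.abs(w[: len(w) // 4]).max()
    return np.abs(w).max() > jump_ratio * max(early_peak, 1e-12)

t = np.linspace(0.0, 1.0, 400)
bounded = 0.01 * np.sin(20 * t)                                  # stays small
jumping = 0.01 * np.sin(20 * t) + 0.5 * np.maximum(t - 0.6, 0) ** 2
print(budiansky_roth_flag(bounded), budiansky_roth_flag(jumping))
```

In practice the criterion is applied by sweeping the load amplitude and locating where this flag first trips, i.e. where a small load increment produces a disproportionate displacement jump.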
For those stacking sequences, buckling forces were calculated both with finite element analyses and analytically. In addition, comparisons were made between different materials: the buckling load was calculated both for cylinders of the same thickness with different masses and for cylinders of the same mass with different thicknesses. The highest force value for cylinders of the same mass was obtained for a uniform composite. Although the highest buckling force in the same-thickness analyses was obtained for the steel material, the composite material gave the highest ratio of buckling load to mass. The length-to-diameter ratio and the effect of thickness were also examined: as the length-to-diameter ratio increases, the buckling load decreases, and the buckling load increases with the square of the thickness. In addition to these effects, the loading time and the shape of the loading profile also influence the dynamic buckling load. Furthermore, the critical buckling force is affected by imperfections in the structure, which usually arise during manufacturing. How sensitive a structure is to imperfection may vary depending on different parameters. Imperfections can be divided into three groups: geometric, material, and loading. Cylinders under axial load are particularly affected by geometric imperfection, which can be defined as how far the structure deviates from a perfect cylinder. The amount of deviation can be determined by different measurement methods; although it is not possible to measure the imperfection of every structure, studies in the literature give an idea of how much imperfection is to be expected. 
Both the change in the buckling load of measured cylinders and the effect of imperfection on the buckling load can be quantified by adding the measured imperfection to the buckling load calculations. In cases where the imperfection cannot be measured, an eigenvector imperfection obtained from linear buckling analysis can be included in the finite element model, and the critical buckling load of the imperfect structure can be calculated using nonlinear analysis methods. In this study, it was investigated how imperfection sensitivity changes under both static and dynamic loading for different parameters: the length-to-diameter ratio, the stacking sequence of the composite layers, and the shape of the added imperfection. The most important result of the imperfection-sensitivity study is that the effect of imperfection on the buckling load is quite high: even a geometric imperfection equal to the wall thickness can cause the buckling load to drop by up to half.
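The magnitude of this knockdown is often estimated in design with the empirical NASA SP-8007 factor gamma = 1 - 0.901*(1 - exp(-sqrt(R/t)/16)) for axially compressed cylinders; for a moderately thin cylinder it predicts roughly the halving noted above (the R/t value below is illustrative, not from the thesis):

```python
import math

def knockdown_sp8007(R_over_t):
    """Empirical knockdown factor gamma for axially compressed cylinders
    (NASA SP-8007): gamma = 1 - 0.901 * (1 - exp(-sqrt(R/t) / 16))."""
    phi = math.sqrt(R_over_t) / 16.0
    return 1.0 - 0.901 * (1.0 - math.exp(-phi))

gamma = knockdown_sp8007(250.0)   # illustrative radius-to-thickness ratio
print(round(gamma, 3))            # usable load is well under half the classical one
```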
-
ÖgeDesign of a reinforced concrete bridge according to AASHTO LRFD and determination of its performance by the nonlinear static pushover method(Fen Bilimleri Enstitüsü, 2020) Bulut, Şahin ; Darılmaz, Kutlu ; 636963 ; İnşaat MühendisliğiIn this thesis, a precast prestressed reinforced concrete bridge was designed according to the AASHTO LRFD 2017 specification. In addition, the performance of the designed bridge under the DD-1 and DD-2 earthquake levels was examined with the nonlinear static pushover analysis method. In the first chapter, the subject is introduced, the aim of the thesis is explained, and the literature reviewed during this study is discussed. In the second chapter, the reinforced concrete design of the bridge was carried out according to AASHTO LRFD 2017. In this context, the bridge dimensions and the material properties to be used in the bridge are introduced. The dead and live loads acting on the bridge were determined, and the precast girder was designed by carrying out its prestressing calculation. A computer model of the bridge was then created in SAP2000; all vertical, horizontal, and seismic loads, together with the load combinations prescribed by the specification, were defined in the program, and the structure was analyzed. Based on the analysis results, the reinforced concrete designs of the cap beam and of the columns at the intermediate piers were completed. In the third chapter, general information on nonlinear analysis is given, and the plastic hinge length to form in the columns and the moment-curvature relations of the column section are explained. The horizontal elastic design spectra for the DD-1 and DD-2 earthquake levels, used in the nonlinear analysis, are also given. The plastic rotation limits of the members were then determined according to the performance objectives in the May draft report on Highway and Railway Bridges and Viaducts prepared by Yüksel Proje together with the KGM (General Directorate of Highways). Finally, the steps of the nonlinear static pushover analysis are given. In the fourth chapter, the nonlinear analysis of the structure was performed and its performance under the DD-1 and DD-2 earthquakes was determined. 
In this context, the pushover and capacity curves of the bridge were obtained using SAP2000. The capacity curves were examined together with the horizontal elastic design spectra for both earthquake levels, and the performance point of the bridge was determined. The pushover analysis carried out up to the demanded displacement showed that no plastic hinges formed in the structure. Since no plastic hinges formed, it was determined that the designed bridge behaves elastically under both the DD-1 and DD-2 earthquakes and is at the Uninterrupted Use (Kesintisiz Kullanım) performance level. In the fifth chapter, the results obtained are summarized and interpreted.
-
ÖgeAction-guiding virtue ethics: The indispensability of practical wisdom and eudaimonia(Lisansüstü Eğitim Enstitüsü, 2021) Bozkaya, İkbal ; Bove, Geoff ; 669147 ; Siyaset ÇalışmalarıThe goal of this dissertation is to respond to the question of whether virtue ethics can provide action-guidance. In order to do so, the work is divided into eight chapters. The first chapter introduces the thesis question and the main arguments of the dissertation and provides a synopsis of the chapters as well as definitions of some terms and themes. The second chapter presents the revival of virtue ethics and the action-guidance objection. The following three chapters present different strands of virtue ethics that deal particularly with action-guidance and accounts of right action. The third chapter presents eudaimonistic virtue ethics by introducing central concepts such as virtue, practical wisdom, and eudaimonia as understood in Aristotle's ethics, focusing on Rosalind Hursthouse's systematic account of eudaimonistic virtue ethics. The fourth chapter explores Christine Swanton's target-centered (pluralist) virtue ethics and discusses the difficulties it bears in locating practical wisdom and in relating the virtues to each other. The fifth chapter looks at Michael Slote's agent-based (sentimentalist) virtue ethics and discusses the importance of fine inner states in his account of right action, yet emphasizes its failure to account for doing the right thing for the right reasons. The sixth chapter focuses on revisited eudaimonistic accounts of right action and argues that practical wisdom is the key notion for understanding how virtue ethics can guide action, even that of novices. The seventh chapter argues for the interconnectedness of practical wisdom and eudaimonia, and that eudaimonia is indispensable for a virtue ethics that provides action-guidance because it gives content to practical wisdom and validates the virtues. 
The eighth chapter provides a general summary of the dissertation and its main arguments.
-
ÖgeAdaptive signal processing based intelligent method for fault detection and classification in microgrids(Lisansüstü Eğitim Enstitüsü, 2021) Azizi, Resul ; Şeker, Şahin Serhat ; 724566 ; Elektrik MühendisliğiThe ever-increasing energy demand, the environmental issues of fossil fuels, and the high investment cost of establishing bulk power plants are steering energy planning toward more flexible, scattered, small-scale energy sources. The main feature of these new topologies is that they use renewable energy sources for electricity generation. They also require less time to plan, build, and operate, and they are close to the energy sources and local loads, so they are more efficient and have minimal environmental impact. However, besides their benefits and advantages, they pose new challenges for traditional power systems, including protection issues, stability concerns, and complex control systems. Traditional power systems consist of bulk generation followed by transmission and distribution. In this topology, generation can be planned because consumption at the transmission level is more predictable and fuel resources are always available for the generation units. The transmission system and its conditions can be monitored by state estimators and the SCADA system, so generation and consumption uncertainties are minimal and conventional protection is sufficient. Distribution systems, in turn, have had no generating units, are mostly radial, and are adequately protected by overcurrent protection; in these passive networks, protection systems as fast and reliable as those in transmission systems are not necessary. The initial role of the new energy sources was to act as a backup for bulk generation and to cover the small generation-consumption mismatch during peak consumption. 
On the other hand, huge demand growth, the long investment times of bulk generation units, and environmental concerns have made distributed energy resources (DERs) (wind, solar, biomass, etc.) popular in distribution systems. However, the contribution of the early DER groups to total generation was low, and their control systems are very sensitive to voltage disturbances such as faults. Thus, according to the grid codes, after any minor fault or disturbance in the system, the DERs were disconnected, synchronized manually, and reconnected after the fault was cleared. With the increasing penetration of DERs in distribution systems, they now play an important and rapidly growing role in the total generation of the system. Therefore, de-energizing all the DERs in an area of the distribution system after a fault can lead to stability problems due to the generation-consumption imbalance. Accordingly, a new concept called the microgrid emerged, established mainly in distribution systems. This topology is a microscale power system: it can operate autonomously and cover the total demand of its local distribution system, and, like the power system with its SCADA, it has an equivalent centralized monitoring and control system. In a distribution network converted to a microgrid, the total generation is almost sufficient for the total demand of the loads, so it can operate as a standalone, self-sufficient ecosystem separated from the main grid. The basic motivation for connecting this topology to the main grid through the PCC (point of common coupling) is to increase the total inertia of the system and enlarge the post-fault stability region. In addition, a microgrid can transfer energy to the main system if it produces more power than its loads consume, which can reduce the stress on bulk generation units. Last but not least, if the main upstream grid is disturbed, the microgrid can continue to supply its loads by disconnecting from the grid. 
In this new concept, the grid codes expect the microgrid to be able to ride through faults and disturbances thanks to low voltage ride through (LVRT) systems. In fact, as in a microscale model of the power system, the voltage of the DERs at the moment of fault occurrence is controlled by the LVRT, and the DERs continue to operate without disconnection after the fault is cleared by circuit breakers or other elements. Therefore, more complex control systems are required for the DERs. However, microgrids are distribution systems, and unlike traditional power systems they carry a high amount of uncertainty in generation and consumption (loads). The distribution system has changed from a passive network into an active, dynamic one in which topology, generation, and consumption change far faster than in conventional power systems. This situation constantly changes the fault current level and direction, so conventional overcurrent protection is completely insufficient. Also, due to the high penetration of sensitive DERs, prolonged fault currents are not allowed (stability concerns), and inverter-based DERs contribute very little to the fault current level. The current protection method for microgrids is adaptive protection, in which all operating conditions of the system are extracted and all components are continuously monitored by a central or decentralized control system, or even by dynamic load estimation. This model cannot be applied in a central control system, because large amounts of data would have to be processed at a high sampling rate, making real-time decisions impossible. Based on these facts, a new intelligence-based method for fault detection and classification in microgrids is proposed in this thesis. In the proposed method, three different adaptive signal processing methods are used to extract the short-time transient component of the signal instead of the fault current level. 
These methods transfer the data (feature extraction) into three different data spaces. Their main feature is that they do not use a predefined basis to decompose a signal: the basis adapts to the signal, and the extracted components depend on the noise level and the frequency content of the signal. An intelligence-based method called BrownBoost is used to make decisions in these data spaces, and the overall decision is taken by the majority vote of the three classifiers. Compared with traditional machine learning methods, the main unique feature of the proposed method is its adaptability and its use of a non-convex optimization method for detection and classification. The proposed method is an ensemble of weak classifiers and tries to learn the data space iteratively, step by step, adapting to the data that was misclassified in previous iterations. At the same time, the non-convex optimization gives it the intelligence to select or discard misclassified data: during training it can decide, iteration by iteration, to remove data that is an outlier or violates another class's region. This feature makes it robust against overfitting and practical for real-world measured data. The final BrownBoost decision is likewise made by a majority vote of the weak classifiers. The classifier works based on the margin: instead of only finding a classifier that minimizes the classification error, it selects the classifier that has the maximum discrimination between the data of every class. 
In total, the proposed method tries to classify the data in three different data spaces. The data space that achieves the maximum separation between the data of each class is less sensitive to noise; a classifier built in that space therefore makes fewer generalization errors on unseen data (it has a higher margin), and its BrownBoost classifier is given more voting power in the decision making. The results were tested on a benchmark microgrid. The DERs were modeled with detailed models to extract the true detailed form of the signal, and various control models and the fault ride-through feature of the DERs were implemented.
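The three-space majority vote can be sketched with toy data. BrownBoost itself is not implemented here; each "space" is reduced to a single thresholded feature acting as its classifier, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
labels = rng.integers(0, 2, n)                  # 0 = normal, 1 = fault (synthetic)

# Stand-ins for the three feature spaces: the same label corrupted by
# different noise levels, one thresholded feature per space.
spaces = [labels + rng.normal(0.0, s, n) for s in (0.3, 0.5, 0.8)]
votes = np.stack([(f > 0.5).astype(int) for f in spaces])

majority = (votes.sum(axis=0) >= 2).astype(int) # 2-of-3 majority vote
acc_each = [float((v == labels).mean()) for v in votes]
acc_major = float((majority == labels).mean())
print([round(a, 2) for a in acc_each], round(acc_major, 2))
```

The cleaner spaces classify more accurately, which illustrates why the thesis weights the vote: giving the higher-margin (less noisy) space more voting power pulls the combined decision toward the most reliable representation.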
-
ÖgeAdsorptive removal of heavy metal ions from aqueous solution using metal organic framework(Lisansüstü Eğitim Enstitüsü, 2021) Elaiwi, Fadhil Abid ; Sirkecioğlu, Ahmet ; 711381 ; Kimya MühendisliğiIndustrialization and the rapid increase in the human population are the causes of the increase in wastewater generation. Depending on the source, these wastes may contain hazardous pollutants such as heavy metals, toxic organic compounds, and dissolved inorganic solids. Heavy metals are a serious threat to environmental and human health. Due to their toxicity and carcinogenic effects, close attention must be paid to wastewaters containing heavy metals: even very small amounts of heavy metals can result in severe physiological and neurological damage. Therefore, numerous processes have been developed to treat wastewater and minimize this health hazard. These processes include membrane filtration, ion exchange, adsorption, chemical precipitation, nanotechnology treatments, and electrochemical and advanced oxidation processes. Ion exchange and adsorption are both physicochemical methods used to treat wastewaters containing heavy metals; in both cases a high surface area plays an important role. As a new generation of crystalline porous materials, metal-organic frameworks (MOFs) possess high surface areas, tunable pore structures and functionalizable surfaces. With these attributes, MOFs have an essential role in several fields, including wastewater treatment. Based on the affinity of the amino groups in chelating sites for heavy metal ions, a porous metal-organic framework (MOF), ED-MIL-101(Cr), was synthesized as an adsorbent for lead, copper, and cadmium ions. A hydrothermal method was used to synthesize the MOF samples. The functionalized MOF samples were characterized by powder X-ray diffraction (PXRD) to investigate the functionalization process and compare the synthesized MOF with the pristine MIL-101(Cr) samples.
Fourier transform infrared (FT-IR) spectroscopy was used to analyze the functional groups of the adsorbent before and after the treatment process, which is useful for estimating the mechanism of the recovery process and assessing the relationship between the ions and the adsorbent's sites. Scanning electron microscopy (SEM) and thermogravimetric analysis (TGA) were also performed to investigate the morphology and the thermal stability of the MOFs in a specified temperature range, respectively. Finally, the surface characteristics of the samples and the particle size distribution were investigated with N2 adsorption-desorption conducted at 77 K. In order to investigate the adsorption performance of ED-MIL-101(Cr) for the chosen heavy metal cations (Pb(II), Cu(II), and Cd(II) ions), batch experiments were conducted with single, binary, and ternary metal solutions. During these experiments the effects of experimental conditions such as pH, adsorbent dosage, and initial concentration were investigated. In order to evaluate the conditions for removing the three metal ions using ED-MIL-101(Cr), several isotherm models were tested to choose the model that best fits the experimental data. Normal and extended forms of the Freundlich, Langmuir, and Sips isotherms were adopted to analyze the adsorption behavior of the MOF samples. ED-MIL-101(Cr) exhibits maximum adsorption capacities of 82.55, 69.9 and 63.15 mg/g for Pb(II), Cu(II) and Cd(II), respectively. The experimental data revealed that the adsorption capacity of the adsorbent for the different metal ions at the same concentration mainly depends on the affinity of the adsorbent, which was in the order Pb(II) > Cu(II) > Cd(II) in single-ion solution. This selectivity order is governed mainly by ionic features such as ionic radius, electronegativity, and hydrated ionic radius.
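A Langmuir capacity like those reported above is typically extracted from batch equilibrium data. As a minimal sketch (the concentration values and the Langmuir constant below are hypothetical, not the thesis data; only the Pb(II) capacity is borrowed as an illustrative qmax), the common linearized Langmuir form Ce/q = (1/qmax)Ce + 1/(KL·qmax) can be fitted with an ordinary least-squares line:

```python
import numpy as np

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: q = qmax * KL * Ce / (1 + KL * Ce), with Ce the
    equilibrium concentration (mg/L), qmax the maximum adsorption capacity
    (mg/g) and KL the Langmuir constant (L/mg)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def fit_langmuir(Ce, q):
    """Fit via the linearized form Ce/q = (1/qmax)*Ce + 1/(KL*qmax):
    slope = 1/qmax, intercept = 1/(KL*qmax)."""
    slope, intercept = np.polyfit(Ce, Ce / q, 1)
    qmax = 1.0 / slope
    KL = slope / intercept
    return qmax, KL

# Synthetic equilibrium data generated from assumed parameters
Ce = np.linspace(5.0, 200.0, 20)     # equilibrium concentrations, mg/L (assumed)
q = langmuir(Ce, 82.55, 0.05)        # qmax borrowed from the reported Pb(II) value
qmax_fit, KL_fit = fit_langmuir(Ce, q)
```

On real data one would compare this fit against the Freundlich and Sips forms, as the thesis does, and keep the model with the best agreement.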
The influence of the ionic interactions between competing ions in a multi-ion solution, namely the interaction factor, was studied quantitatively, and its values were tabulated for the multi-ion systems. For further study, kinetic models were applied to investigate the adsorption mechanism of Pb(II), Cu(II), and Cd(II) ions on ED-MIL-101(Cr), and the rate-controlling steps were determined using kinetic methods. Linear forms of the pseudo-first-order, pseudo-second-order, and intra-particle diffusion equations were used to interpret the kinetic data. It was observed that the kinetic data obtained from the batch adsorption processes were well fitted by the pseudo-second-order model. A regeneration process for the exhausted ED-MIL-101(Cr) was also carried out to assess its recyclability for the adsorption of lead, copper, and cadmium ions; there was an insignificant change in the adsorption efficiency of the ED-MIL-101(Cr) samples after three adsorption-regeneration cycles. In order to simulate real-life conditions, adsorption experiments were also conducted in a dynamic system. For this part of the experimental work, a fixed bed of ED-MIL-101(Cr) was prepared for the continuous removal of Pb(II), Cu(II), and Cd(II) ions from aqueous solutions. A series of experiments was carried out in the fixed-bed system to obtain breakthrough-curve data for the adsorption of single and ternary metal ions. The effects of different operating conditions such as static bed height (2, 4, and 6 cm), flow rate (10, 15, and 20 mL/min), and initial concentration of heavy metal ions (50, 75, and 10 mg/L) on the removal efficiency were investigated. The experimental breakthrough data of the three metal ions were fitted well by the theoretical models. The breakthrough curves for the single and multiple systems showed that Pb(II) has the longest breakthrough time compared with the other metals, indicating a high affinity toward this ion, while Cd(II) had the shortest breakthrough time.
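The pseudo-second-order fit mentioned above is commonly done on the linearized form t/qt = 1/(k2·qe²) + t/qe, whose slope and intercept give qe and k2 directly. A minimal sketch with hypothetical data (not the thesis measurements; the time grid and parameters are assumed):

```python
import numpy as np

def pso_uptake(t, qe, k2):
    """Pseudo-second-order uptake: qt = k2*qe^2*t / (1 + k2*qe*t),
    with qe the equilibrium capacity (mg/g) and k2 the rate constant."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

def fit_pso(t, qt):
    """Fit the linear form t/qt = 1/(k2*qe^2) + t/qe:
    slope = 1/qe, intercept = 1/(k2*qe^2)."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = slope**2 / intercept
    return qe, k2

t = np.linspace(1.0, 300.0, 30)           # contact time, min (assumed)
qt = pso_uptake(t, qe=80.0, k2=0.002)     # assumed parameters
qe_fit, k2_fit = fit_pso(t, qt)
```

In practice the pseudo-first-order and intra-particle diffusion forms would be fitted the same way, and the model with the highest correlation to the batch data retained, as in the thesis.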
The Thomas and Yoon-Nelson models were used to evaluate the breakthrough curves and the dynamic data. The results from these two models suggest that the maximum adsorption capacities of the investigated heavy metal ions from single aqueous solutions are in the order Pb(II) > Cu(II) > Cd(II). These results are in agreement with the experimental data and are likewise related to the affinity of the adsorbent for the adsorbed ions. Comparatively, the Yoon-Nelson model best fits the data obtained from the metal adsorption experiments conducted with various bed lengths. It can be concluded that amino-functionalized MIL-101(Cr) is a promising candidate for metal-ion removal from the aqueous environment.
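The Yoon-Nelson evaluation can be sketched the same way as the batch fits: the model Ct/C0 = 1/(1 + exp(k(τ − t))) linearizes to ln(Ct/(C0 − Ct)) = k·t − k·τ, so the rate constant k and τ (the time to 50% breakthrough) follow from a least-squares line. The data below are hypothetical, not the thesis's fixed-bed measurements:

```python
import numpy as np

def yoon_nelson(t, k, tau):
    """Yoon-Nelson breakthrough model: Ct/C0 = 1 / (1 + exp(k*(tau - t))),
    with tau the time required for 50% adsorbate breakthrough."""
    return 1.0 / (1.0 + np.exp(k * (tau - t)))

def fit_yoon_nelson(t, ratio):
    """Fit the linearized form ln(ratio / (1 - ratio)) = k*t - k*tau:
    slope = k, intercept = -k*tau."""
    slope, intercept = np.polyfit(t, np.log(ratio / (1.0 - ratio)), 1)
    k = slope
    tau = -intercept / slope
    return k, tau

t = np.linspace(10.0, 250.0, 25)            # service time, min (assumed)
ratio = yoon_nelson(t, k=0.05, tau=120.0)   # assumed rate constant and 50% time
k_fit, tau_fit = fit_yoon_nelson(t, ratio)
```

The Thomas model has a closely related linearized form, which is why the two are usually fitted side by side, as in the thesis.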
-
ÖgeAerospike nozzle design and analysis(Institute of Science and Technology, 2020-07-21) Farrag, Sherif ; Edis, Fırat Oğuz ; 511171132 ; Aeronautical and Astronautical Engineering ; Uçak ve Uzay MühendisliğiThis research designs an aerospike nozzle contour, with a discussion and investigation of the underlying theory. The contour is determined using a Matlab code written to give maximum performance for the given conditions. Excel is used to process the contour points; 2D and 3D models suitable for import into Ansys are then built in Solidworks and imported into Ansys Fluent for CFD parameter calculations and analysis. The truncated nozzle is analyzed at different truncation percentages; 40% truncation showed the maximum performance. Base bleed is added and analyzed. A new conceptual design, the "Hybrid Aerospike-Conical Nozzle", is introduced and analyzed for the first time in this research; CFD analysis showed a dramatic thrust increase of 4.6%. Secondary jets for thrust vector control are added, analyzed, and optimized at different positions (20% and 90%, measured from the throat). The 90% position showed the maximum performance, since the amplification factor was maximized there.
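Aerospike contour codes of this kind are built on standard compressible-flow relations. The sketch below (not the thesis's Matlab code; γ = 1.4 is assumed) shows the Prandtl-Meyer function and isentropic area ratio that approximate contour constructions such as Angelino's method rely on: the local turning angle along the spike is δ(M) = ν(Me) − ν(M) as M runs from 1 at the throat to the exit Mach number Me.

```python
import numpy as np

GAMMA = 1.4  # assumed ratio of specific heats

def prandtl_meyer(M, g=GAMMA):
    """Prandtl-Meyer angle nu(M), in radians, valid for M >= 1."""
    return (np.sqrt((g + 1) / (g - 1))
            * np.arctan(np.sqrt((g - 1) / (g + 1) * (M**2 - 1)))
            - np.arctan(np.sqrt(M**2 - 1)))

def area_ratio(M, g=GAMMA):
    """Isentropic area ratio A/A* as a function of Mach number."""
    return (1.0 / M) * ((2.0 / (g + 1))
            * (1.0 + (g - 1) / 2.0 * M**2)) ** ((g + 1) / (2.0 * (g - 1)))

def mach_from_area_ratio(ar, g=GAMMA, lo=1.0, hi=50.0):
    """Supersonic-branch Mach number for a given A/A*, by bisection
    (A/A* is monotonically increasing for M > 1)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, g) < ar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Given a design expansion ratio, `mach_from_area_ratio` yields Me, `prandtl_meyer` gives the total turning angle, and the contour points follow from the geometry of the expansion fan, which is the part a full contour code then implements.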
-
ÖgeA genetic algorithm application for the placement of post-disaster field hospitals: the Istanbul case(Fen Bilimleri Enstitüsü, 2020) Kömürcü, Yeşim ; Uğurlu, Seda ; 632858 ; Endüstri MühendisliğiMost post-disaster losses stem from the absence of humanitarian-aid planning or from inadequate implementation. The placement of temporary field hospitals and the assignment of the injured to hospitals are key issues in natural-disaster management. Although the existing hospitals have emergency service units, these capacities are expected to be insufficient for the injured in the event of a severe earthquake in Istanbul. Therefore, field hospitals that can be built rapidly after the disaster and serve as additional capacity are needed. Determining the most suitable locations for these field hospitals is important for reducing the response time to the injured. In addition, the optimal assignment of the injured to the existing hospitals and field hospitals will also help reduce the response time and use capacity efficiently. The aim of this study is to minimize the total travel cost and the field hospital setup cost in order to respond to all of the injured as quickly as possible. Since the problem is NP-hard, and solving mathematical models would either take very long or fail to find a solution, metaheuristic methods were used. For this purpose, a genetic algorithm (GA), a heuristic solution method, was developed in MATLAB. Different crossover and replacement strategies were tested to improve the performance of the algorithm. In the experimental study, the performance of 4 different GA strategies was compared using experimental data sets whose optimal solutions are known. Among the crossover types, combination crossover performed better than two-point crossover and found the optimal solution.
Among the replacement types, the algorithm that forms the new generation by eliminating the worst individual reached the optimal solution in a shorter time than the algorithm that forms the new generation by eliminating 50% of the population. Based on this comparison, the best GA was selected for the real Istanbul data set and applied to it. According to the Japan International Cooperation Agency, Bahçelievler and Küçükçekmece, among the districts expected to have the most deaths and severe injuries in a possible devastating Istanbul earthquake, were selected for the application. In the application, two models were solved for the Bahçelievler and Küçükçekmece districts, one without a distance constraint and one with a distance constraint added. Within the scope of the sensitivity analysis, 8 models created with different numbers of injured, distance constraints, and field hospital capacities were evaluated over multiple runs. In the models where the number of injured and the field hospital setup cost are fixed and the distance constraint varies, the number of opened field hospitals increases as the distance-constraint radius decreases, so even though the travel cost decreases, the total cost increases. When the number of injured and the distance constraint are the same but the field hospital capacity, and hence the setup cost, increases, the number of opened field hospitals decreases considerably. When the field hospital capacity and the distance constraint are the same but the number of injured changes, the travel cost and the setup cost increase in direct proportion to the increase in the number of injured. Model 4 is the least costly model; in Model 4, the injured are assigned to hospitals within a radius of at most 5 km, so both the cost is lower and the travel time of the injured is considerably shortened.
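The placement-and-assignment logic described in this study can be sketched as a small genetic algorithm. The instance below (distances, setup cost, population settings) is entirely hypothetical and is not the thesis's MATLAB implementation; it only illustrates the open/close chromosome, a uniform-style crossover, and the 50%-elimination replacement strategy that the study compares against worst-individual elimination:

```python
import random
from itertools import product

def total_cost(open_mask, setup_cost, dist):
    """Setup cost of the opened field hospitals plus the travel cost of
    assigning every casualty cluster to its nearest opened hospital."""
    open_idx = [j for j, o in enumerate(open_mask) if o]
    if not open_idx:
        return float("inf")   # infeasible: no hospital opened
    travel = sum(min(row[j] for j in open_idx) for row in dist)
    return setup_cost * len(open_idx) + travel

def uniform_crossover(a, b):
    # each gene taken from either parent; two-point crossover is the alternative
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ch, rate=0.05):
    return [1 - g if random.random() < rate else g for g in ch]

def run_ga(setup_cost, dist, n_fac, pop_size=30, generations=100, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_fac)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: total_cost(c, setup_cost, dist))
        parents = pop[: pop_size // 2]   # eliminate the worst 50%
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append(mutate(uniform_crossover(a, b)))
        pop = parents + children
    return min(pop, key=lambda c: total_cost(c, setup_cost, dist))

# Hypothetical instance: 6 casualty clusters x 3 candidate field hospitals
dist = [[1, 5, 9], [2, 6, 8], [8, 3, 2], [7, 2, 3], [9, 8, 1], [6, 7, 2]]
setup = 4
best = run_ga(setup, dist, n_fac=3)
best_cost = total_cost(best, setup, dist)
```

The real problem additionally carries hospital capacities and the distance-constraint radius; those would enter `total_cost` as penalty terms or feasibility checks on the assignment step.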