LEE- Uçak ve Uzay Mühendisliği Lisansüstü Programı
Browsing LEE- Uçak ve Uzay Mühendisliği Lisansüstü Programı by date of issue
-
Öge: Teknoloji geliştirme bölgelerinin hizmet kalitesinin ölçümü: Türkiye genelinde bir uygulama (Fen Bilimleri Enstitüsü, 2020) Özyurt, Mehmet Akif ; Özkol, İbrahim ; 656880 ; Uçak ve Uzay Mühendisliği Ana Bilim Dalı
Products based on the production of knowledge, and on technological production as its output, have left their mark on our age, and the period we live in has been called the "Information Age" by many thinkers. In this age, the power of the countries at the center of high-technology production stems not from the size of their land or capital but from the size of their well-educated workforce and from channeling that workforce into high-technology production. Countries with highly educated populations also achieve high levels of production quality and output. In the current century, the pace of scientific and technological development has accelerated dramatically; most of the advances to date have occurred within the last 30 years, and this pace keeps increasing. It is therefore reasonable to expect that, even in the near term, a world far more advanced scientifically and technologically than today's will emerge. High-technology production has also become the decisive factor in the race for competitive advantage. Consequently, increasing competitiveness no longer depends merely on reducing costs or responding quickly to consumer preferences and demands, but on continuous improvement, innovation, and invention. Those who succeed in turning technological findings into marketable products or services, new production or distribution methods, or new service mechanisms, in other words in technological innovation, now dominate world markets. Regions that host companies and institutions producing high value-added products, employing well-educated people, and generating such R&D-based technological developments and innovations are called "technoparks" or, under the relevant Turkish law, "Technology Development Zones" (TGB). Conceptually, technoparks are instruments that help to create and diffuse the flow of science and technology between R&D performers, universities, and industrial firms. Through the synergy created by incubation mechanisms, they also facilitate the growth of science- and technology-based firms. In these zones, firms are encouraged to innovate using high technology and support instruments, thereby producing high value-added products. The International Association of Science Parks defines technoparks as professionally managed organizations whose main aim is to increase the wealth of their community by promoting a culture of innovation and the competitiveness of their businesses and knowledge-based institutions. To achieve these goals, technoparks manage the flow of knowledge and technology among universities, R&D performers, and firms; facilitate the creation and growth of innovation-based companies through incubation and spin-off mechanisms; and provide high-quality space and services that attract other value-adding companies and services. In light of these definitions, Technology Development Zones can also be regarded as science and technology clusters.
Indeed, in general terms, technoparks are also described as clusters of enterprises that come together around innovative ideas, produce or use advanced technology, market that technology, and draw on an R&D center or a university. The differences among these definitions stem from differences in the technoparks' sizes and fields of activity. As locations where high-technology producers concentrate, technoparks are used as effective instruments for increasing employment, developing industry by building up the necessary knowledge base, supporting firms to expand training opportunities together with universities, and increasing the number of SMEs as well as supporting them. One of the most fundamental aims of technoparks is therefore to establish cooperation among universities, industry, and government and, by creating knowledge- and technology-intensive locations, to raise regional, national, and international competitiveness and thereby contribute to national development. Technoparks are areas with new, high-technology infrastructure that change the employment structure of countries in a positive direction and play an important role in reducing unemployment; examples can be seen in developed, industrialized countries with long technopark experience. Partly as a result of this change and development, the sectoral distribution of employment has also shifted. In the past, the distribution of the labor force between agriculture and industry was seen as a measure of development; today, the share of employment in the technology sector is taken as such a measure. In Germany, for example, the formerly high employment shares of agriculture and traditional industries have declined markedly, and employment has shifted toward sectors producing high-technology products. In technoparks, the aim is for all actors in the university-industry-government triangle to benefit: firms that lack the resources to invest in R&D are supported, and knowledge produced in universities is commercialized and transferred to them. The technopark interface created in this way is expected to make important contributions to the economic structure of the university, industry, region, and country. Indeed, the knowledge flowing from technoparks to industry plays an effective role in modernizing industrial production and in basing production on knowledge and technology. In other words, technoparks are intended to give industry access to knowledge produced at universities and to let that knowledge find application in industry. This study aims to reveal the gap between the service quality offered by technoparks operating in Turkey and the service quality perceived by the actors using those services, and to determine the satisfaction levels of customers (R&D performers) using the SERVQUAL scale. The study also investigates whether there is a relationship between how long technoparks have been in operation and customers' perceptions of their service quality, and whether service quality influences firms' decisions when moving from one technopark to another. Finally, the technoparks operating in Turkey are ranked in terms of service quality using the VIKOR method.
The study employs the service-quality measurement factors of the SERVQUAL scale. Developed by Parasuraman et al. (1988) to determine service quality, the SERVQUAL instrument has been used widely in service businesses ranging from sports facilities to hotels. This study is the first, either in Turkey or abroad, to use the scale to measure the service quality of technoparks treated as service businesses. The scale was therefore first adapted to technoparks, its reliability and validity were established, and the analyses were then carried out. The SERVQUAL dimensions used are "Tangibles", "Reliability", "Responsiveness", "Assurance", and "Empathy". The tangibles dimension covers the physical appearance of the equipment used in the buildings, the communication materials, and the staff. The reliability dimension captures whether technoparks deliver their services accurately and on time. The responsiveness dimension measures technoparks' willingness to help customers, to provide prompt service, and to complete work on time. The assurance dimension measures whether the service personnel working in technoparks have the necessary and sufficient knowledge. The empathy dimension aims to determine the level of respect, courtesy, and sincerity of the employees in direct contact with customers. Measuring the service-quality levels of technoparks will also play a pioneering role for future scientific research; since no comparable study exists either in Turkey or abroad, the results are expected to be of great importance to technopark management companies as well.
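As an illustration only (not part of the thesis and with invented names, weights, and scores), the sketch below shows how SERVQUAL gap scores and a VIKOR ranking of this kind are commonly computed:

```python
import numpy as np

def servqual_gaps(perception, expectation):
    """Per-dimension SERVQUAL gap: perception minus expectation."""
    return perception - expectation

def vikor_ranking(scores, weights, v=0.5):
    """Rank alternatives (rows) on benefit criteria (columns) with VIKOR.
    Smaller Q means a better compromise ranking."""
    f_best, f_worst = scores.max(axis=0), scores.min(axis=0)
    d = (f_best - scores) / (f_best - f_worst)   # normalized distance to the ideal
    S = (weights * d).sum(axis=1)                # group utility
    R = (weights * d).max(axis=1)                # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return np.argsort(Q)

# Example: three hypothetical technoparks scored on the five SERVQUAL dimensions.
perception = np.array([[4.1, 3.8, 3.9, 4.0, 3.7],
                       [3.6, 3.9, 4.2, 3.8, 4.0],
                       [4.3, 4.1, 3.5, 3.9, 3.8]])
expectation = np.full_like(perception, 4.5)
gaps = servqual_gaps(perception, expectation)    # negative values = unmet expectations
order = vikor_ranking(perception, weights=np.full(5, 0.2))
```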
-
Öge: Optimization-based control of cooperative and noncooperative multi-aircraft systems (2020) Başpınar, Barış ; Koyuncu, Emre ; 625456 ; Uçak ve Uzay Mühendisliği
In this thesis, we mainly focus on developing methods that ensure autonomous control of cooperative and noncooperative multi-aircraft systems. In particular, we focus on aerial combat, the air traffic control problem, and the control of multiple UAVs. We propose two different optimization-based approaches and their implementations with civil and military applications. In the first method, we benefit from hybrid system theory to present the input space of the decision process. Then, using a problem-specific evaluation strategy, we formulate an optimization problem in the form of integer/linear programming to generate the optimal strategy. As a second approach, we design a method that generates control inputs as continuous real-valued functions instead of predefined maneuvers. In this case, we benefit from differential flatness theory and flatness-based control. We construct optimization problems in the form of mixed-integer linear programming (MILP) and non-convex optimization. In both methods, we also benefit from game theory when there are competitive decision makers. We give the details of the approaches for both civil and military applications. We present the details of the hybrid maneuver-based method for air-to-air combat. We use the performance parameters of the F-16 to model the aircraft for military applications. Using hybrid system theory, we describe the basic and advanced fighter maneuvers. These maneuvers form the input space of the aerial combat. We define a set of metrics to represent air superiority. Then, the optimal strategy generation procedure is formulated as a linear program. Afterwards, we use a similar maneuver-based optimization approach to model the decision process of the air traffic control operator. We mainly focus on providing a scalable and fully automated ATC system and on redetermining the airspace capacity via the developed ATC system. Firstly, we present an aircraft model for civil aviation applications and describe guidance algorithms for trajectory tracking. This model and these algorithms are used to simulate and predict the motion of the aircraft. Then, the ATCo's interventions are modeled as a set of maneuvers. We propose a mapping process to improve the performance of separation assurance and formulate an integer linear program (ILP) that benefits from the mapping process to ensure safety in the airspace. Thereafter, we propose a method to redetermine the airspace capacity. We create a stochastic traffic environment to simulate traffic at different complexity levels and define the breaking point of an airspace with regard to different metrics. The approach is validated on real air traffic data for en-route airspace, and it is shown that the designed ATC system can manage traffic much denser than current traffic. As a second approach, we develop a method that generates control inputs as continuous real-valued functions instead of predefined maneuvers. It is also an optimization-based approach. Firstly, we focus on the control of multi-aircraft systems. We utilize STL specifications to encode the missions of the multiple aircraft. We benefit from differential flatness theory to construct a mixed-integer linear program (MILP) that generates optimal trajectories satisfying the STL specifications and performance constraints. We utilize air traffic control tasks to illustrate our approach.
We present a realistic nonlinear aircraft model as a partially differentially flat system and apply the proposed method to managing approach control and solving the arrival sequencing problem. We also simulate a case study with a quadrotor fleet to show that the method can be used with different multi-agent systems. Afterwards, we use a similar flatness-based optimization approach to solve the aerial combat problem. In this case, we benefit from differential flatness, curve parametrization, game theory, and receding horizon control. We present the flat description of the aircraft dynamics for military applications. We parametrize the aircraft trajectories in terms of flat outputs. With the help of game theory, the aerial combat is modeled as an optimization problem over the parametrized trajectories. This method allows the problem to be posed in a lower-dimensional space with all given and dynamical constraints; therefore, it speeds up the strategy generation process. The optimization problem is solved with a moving time horizon scheme to generate optimal combat strategies. We demonstrate the method on aerial combat between two UAVs and show its success through two different scenarios.
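As an illustration of the flatness idea only (this is not the thesis code), the sketch below recovers speed and heading from a polynomial flat output for a simple planar kinematic aircraft model; the model, the polynomial coefficients, and the horizon are invented for the example:

```python
import numpy as np

# Minimal sketch: differential flatness for the planar kinematic model
# x' = V cos(psi), y' = V sin(psi).  The flat output is the position (x, y);
# speed and heading follow from its derivatives, so a trajectory optimizer
# can work directly on the flat-output curve parameters.

def flat_outputs_to_states(t, coeffs_x, coeffs_y):
    """Recover V and psi from polynomial flat outputs x(t), y(t)."""
    x_dot = np.polyval(np.polyder(coeffs_x), t)
    y_dot = np.polyval(np.polyder(coeffs_y), t)
    V = np.hypot(x_dot, y_dot)        # airspeed from flat-output derivatives
    psi = np.arctan2(y_dot, x_dot)    # heading angle
    return V, psi

# Example: a cubic flat-output trajectory over a 10 s horizon (illustrative numbers).
t = np.linspace(0.0, 10.0, 101)
cx = [0.02, -0.3, 20.0, 0.0]
cy = [0.00,  0.1,  5.0, 0.0]
V, psi = flat_outputs_to_states(t, cx, cy)
```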
-
Öge: Dynamic and aeroelastic analysis of advanced aircraft wings carrying external stores (Lisansüstü Eğitim Enstitüsü, 2021) Aksongur Kaçar, Alev ; Kaya, Metin Orhan ; 709160 ; Uçak ve Uzay Mühendisliği
This study investigates the dynamic and aeroelastic behavior of advanced aircraft wings carrying external stores and subjected to a follower force. The effects of the weight of the external stores, their position and placement relative to one another, the orientation of the composite layers, and the thrust force are examined, and the influence of each on the natural frequencies and the critical flutter speed of the wing is determined.
-
Öge: Investigations on the effects of conical bluff body geometry on nonpremixed methane flames (Graduate Institute, 2021) Ata, Alper ; Özdemir, İlyas Bedii ; 675677 ; Department of Aeronautics and Astronautics Engineering
This thesis is composed of three experimental studies, of which the first two are already published and the third is under peer review. The first study investigates the effects of a stabilizer and the annular co-flow air speed on turbulent nonpremixed methane flames stabilized downstream of a conical bluff body. Four bluff body variants were designed by changing the outer diameter of a conically shaped object. The co-flow velocity was varied from zero to 7.4 m/s, while the fuel velocity was kept constant at 15 m/s. Radial distributions of temperature and velocity were measured in detail in the recirculation zone at vertical locations of 0.5D, 1D, and 1.5D. Measurements also included the CO2, CO, NOx, and O2 emissions at points downstream of the recirculation region. Flames were visualized under 20 different conditions, revealing various modes of combustion. The results showed that not only the co-flow velocity but also the bluff body diameter plays an important role in the structure of the recirculation zone and, hence, in the flame behavior. The second study analyzes the flow, thermal, and emission characteristics of turbulent nonpremixed CH4 flames for three burner heads of different cone heights. The fuel velocity was kept constant at 15 m/s, while the co-flow air speed was varied between 0 and 7.4 m/s. Detailed radial profiles of the velocity and temperature were obtained in the bluff body wake at three vertical locations of 0.5D, 1D, and 1.5D. Emissions of CO2, CO, NOx, and O2 were also measured at the tail end of every flame. Flames were digitally photographed to support the point measurements with visual observations. Fifteen different stability points were examined, resulting from three bluff body variants and five co-flow velocities. The results show that a blue-colored ring flame is formed, especially at high co-flow velocities. The results also illustrate that, depending on the mixing in the bluff-body wake, the flames exhibit two combustion regimes, namely fuel jet-dominated and co-flow-dominated flames. In the jet-dominated regime, the flames become longer than in the co-flow-dominated regime. In the latter regime, emissions were largely reduced due to dilution by the excess air, which also surpasses their production. The final study examines the thermal characteristics of turbulent nonpremixed methane flames stabilized by four burner heads with the same exit diameter but different heights. The fuel flow rate was kept constant with an exit velocity of 15 m/s, while the co-flow air speed was increased from 0 to 7.6 m/s. The radial profiles of the temperature and flame visualizations were obtained to investigate the stability limits. The results showed that the air co-flow and the cone angle have essential roles in the stabilization of the flame: an increase in the cone angle and/or the co-flow speed deteriorated the stability of the flame, which eventually tended to blow off. As the cone angle was reduced, the flame attached to the bluff body. However, when the cone angle is very small, it has no effect on stability. The mixing and entrainment processes were described by the statistical moments of the temperature fluctuations.
It appears that the rise in temperature coincides with the intensified mixing and that the temperature becomes constant in the entrainment region.
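For illustration only (not the thesis analysis, and with synthetic data), the statistical moments used to characterize mixing and entrainment can be computed from temperature samples at a point as follows:

```python
import numpy as np
from scipy import stats

# Synthetic temperature samples at one measurement point, in K (invented data).
T = np.random.default_rng(0).normal(1400.0, 120.0, 5000)
Tp = T - T.mean()                  # fluctuation about the mean
variance = Tp.var()                # second moment: intensity of fluctuations
skewness = stats.skew(Tp)          # third moment: asymmetry, e.g. intermittent entrainment
kurtosis = stats.kurtosis(Tp)      # fourth moment: peakedness / intermittency
```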
-
Öge: Failure analysis of adhesively bonded CFRP joints (Graduate School, 2021-01-04) Daylan, Seda ; Mecitoğlu, Zahit ; 511171169 ; Aeronautical and Astronautical Engineering
Joints are critical areas where load transfer occurs and should be designed to provide maximum strength to the structure. The adhesive bonding process is widely used as a structural joining method in aerospace applications. There are many advantages of using adhesively bonded joints instead of classical mechanical fastening. Some of these can be listed as: joining of similar and dissimilar materials (metal-to-composite, metal-to-metal, metal-to-glass); a more uniform stress distribution, with a significant decrease in stress concentration, since there are no fastener holes in the structure; a considerable weight saving compared to mechanical fasteners; and better fatigue strength, again due to the absence of fastener holes. In addition to these positive aspects of using adhesives as a structural joining method, strength prediction is vital for an optimum design process in the initial sizing and critical design phases. The fact that adhesively bonded joints have various failure modes makes failure prediction complex. According to ASTM D5573, adhesively bonded composite joints have seven typical failure modes, but they can be listed under three main headings: adhesive failure, cohesive failure, and adherend failure. Adhesive failure occurs at the adherend-adhesive interface, and usually the adhesive remains on one adherend. These failures are generally attributed to a poor-quality bonding process, environmental factors, and insufficient surface preparation. The other kind of failure, adherend failure, occurs when the structural integrity of the adherend breaks down before the joint does, meaning that the strength of the joint area exceeds the strength of the adherend. Cohesive failure, on the other hand, is the type of failure expected after an ideal design and bonding process, where failure occurs within the adhesive itself; after cohesive failure, adhesive material is seen on both adherends. Structural joining with adhesives has been used in the aerospace industry since the early 1970s and 1980s. Since then, many analytical and numerical methods have been used to study the failure of adhesively bonded joints. Analytical studies of the failure of adhesively bonded single lap joints started with Volkersen in 1938. Volkersen did not include the eccentricity factor arising from the geometric nonlinearity of the single lap joint in his calculations. This factor was first taken into account by Goland and Reissner in 1944. Goland and Reissner made a remarkable study in analysing the adhesively bonded single lap joint, calculating the loads in the joint area and subsequently the stresses in the adhesive. Afterwards, analytical studies were continued by Hart-Smith, Allman, Bigwood and Crocombe, and others. In addition to analytical studies, the continuum mechanics, fracture mechanics, and damage mechanics approaches can be given as examples of numerical methods. The fracture mechanics approach used in this thesis examines the initial crack propagation in the adhesive under three different loading modes. Crack propagation occurs when the strain energy release rate under the applied load equals the adhesive's critical strain energy release rate.
After the strain energy release rate values of the three different modes are calculated separately, an evaluation is made according to the power-law failure criterion. There are many types of joint configuration in the literature, and the common ones can be summarized as single lap joints, double lap joints, stepped joints, etc. The single lap joint is the most widely used joint type in terms of ease of design and effectiveness. Within the scope of this thesis, the aim is to obtain a general solution applicable to all joints after first carrying out a study for the single lap joint geometry and validating its results experimentally. Studies have been carried out to predict the failure load of adhesively bonded CFRP joints. They include two main steps: finding the loads at the edges of the joint area, and evaluating the failure criterion by calculating the strain energy release rate from these loads. As the first step, the loads at the joint edges are found analytically and with the finite element method, respectively. For the analytical calculation, the Modified Goland and Reissner theory is used, which differs from the classical Goland and Reissner theory by taking the adhesive thickness into account. For the finite element calculation, the modelling technique first studied by Loss and Kedward and later described by Farhad Tahmasebi in his work published with NASA is used. The primary purpose of using this modelling technique is to simulate load transfer in overlap regions accurately for complex geometries that are challenging to calculate analytically. Especially in aerospace, since modelling large components with solid elements is not effective in terms of time and resources, a practical modelling technique that can produce results with high accuracy is needed. In the modelling technique used in the thesis, the adherends are modelled with shell elements, while the adhesive region is modelled between coincident nodes with three spring elements providing stiffnesses in the shear and peel directions, and the nodes of the adhesive elements are connected to the adherends with rigid elements. The modulus values of the adhesive material are used in the stiffness calculation of the spring elements. After obtaining the loads with the analytical and finite element methods, the second step, the calculation of the strain energy release rate values in the adhesive, is carried out with reference to two different studies. The first is the linear fracture mechanics formulation of Williams, which assumes that the energy required to advance an existing crack by a unit amount equals the difference between the external work performed and the internal strain energy, and that the laminate containing the crack behaves in a linearly elastic manner. Conventional beam theory is used for the 1D case, since the deformation resembles beam deformation. Using beam theory, Williams formulated the external work and the internal strain energy at the beginning and end of the crack and, from these two equations, derived energy release rate formulations in terms of bending moment and axial load. Mode separation is then carried out to calculate the energy release rates in the mode I and mode II directions separately, because the critical strain energy release rates in these two directions are different and need to be evaluated independently. The disadvantage of this study is that the transverse shear load is ignored, and calculations are made only with the bending moment and the longitudinal force.
Within the scope of the thesis, the strain energy release rate is calculated both with the loads found analytically and with the loads found by the finite element method. The other reference work is by Shahin and Taheri: using the overlap edge loads, first the stress in the adhesive and then the strain energy release rate is calculated. In this study, two assumptions are made: the first is that the shear and peel stresses do not vary through the thickness of the adhesive, and the other is that the stress in the adhesive is determined by the displacement difference of the adherends. As a result of the derivations, the stress distribution in the adhesive is found for the joint structure consisting of CFRP adherends and the adhesive. Then, following Irwin's virtual crack closure approach, a virtual crack is assumed, the integration is rewritten in the limit of vanishing crack length, and the displacements are expressed in terms of the stresses. Thus, the stress-energy relationship is obtained, and the strain energy release rates of the adhesive in the mode I and mode II directions are calculated. As a result of all these studies, mode I and mode II strain energy release rate calculations are made according to the two methods, with the loads found analytically and with the finite element method. The computed strain energy release rate values and the allowable critical strain energy release rate values are evaluated according to the power-law failure criterion, and failure load predictions are made. For specimens with different overlap lengths, experimental and predicted failure loads are compared, and inferences are made about the accuracy of the FEM modelling technique and the methods used in the SERR calculation. All these results are interpreted in detail, and it is found that the FEM modelling technique gives highly accurate results with Method 2 used in the SERR calculation. Finally, a bonding analysis tool has been developed in the Python programming language. This tool first detects the finite elements corresponding to the upper and lower adherends in the model from the NASTRAN .bdf file. It then reads the element loads from the .pch file, a NASTRAN output that contains the element loads, computes the SERR using Method 2, and calculates the reserve factor and the failure load, respectively. The tool has been prepared so that these calculations can be made quickly and accurately for the tens of elements in the overlap zone of complex and large models.
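For illustration only (this is not the thesis tool), a minimal sketch of the power-law evaluation step once the mode I and mode II strain energy release rates are available; the allowables, exponents, and SERR values below are invented:

```python
# Power-law failure criterion for an adhesive element:
#   (GI/GIc)^alpha + (GII/GIIc)^beta >= 1  ->  failure predicted.

def power_law_index(GI, GII, GIc, GIIc, alpha=1.0, beta=1.0):
    """Failure index; failure is predicted when the index reaches 1."""
    return (GI / GIc) ** alpha + (GII / GIIc) ** beta

GI, GII = 120.0, 350.0        # computed SERR for one element, J/m^2 (illustrative)
GIc, GIIc = 260.0, 1000.0     # critical SERR allowables, J/m^2 (illustrative)

index = power_law_index(GI, GII, GIc, GIIc)
# For alpha = beta = 1, G scales with the square of the applied load,
# so the load reserve factor follows as sqrt(1/index).
reserve_factor = (1.0 / index) ** 0.5
failure_load = reserve_factor * 10.0   # if the applied load were 10 kN (illustrative)
```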
-
Öge: Experimental investigation of leading edge suction parameter on massively separated flow (Graduate School, 2021-05-10) Aydın, Egemen ; Yıldırım Çetiner, Nuriye Leman Okşan ; 511171150 ; Aerospace Engineering ; Uçak ve Uzay Mühendisliği
The study aims to investigate and understand the application of the Leading Edge Suction Parameter (LESP) to massively separated flow. Force data are gathered from the downstream flat plate, and the flow structures are visualized with Digital Particle Image Velocimetry. The experiments are conducted in the free-surface, closed-circuit, large-scale water channel located in the Trisonic Laboratory of Istanbul Technical University's Faculty of Aeronautics and Astronautics. The channel velocity is 0.1 m/s, which corresponds to a Reynolds number of 10,000. During the experiments, the flat plate downstream of the gust generator (also a flat plate) is kept at a constant angle of attack, and the test cases are selected to show that the LESP parameter, which is derived from only one force component, works for different gust interactions with the flat plate. As already discussed in the literature, the critical LESP value depends only on the airfoil shape and the Reynolds number; for a flat plate at a Reynolds number of 10,000 it is reported in the literature as 0.05. We did not perform an experiment to find the critical LESP value, since our experiments were already conducted with a flat plate at a Reynolds number of 10,000. Combinations of different angles of attack and different gust impingements show that the LESP parameter works even in a highly unsteady gust environment. The flow structures around the airfoil leading edge behave as expected from LESP theory (leading-edge vortex separation and unification).
-
Öge: Development of single-frame methods aided Kalman-type filtering algorithms for attitude estimation of nano-satellites (Graduate School, 2021-08-20) Çilden Güler, Demet ; Hacızade, Cengiz ; Kaymaz, Zerefşan ; 511162104 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay Mühendisliği
There is a growing demand for highly accurate attitude estimation algorithms even for small satellites, e.g. nanosatellites, whose attitude sensors are typically cheap, simple, and light, because accurate attitude estimation is essential for controlling the orientation of a satellite or its instruments. Estimation is especially important for nanosatellites, whose sensors are usually low-cost and have higher noise levels than high-end sensors. The algorithms should also be able to run on systems with very restricted computing power. One of the aims of the thesis is to develop attitude estimation filters that improve the estimation accuracy while not increasing the computational burden too much. For this purpose, Kalman filter extensions are examined for attitude estimation with three-axis magnetometer and sun sensor measurements. In the first part of this research, the performance of the developed extensions of state-of-the-art attitude estimation filters is evaluated by taking into consideration both accuracy and computational complexity. Here, single-frame method-aided attitude estimation algorithms are introduced. As the single-frame method, singular value decomposition (SVD) is used to aid the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) for nanosatellite attitude estimation. The development of the system model of the filter and of the measurement models of the sun sensors and magnetometers, which are used to generate vector observations, is presented. The vector observations are used in the SVD for satellite attitude determination. In the presented method, the filtering stage inputs come from the SVD as linear measurements of the attitude together with their error covariance. At this step, UD factorization is also introduced for the EKF: it factorizes the attitude-angle error covariance while forming the measurements in order to obtain appropriate inputs for the filtering stage. The necessity of this sub-step, UD factorization of the measurement covariance, is discussed, and the estimation accuracy of the SVD-aided EKF with and without UD factorization is compared. Then, a case including an eclipse period is considered, and possible switching rules are discussed, especially for the eclipse period, when sun sensor measurements are not available. There are also other attitude estimation algorithms that cope well with nonlinear problems or work well with heavy-tailed noise. Therefore, different types of filters are also tested to see which kind of filter provides the largest improvement in estimation accuracy. Kalman-type filter extensions correspond to different ways of approximating the models: one filter takes the non-Gaussianity into account and updates the measurement noise covariance, whereas another minimizes the nonlinearity. Various other algorithms can be used to adapt the Kalman filter by scaling or updating its covariance. The filtering extensions are developed so that each of them mitigates a different type of error source for the baseline Kalman filter.
The distribution of the magnetometer noise is also investigated using sensor flight data in order to obtain a better model, and the filters are tested with the best-fitting measurement noise distribution. The responses of the filters are examined under different operating modes, such as the nominal mode, recovery from an incorrect initial state, and short- and long-term sensor faults. Another aspect of the thesis is the investigation of two major environmental disturbances acting on a spacecraft close enough to a planet: the external magnetic field and the planet's albedo. As magnetometers and sun sensors are widely used attitude sensors, external magnetic field and albedo models have an important role in the accuracy of attitude estimation. The magnetometers on a spacecraft measure the internal geomagnetic field sources caused by the planet's dynamo and crust as well as external sources such as the solar wind and the interplanetary magnetic field. However, models that include only the internal field are frequently used, and they may fall short when geomagnetic activity occurs, causing an error in the magnetic field model relative to the sensor measurements. Typically, the external field variations caused by the solar wind, magnetic storms, and magnetospheric substorms are treated as a bias on the measurements and removed by estimating them as augmented states; in this case, the measurement diverges from the real field after the elimination. An alternative approach is to include the external field in the model rather than treating it as an error source; in this way, the model can represent the magnetic field closer to reality. If a magnetic field model used for spacecraft attitude control does not consider the external fields, it may wrongly attribute the variations to sensor noise, while they are in fact caused by a physical phenomenon (e.g. a magnetospheric substorm event) and not by the sensor itself. Different geomagnetic field models are compared to study the errors resulting from the representation of the magnetic fields that affect the satellite attitude determination system. For this purpose, we used magnetometer data from low-Earth-orbiting spacecraft and the geomagnetic models IGRF and T89 to study the differences in the magnetic field components, the field strength, and the angle between the predicted and observed magnetic field vectors. The comparisons are made during geomagnetically active and quiet days to see the effects of geomagnetic storms and substorms on the predicted and observed magnetic fields and angles. The angles, in turn, are used to estimate the spacecraft attitude; hence, the differences between the models and the observations, as well as between the two models, become important for determining and reducing the model-related errors under different space environment conditions. It is shown that the models differ from the observations even during geomagnetically quiet times, but the associated errors increase further during geomagnetically active times. The T89 model is found to give predictions closer to the observations, especially during active times, and its errors are smaller than those of the IGRF model. The magnitude of the error in the angle under both environmental conditions is found to be less than 1 degree. The effects of magnetic disturbances resulting from geospace storms on the satellite attitudes estimated by the EKF are also examined.
The increasing levels of geomagnetic activity affect the geomagnetic field vectors predicted by the IGRF and T89 models. Various sensor combinations including magnetometer, gyroscope, and sun sensor are evaluated for magnetically quiet and active times. Errors are calculated for the estimated attitude angles, and the differences are discussed. This portion of the study emphasizes the importance of environmental factors for satellite attitude determination systems. Since sun sensors are frequently used both in planet-orbiting satellites and in interplanetary spacecraft missions in the solar system, a spacecraft close enough to the Sun and a planet is also considered. The spacecraft receives the direct solar flux, the reflected radiation, namely the albedo, and the emitted radiation of that planet. Albedo is the fraction of the incident sunlight reflected by the planet, and a spacecraft is exposed to it when it sees the sunlit part of the planet. Albedo values vary with seasonal, geographical, and diurnal changes as well as with cloud coverage. A sun sensor measures not only the light from the Sun but also the albedo of the planet, so a planet's albedo interference can cause anomalous sun sensor readings. This can be eliminated by making the sun sensors insensitive to albedo through filtering. However, most nanosatellites use coarse sun sensors, which are sensitive to albedo. Besides, some critical components and spacecraft systems, e.g. optical sensors and the thermal and power subsystems, have to take the reflected light into account, which makes albedo estimation a significant factor in their analysis as well. Therefore, in this research, the purpose is to estimate the planet's albedo using a simple model with less parameter dependency than existing albedo models, and to estimate the attitude using the corrected sun sensor measurements. A three-axis attitude estimation scheme is presented using a set of Earth-albedo-interfered coarse sun sensors (CSSs), which are inexpensive, small in size, and low in power consumption. For modeling the interference, a two-stage albedo estimation algorithm based on an autoregressive (AR) model is proposed. The algorithm does not require any data such as albedo coefficients, spacecraft position, sky condition, or ground coverage, other than the albedo measurements. The results are compared with different albedo models based on the reference conditions; the models are obtained using either a data-driven or an estimation approach. The proposed albedo estimate is fed to the CSS measurements for correction. The corrected CSS measurements are processed with various estimation techniques and different sensor configurations, and the relative performance of the attitude estimation schemes with different albedo models is examined. In summary, the effects of two main space environment disturbances on the satellite's attitude estimation are studied through a comprehensive analysis with different types of spacecraft trajectories under various environmental conditions. The performance analyses are expected to be of interest to the aerospace community, as they can be reproduced for spacecraft systems or aerial vehicles.
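For illustration only (not the thesis code), a minimal sketch of the single-frame SVD step that such a filter can be aided with, i.e. the SVD solution of Wahba's problem from weighted vector observations; the weights and vectors in the example are placeholders:

```python
import numpy as np

def svd_attitude(b, r, w):
    """Direction cosine matrix A that best maps reference vectors r into
    body-frame measurements b (magnetometer, sun sensor), with weights w."""
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag([1.0, 1.0, d]) @ Vt   # guarantees a proper rotation

# Example with two unit-vector observations (illustrative values).
b = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
r = [np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0])]
A = svd_attitude(b, r, w=[0.7, 0.3])
```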
-
Öge: Implementation of propulsion system integration losses to a supersonic military aircraft conceptual design (2021-10-07) Karaselvi, Emre ; Nikbay, Melike ; 511171151 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay Mühendisliği
Military aircraft technologies have played an essential role in ensuring combat superiority from the past to the present, which is why the air forces of many countries constantly require the development and procurement of advanced aircraft technologies. A fifth-generation fighter aircraft is expected to have significant technologies such as stealth, a low probability of radar interception, agility with supercruise performance, advanced avionics, and computer systems for command, control, and communications. Since the propulsion system is a significant component of an aircraft platform, we focus on propulsion system and airframe integration concepts, especially on addressing integration losses during the early conceptual design phase. The approach is intended to be suitable for multidisciplinary design optimization practices. Aircraft with jet engines were first employed during the Second World War, and the technology brought a significant change to aviation history. Jet-engine aircraft, which replaced propeller aircraft, had better maneuverability and flight performance. However, substituting a propeller engine with a jet engine required a new design approach. At first, engineers suggested that removing the propellers could simplify the integration of the propulsion system. However, with jet engines for fighter aircraft, new problems arose from the full integration of the propulsion system into the aircraft's fuselage. These problems can be divided into two areas: air inlet design and intake integration on one side, and nozzle/afterbody design and jet interaction with the tail on the other. The primary function of the air intake is to supply the necessary air to the engine with the least amount of loss. However, the vast flight envelope of fighter jets complicates the air intake design. Spillage drag, boundary layer formation, bypass air drag, and intake internal performance are the primary considerations for intake system integration. The design and integration of the nozzle is a challenging engineering problem due to the complex structure of the afterbody and the mixing of the jet and the free stream over the control surfaces. The primary considerations for the nozzle system are afterbody integration, boat-tail drag, jet flow interaction, engine spacing for twin-engine configurations, and nozzle base drag. Each new generation of aircraft design has become a more challenging engineering problem in order to meet increasing military performance and operational capabilities. This increase is due to higher Mach speeds without afterburner, increased acceleration capability, high maneuverability, and low observability. Trade-off analyses of numerous intake and nozzle designs should be carried out to meet all these needs. It is essential to calculate the losses caused by different intakes and nozzles during the conceptual design of the aircraft. Since changes made after the design has matured delay the design schedule, and changes needed in a matured design incur high costs, it is crucial to represent intake and nozzle losses accurately while constructing the conceptual design of a fighter aircraft. This design exploration process needs to be automated using numerical tools to investigate all possible alternative design solutions simultaneously and efficiently.
Therefore, spillage drag, bypass drag, and boundary layer losses due to the intake design, as well as boat-tail drag, nozzle base drag, and engine spacing losses due to the nozzle integration, are examined within the scope of this thesis. The study is divided into four main parts. The first section, "Introduction", summarizes previous studies on this topic and presents the classification of aircraft engines. Then the problems encountered while integrating the selected aircraft engine into the fighter aircraft are described under "Problem Statement"; the difficulties encountered in engine integration are divided into two zones, examined as the inlet system and the afterbody system. The second main topic, "Background on Propulsion", provides basic information about the propulsion system. Since the Brayton cycle is used in aviation engines, the working principle of aircraft engines is described under the Brayton Cycle subtitle. In engine design, station numbers are used to standardize the naming of engine zones and provide a common understanding, so the engine station numbers and regions are shown before the methodology is developed. The critical parameters used in engine performance comparisons, namely thrust, specific thrust, and specific fuel consumption, are described mathematically. The Aerodynamics subtitle outlines the essential mathematical formulas needed to understand the additional drag forces caused by propulsion system integration. Throughout the thesis, ideal gas and isentropic flow assumptions are made for the calculations. Definitions of the drag terms encountered in aircraft-engine integration are given, because accurate definitions prevent double accounting in the calculations. In the validation subtitle, the results calculated with the developed algorithms and assumptions are compared with previous studies by Boeing. For the comparison, a model representing the J79 engine is created with NPSS, the engine's installed performance is calculated, and the drag forces given by the definitions and algorithms are added to the model. The results converge to Boeing's data within a 5% error margin. After validation, the developed algorithms are tested on the fifth-generation fighter aircraft F-22 Raptor to see how the validated approach performs in the design of next-generation fighter aircraft. Engine design parameters are selected, and the model is developed according to the intake, nozzle, and afterbody design of the F-22. A model equivalent to the F119-PW-100 turbofan engine is built with NPSS using the design parameters of the engine. The additional drag forces calculated with the algorithms are included in the engine performance results, because the model produces uninstalled engine performance data. Thus, the net propulsive force is compared with the F-22 Raptor drag force given by Brandtl for 40,000 ft. The results show that the F-22 can fly at an altitude of 40,000 ft at Mach 1.6, meeting the aircraft requirements. In the thesis, a 2D intake assumption is used to model the losses due to inlet geometry. The effects of the intake capture area, throat area, wedge angle, and duct losses on engine performance are included. However, the modeling does not include losses due to 3D effects, such as those of a bump-type intake similar to that of the F-35; losses related to the 3D intake structure can be modeled with CFD and test results, and the thesis study can be extended accordingly. A circular nozzle is assumed, and the nozzle exit area, throat area, and maximum area are used for the modeling.
The movement of the nozzle blades is included in the model depending on the boat-tail angle and base area. The work of McDonald and P. Hughest is used as a reference to represent the 2D nozzle. The method described in this thesis is one way of accounting for installation effects in supersonic aircraft; additionally, the concept works for aircraft with conventional shock inlets or oblique shock inlets flying at speeds up to Mach 2.5. Implementing the equations in NPSS enables aircraft manufacturers to calculate the influence of installation effects on engine performance. The study presents a methodology for calculating the additional drag caused by engine-aircraft integration in the conceptual design phase of next-generation fighter aircraft. In this way, the losses caused by the propulsion system can be calculated accurately with the developed approach in projects where the aircraft and engine designs have not yet matured. If the presented drag definitions are not included during conceptual design, significant changes may be required at later stages as the aircraft design evolves, and making changes in an evolved design can bring enormous costs or extend the design schedule.
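For illustration only (this is not the thesis NPSS model), a minimal sketch of the installed-thrust bookkeeping that such an approach implies; all names and numbers are invented example values:

```python
def net_propulsive_force(uninstalled_thrust, spillage, bypass, boundary_layer,
                         boattail, nozzle_base, engine_spacing):
    """Installed (net propulsive) force = uninstalled thrust minus the
    inlet- and afterbody-related installation drag increments."""
    inlet_losses = spillage + bypass + boundary_layer
    afterbody_losses = boattail + nozzle_base + engine_spacing
    return uninstalled_thrust - inlet_losses - afterbody_losses

# Illustrative numbers in kN at one flight condition.
F_net = net_propulsive_force(uninstalled_thrust=95.0,
                             spillage=2.1, bypass=0.8, boundary_layer=0.6,
                             boattail=1.5, nozzle_base=0.9, engine_spacing=0.7)
```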
-
Öge: Experimental and numerical investigation of flapping airfoils interacting in various arrangements (Graduate School, 2021-12-10) Yılmaz, Saliha Banu ; Ünal, Mehmet Fevzi ; Şahin, Mehmet ; 521082102 ; Aeronautical and Astronautical Engineering
In the last decades, flapping wing aerodynamics has gained a great deal of interest. Inspired by insect flight, the utilization of multiple wings has become very popular in Micro Air Vehicle (MAV) and Micromechanical Flying Insect (MFI) design. Therefore, studies aiming to disclose the characteristics of the flow around interacting flapping airfoils have received particular attention. However, the majority of these studies were done using real, complex, three-dimensional parameters and geometries without making any assessment of the basic two-dimensional vortex dynamics. The aim of this study is to identify the baseline flow field characteristics in order to better understand flapping wing aerodynamics in nature and thus to provide a viewpoint for MAV and MFI design. The thesis contains numerical and experimental investigations of tandem (in-line) and biplane (side-by-side) arrangements of NACA0012 airfoils undergoing harmonic pure plunging motion, in terms of vortex dynamics, thrust, and propulsive efficiency. Additionally, the "deflected wake phenomenon", an interesting and challenging benchmark problem for the validation of numerical algorithms for moving boundary problems, is investigated for a single airfoil because its flow accommodates strong transient effects at low Reynolds numbers. Throughout the study, the effects of reduced frequency, non-dimensional plunge amplitude, Reynolds number, and the phase angle between airfoils are considered. The vorticity patterns are obtained both numerically and experimentally, whereas the force statistics and propulsive efficiencies are evaluated only in the numerical simulations. In the experimental phase of the study, Particle Image Velocimetry (PIV), a non-intrusive optical measurement technique, is utilized. Experiments are conducted in the large-scale water channel in the Trisonic Laboratory of Istanbul Technical University. The motion of the wings is provided by two servo motors and their gear systems. To obtain a two-dimensional flow around the wings, they are placed between two large endplates, one of which has a slot to permit the connection between the wings and the servo motors. The flow is seeded with silver-coated hollow glass spheres of 10 µm diameter and illuminated with a dual-cavity Nd:YAG laser. To visualize a larger flow area, two 16-bit CCD cameras are used together, either in-line or side by side, depending on the positions of the wings. Dantec Dynamics' DynamicStudio software is used for synchronization, image acquisition, image stitching, and cross-correlation. Synchronization between the servo motors and the data acquisition system is done via LabVIEW software. In post-processing, an in-house Matlab code is used for masking of the airfoils, the CleanVec and NFILVB software are utilized for vector range validation and filtering, and the NWENSAV software is used to obtain mean velocity fields. From the experimental velocity vector fields, two-dimensional vorticity fields are computed in order to understand the flow field characteristics. The experimental results are also used as a benchmark for the numerical studies.
In the numerical phase of the study, an arbitrary Lagrangian-Eulerian (ALE) formulation based on an unstructured side-centered finite volume method is utilized to solve the incompressible Navier-Stokes equations. The velocities are defined at the midpoint of each edge, while the pressure is defined at the element centroid. This arrangement of the primitive variables leads to a stable numerical scheme and does not require any ad hoc modifications to enhance pressure-velocity coupling; its most appealing feature is the availability of very efficient multigrid solvers. The mesh motion algorithm is based on an algebraic method using the minimum distance function from the airfoil surface owing to its numerical efficiency, although in some cases with large mesh deformation a Radial Basis Function (RBF) algorithm is used. To satisfy the Discrete Geometric Conservation Law (DGCL), the convective term in the momentum equation is modified to take the grid velocity into account. The numerical grid is created with the Gambit and Cubit software using quadrilateral elements. Grid and time independence are established by means of force statistics and vorticity fields. For direct comparison, Finite Time Lyapunov Exponent (FTLE) fields are calculated for some cases. FTLE fields characterize the fluid flow by measuring the amount of stretching between neighbouring particles, and the Lagrangian Coherent Structures (LCS) are computed as the locally maximal regions of the FTLE field. In addition, a particle tracking algorithm using a second-order Runge-Kutta method is developed, based on the integration of massless particle trajectories on moving unstructured quadrilateral elements. Validation is performed by comparing the numerical results with the experimental results and with corresponding cases in the literature; the results are substantially consistent internally and with the literature. Highly accurate numerical results, confirmed by spatial and temporal convergence studies, are obtained for the flow pattern around a NACA0012 airfoil undergoing pure harmonic plunging motion corresponding to the deflected wake phenomenon. The present study successfully reproduces details of the flow field not reported in the literature, such as fine vortical structures on the opposite side of the deflected wake and the vorticity structures close to the airfoil surface, which are dominated by complex interactions of the leading edge with the plunging airfoil. Moreover, the transient effects are highly persistent, and the calculations require durations two orders of magnitude longer than the heave period to reach the time-periodic state, which is prohibitively expensive for numerical simulations; such persistent transient effects have not been reported before in the literature. The three-dimensional simulation also confirms the highly persistent transient effects and indicates that the flow field is highly three-dimensional close to the airfoil leading edge; this three-dimensional structure of the flow field has not been noted in the literature for the parameters used herein. In the case of the tandem arrangement of airfoils, the experimental results agree well with the numerical solutions, and the major flow structures are substantially compatible in both the numerical and experimental results at a Reynolds number of 2,000.
For the considered parameters, during the upstroke and downstroke, co-rotating leading- and trailing-edge vortices merge at the trailing end of the forewing and interact with the downstream airfoil in either a constructive or a destructive way in terms of thrust production. The thrust production of the forewing is maximum when the airfoil moves from the topmost position to the mid position, for the considered reduced frequencies in all configurations. It is hard to specify the thrust-drag generation characteristics of the hindwing, since they depend not only on the plunge motion parameters but also on the interactions with the vortices shed from the forewing. For the considered phase angles of 0°, 90°, 180°, and 270°, in addition to the stationary hindwing case, the force statistics are strongly altered by the airfoil-wake interactions. In the case of the biplane arrangement at a phase angle of 180°, the experimental and numerical vorticity results are also quite comparable. For the parameters investigated, as the reduced frequency increases, the vorticity structures get larger at constant plunge amplitude; however, the vorticity structures do not change much beyond a certain reduced frequency. As the plunge amplitude increases, the magnitude of the vortices increases independently of the reduced frequency. Increasing the plunge amplitude increases the amount of fluid moving in the direction of motion within a constant period of time, commensurate with the strong suction between the airfoils as they move apart from each other. As a consequence of this suction force, energetic vortex pairs are formed, which helps in thrust augmentation. For thrust production, among the phase angles considered, i.e. 0°, 90°, 180°, and 270°, in addition to the stationary lower-wing case, the most efficient is φ = 180°. The effect of three-dimensionality is not observed at this phase angle for the considered parameters. Additionally, no remarkable difference is observed in the general flow structure when the Reynolds number is increased from 2,000 to 10,000.
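For illustration only (not the thesis code), a minimal sketch of second-order Runge-Kutta particle advection and the FTLE field obtained from the resulting flow map; a steady analytic velocity field stands in for the interpolated PIV/CFD data, and grid, time step, and horizon are invented:

```python
import numpy as np

def velocity(x, y):
    # Divergence-free analytic stand-in for the measured/computed velocity field.
    return -np.sin(np.pi * x) * np.cos(np.pi * y), np.cos(np.pi * x) * np.sin(np.pi * y)

def advect_rk2(x, y, dt, nsteps):
    """Midpoint (RK2) advection of massless particles."""
    for _ in range(nsteps):
        u1, v1 = velocity(x, y)
        u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
        x, y = x + dt * u2, y + dt * v2
    return x, y

# Flow map on a grid over the integration time T, then FTLE = ln(sqrt(lambda_max)) / T.
xs = np.linspace(0.0, 1.0, 101)
ys = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, ys)
T = 2.0
Xf, Yf = advect_rk2(X.copy(), Y.copy(), dt=0.01, nsteps=int(T / 0.01))

dphix_dy, dphix_dx = np.gradient(Xf, ys, xs)   # flow-map gradients
dphiy_dy, dphiy_dx = np.gradient(Yf, ys, xs)
C11 = dphix_dx**2 + dphiy_dx**2                 # Cauchy-Green tensor entries
C12 = dphix_dx * dphix_dy + dphiy_dx * dphiy_dy
C22 = dphix_dy**2 + dphiy_dy**2
lam_max = 0.5 * (C11 + C22 + np.sqrt((C11 - C22)**2 + 4.0 * C12**2))
ftle = np.log(np.sqrt(lam_max)) / T             # ridges of this field mark the LCS
```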
-
Öge: Numerical simulation of aircraft icing with an adaptive thermodynamic model considering ice accretion (Institute of Science and Technology, 2022) Siyahi, Hadi ; Baytaş, A. Cihat ; 754795 ; Department of Aeronautics and Astronautics Engineering
The icing phenomenon is one of the most undesirable events for aircraft, and it can be viewed from different perspectives. The safety of flight is undoubtedly the biggest concern of designers nowadays. Icing causes the malfunctioning or even failure of pressure and speed measurement devices and consequently makes the flight harder to control. Icing on the rudder, ailerons, and elevators can even make the aircraft uncontrollable. During landing, icing on the cockpit window, along with possible failures of the landing gear, may cause major catastrophes. Besides, detached ice particles can cause serious mechanical damage to the aircraft when they collide with the body or sometimes with internal parts such as compressor blades. The other perspective is the degradation of aircraft performance and the consequent increase in fuel consumption due to icing. Icing affects the aerodynamics of an airplane in an undesirable way and puts the aircraft in a situation far from what it was designed for. It is therefore necessary to study aircraft icing to provide safer and more efficient flight. Since icing is of great importance for aircraft, a precise analysis of this phenomenon should be performed. Tests in the wind tunnel and during flight are very expensive; numerical-computational simulations, on the other hand, can be a cost-effective way of studying aircraft icing. In the present study, a numerical-computational simulation of aircraft icing has been performed by writing a computer code in FORTRAN. The computational simulation of aircraft icing is a modular procedure consisting of grid generation, air solver, droplet solver, and ice accretion modules. First, the computational domain is generated via elliptic grid generation. Differential methods based on the solution of elliptic equations are commonly used for generating meshes for geometries with arbitrary boundaries; elliptic equations are also utilized for unstructured grids. The most popular elliptic equation is the Poisson equation, which makes it possible to satisfy smoothness, fine spacing, and orthogonality on the body surface by means of control terms. Then, the velocity and pressure distributions of the airflow around the wing are found, and the convective heat transfer coefficient on the body is calculated. The inviscid flow model has been selected in this simulation because it needs less effort and time in comparison with Navier-Stokes codes; a two-dimensional, steady-state, inviscid, incompressible, irrotational (potential) flow model has been applied for solving the airflow.
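For reference, the Poisson grid-generation system mentioned above is commonly written in the following transformed form (a standard textbook form, not quoted from the thesis), where P and Q are the control terms that cluster grid lines and enforce near-orthogonality at the body surface:

```latex
\begin{aligned}
\alpha\, x_{\xi\xi} - 2\beta\, x_{\xi\eta} + \gamma\, x_{\eta\eta} &= -J^{2}\left(P\, x_{\xi} + Q\, x_{\eta}\right),\\
\alpha\, y_{\xi\xi} - 2\beta\, y_{\xi\eta} + \gamma\, y_{\eta\eta} &= -J^{2}\left(P\, y_{\xi} + Q\, y_{\eta}\right),\\
\alpha = x_{\eta}^{2} + y_{\eta}^{2},\quad
\beta &= x_{\xi} x_{\eta} + y_{\xi} y_{\eta},\quad
\gamma = x_{\xi}^{2} + y_{\xi}^{2},\quad
J = x_{\xi} y_{\eta} - x_{\eta} y_{\xi}.
\end{aligned}
```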
-
ÖgeA high-order finite-volume solver for supersonic flows(Lisansüstü Eğitim Enstitüsü, 2022) Spinelli, Gregoria Gerardo ; Çelik, Bayram ; 721738 ; Uçak ve Uzay Mühendisliği Nowadays, Computational Fluid Dynamics (CFD) is a powerful engineering tool used in various industries such as automotive, aerospace and nuclear power. More than ever, the growing computational power of modern computer systems allows for realistic modeling of the physics. Most open-source codes, however, offer only a second-order approximation of the physical model in both space and time. The goal of this thesis is to extend this order of approximation to what is defined as high-order discretization in both space and time by developing a two-dimensional finite-volume solver. This is especially challenging when modeling supersonic flows, which is the focus of this study. To tackle this task, we employed the numerical methods described in the following. Curvilinear meshes are utilized, since an accurate representation of the domain and its boundaries, i.e. the object under investigation, is required. High-order approximation in space is guaranteed by a Central Essentially Non-Oscillatory (CENO) scheme, which combines a piece-wise linear reconstruction and a k-exact reconstruction in regions with and without discontinuities, respectively. The use of multi-stage methods such as Runge-Kutta methods allows for a high-order approximation in time. The algorithm used to evaluate the convective fluxes is based on the family of Advection Upstream Splitting Method (AUSM) schemes, which use an upwind reconstruction; a central stencil is used to evaluate the viscous fluxes instead. When high-order schemes are used, discontinuities induce numerical problems, such as oscillations in the solution. To avoid these oscillations, the CENO scheme reverts to a piece-wise linear reconstruction in regions with discontinuities. However, this introduces a loss of accuracy. The CENO algorithm is capable of confining this loss of accuracy to the cells closest to the discontinuity. In order to reduce this accuracy loss, Adaptive Mesh Refinement (AMR) is used. This algorithm refines the mesh near the discontinuity, confining the loss of accuracy to a smaller portion of the domain. In this study, a combination of the CENO scheme and the AUSM schemes is used to model several problems in different compressibility regimes, with a focus on supersonic flows. The scope of this thesis is to analyze the capabilities and the limitations of the proposed combination. In comparison to traditional implementations found in the literature, our implementation does not impose a limit on the refinement ratio of neighboring cells when utilizing AMR. Due to the high computational expense of a high-order scheme in conjunction with AMR, our solver benefits from shared-memory parallelization. Another advantage over traditional implementations is that our solver requires one layer of ghost cells less for the transfer of information between adjacent blocks. The validation of the solver is performed in several steps. We assess the order of accuracy of the CENO scheme by interpolating a smooth function, in this case the spherical cosine function. Then we validate the algorithm that computes the inviscid fluxes by modeling a Sod shock tube. Finally, the boundary conditions (BCs) for the inviscid solver and its order of accuracy are validated by modeling a vortex convected in a supersonic uniform flow. The curvilinear mesh is validated by modeling the flow around a NACA0012 airfoil.
The computation of the viscous fluxes is validated by modeling a viscous boundary layer developing on a flat plate. The BCs for viscous flows and the curvilinear implementation are validated by modeling the flow around a cylinder and a NACA0012 airfoil. The AUSM schemes are tested for shock robustness by modeling an inviscid hypersonic cylinder at a Mach number of 20 and a viscous hypersonic cylinder at a Mach number of 8.03. Then, we validate our AMR implementation by modeling a two-dimensional Riemann problem. All validation results agree well with either numerical or experimental results available in the literature. The performance of the code, in terms of the computational time required by the different orders of approximation and the parallel efficiency, is assessed; for the former a supersonic vortex convection serves as the example, while the latter uses a two-dimensional Riemann problem. We obtained a linear speed-up with up to 12 cores, and the highest speed-up value obtained is 20 with 32 cores. Furthermore, the solver is used to model three different supersonic applications: the interaction between a vortex and a normal shock, the double Mach reflection, and the diffraction of a shock over a wedge. The first application is a strong interaction between a vortex and a steady shock wave for two different vortex strengths. In both cases our results match very closely the ones obtained by a Weighted Essentially Non-Oscillatory (WENO) scheme documented in the literature; both schemes approximate the solution with the same order of accuracy in time and space. The second application, the double Mach reflection, is a challenging problem for high-order solvers because the shock and its reflections interact strongly. For this application, all AUSM schemes under investigation fail to obtain a stable result; the main form of instability encountered is the carbuncle phenomenon. Our implementation overcomes this problem by combining the AUSM+M scheme with the speed-of-sound formulation of the AUSM+up scheme. This combination is capable of modeling the problem without instabilities. Our results are in agreement with those obtained with a WENO scheme; the reference solutions and our results use the same order of accuracy in time and space. Finally, the third example is the diffraction of a shock past a delta wedge. In this configuration the shock is diffracted and forms three main structures: two triple points, a vortex at the trailing edge of the wedge, and a reflected shock traveling upwards. Our results agree well with both numerical and experimental results available in the literature. Here, the formation of a vortexlet is observed along the vortex slip-line. This vorticity generation under inviscid flow conditions is studied, and we conclude that the stretching of vorticity due to compressibility is the reason. The same formation is observed when the angle of attack of the wedge is increased in the range of 0° to 30°. In general, the AUSM+up2 scheme performed best in terms of accuracy for all problems tested here. However, for configurations in which the carbuncle phenomenon may appear, the combination of the AUSM+M scheme with the speed-of-sound formula of the AUSM+up scheme is preferable for stability reasons. During our computations, we observe a small undershoot right behind shocks on curved boundaries. This is attributable to the curvilinear approximation of the boundaries, which is only second-order accurate.
Our experience shows that the smoothness indicator formula, in its original version, fails to label uniform flow regions as smooth. We solve this issue by introducing a threshold for the numerator of the formula: when the numerator is lower than the threshold, the cell is labeled as smooth. A threshold value higher than 10^-7 might force the solver to apply the high-order reconstruction across shocks, so that the piece-wise linear reconstruction, which prevents oscillations, is no longer applied. We observe that the CENO scheme might cause unphysical states in both the inviscid and the viscous regime. By reconstructing the conservative variables instead of the primitive ones, we are able to prevent unphysical states for inviscid flows. For viscous flows, temporarily reverting to a first-order reconstruction in the cells where the temperature is computed as negative prevents unphysical states. This technique is required only during the first iterations of the solver, when the flow is started impulsively. In this study, the CENO, AUSM and AMR methods are combined and applied successfully to supersonic problems. When modeling supersonic flow with high-order accuracy in space, the combination of the AUSM schemes and the CENO scheme should be preferred. While the CENO scheme is simpler than the WENO scheme used for comparison, we show that it yields results of comparable accuracy. Although it was beyond the scope of this study, the AUSM family can be extended to real-gas modeling, which constitutes another advantage of this approach.
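The thresholding fix for the smoothness indicator described above can be summarized schematically as follows; the actual CENO indicator formula is not reproduced here, and the pass/fail cutoff is treated as a solver-supplied value.

```python
NUMERATOR_THRESHOLD = 1e-7   # value suggested in the text as the upper limit

def is_smooth(numerator, smoothness_value, cutoff):
    """Return True if the cell may use the k-exact (high-order) reconstruction,
    False if it should revert to the limited piece-wise linear reconstruction.
    smoothness_value is the CENO indicator computed elsewhere (not shown) and
    cutoff is the solver's pass/fail level for it; only the thresholding fix
    for near-uniform regions is sketched here."""
    if numerator < NUMERATOR_THRESHOLD:
        # Nearly uniform flow: the numerator is essentially zero, which makes
        # the indicator ill-conditioned, so the cell is labeled smooth directly.
        return True
    return smoothness_value > cutoff
```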
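For the high-order time integration mentioned earlier in this abstract, a classical four-stage Runge-Kutta step applied to a semi-discrete finite-volume residual looks roughly like the sketch below; whether the thesis uses this particular Runge-Kutta variant is not stated, so the example is generic.

```python
import numpy as np

def rk4_step(u, dt, residual):
    """One classical fourth-order Runge-Kutta step for du/dt = residual(u).
    In a finite-volume solver, u would hold the cell-averaged conserved
    variables and residual() would gather the face fluxes; both are
    placeholders here."""
    k1 = residual(u)
    k2 = residual(u + 0.5 * dt * k1)
    k3 = residual(u + 0.5 * dt * k2)
    k4 = residual(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy check on du/dt = -u, whose exact solution at t = 1 is exp(-1).
u = np.array([1.0])
for _ in range(100):
    u = rk4_step(u, 0.01, lambda v: -v)
print(u[0], np.exp(-1.0))
```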
-
ÖgeA modified anfis system for aerial vehicles control(Lisansüstü Eğitim Enstitüsü, 2022) Öztürk, Muhammet ; Özkol, İbrahim ; 713564 ; Uçak ve Uzay Mühendisliği This thesis presents fuzzy logic systems (FLS) and their control applications for aerial vehicles. In this context, first type-1 and then type-2 fuzzy logic systems are examined. Adaptive Neuro-Fuzzy Inference System (ANFIS) training models are examined, and new type-1 and type-2 models are developed and tested. The new approaches are applied to control problems such as quadrotor control. A fuzzy logic system is a human-like reasoning structure that does not define any case crisply as 1 or 0; instead, it describes the case with membership functions. In the literature, there are many fuzzy logic applications, such as data processing, estimation, control and modeling. Different Fuzzy Inference Systems (FIS) have been proposed, such as the Sugeno, Mamdani, Tsukamoto and Şen systems. The Sugeno and Mamdani FIS are the most widely used. Mamdani antecedent and consequent parameters are composed of membership functions; because of that, the Mamdani FIS needs a defuzzification step to produce a crisp output. Sugeno antecedent parameters are membership functions, but the consequent parameters are linear or constant, so the Sugeno FIS does not need a defuzzification step. The Sugeno FIS requires less computational load and is simpler than the Mamdani FIS, and is therefore more widely used. Training the Mamdani parameters is also more complicated and requires more calculation than for the Sugeno FIS. The Mamdani ANFIS approaches in the literature are examined and a new Mamdani ANFIS model (MANFIS) is proposed. The training performance of the proposed MANFIS model is tested on a nonlinear function, and its control performance is tested on a DC motor model. In addition, the Şen FIS, which was used for the estimation of sunshine duration in 1998, is examined. The antecedent and consequent parameters of the Şen FIS are membership functions, as in the Mamdani FIS, so it needs a defuzzification step. However, because of the structure of the Şen defuzzification, the Şen FIS can be evaluated with less computational load, and therefore a Şen ANFIS training model has been created. These three approaches are trained on a nonlinear function and used for online control. In this study, the neuro-fuzzy controller is used as an online controller. Neuro-fuzzy controllers consist of the simultaneous operation of two functions, a fuzzy logic function and an ANFIS function. The fuzzy logic function generates the control signal according to the controller inputs, while the ANFIS function trains the parameters of the fuzzy logic function. Neuro-fuzzy controllers are intelligent, model-independent controllers that constantly adapt their parameters; for this reason, the controllers' parameter values change continuously according to changes in the system. There are studies on different neuro-fuzzy control systems in the literature. Each approach is tested on a DC motor model, a single-input single-output system, and the advantages and performance of the neuro-fuzzy controllers are examined. In this way, the approaches in the literature and the approaches added within the scope of the thesis are compared with each other. Selected neuro-fuzzy controllers are then used for quadrotor control. Quadrotors have a two-stage controller structure.
In the first stage, position control is performed and the position control outputs are converted into commanded angles. In the second stage, attitude control is performed on the calculated angle values. In this thesis, the neuro-fuzzy controller is shown to work very well in single-layer control structures, i.e. without any overshoot and with a very short settling time. However, the quadrotor control results show that the neuro-fuzzy controller cannot deliver the desired performance in the two-layered control structure. Therefore, the feedback error learning control system, in which the fuzzy controller works together with conventional controllers, is examined. Fundamentally, the feedback error learning structure contains an inverse dynamic model in parallel with a classical controller; the inverse dynamic model aims to increase performance by contributing to the classical controller signal. In the literature, there are many papers on the structure of feedback error learning control, and different approaches have been proposed. In the structure used in this work, the fuzzy logic parameters are trained using ANFIS with the error as input. The fuzzy logic control signal obtained as a result of this training is added to the conventional controller signal. This approach has been tested on models such as a DC motor and a quadrotor, and it is seen that feedback error learning control with ANFIS increases the control performance. The antecedent and consequent parameters of type-1 fuzzy logic systems consist of crisp membership functions. Type-2 FLS have been proposed to better represent uncertainties; accordingly, type-2 membership functions are designed to include uncertainty. The type-2 FLS is operationally difficult because of these uncertainties. In order to simplify type-2 FLS operations, the interval type-2 FLS has been proposed in the literature as a special case of the generalized type-2 FLS. Interval type-2 membership functions are designed as a two-dimensional projection of general type-2 membership functions and represent the area between two type-1 membership functions. The area between these two type-1 membership functions is called the Footprint of Uncertainty (FOU). This uncertainty also appears in the weight values (firing strengths) obtained from the antecedent membership functions. The consequent membership functions are also type-2, and it is not possible to perform the defuzzification step directly because of the uncertainty. Therefore, type-reduction methods have been developed to reduce the type-2 FLS to a type-1 FLS. Type-reduction methods try to find the highest and lowest values of the fuzzy logic model output; for this purpose, a switch point must be determined among the weights obtained from the antecedent membership functions. Classical type-reduction methods find these switch points by iteration, which causes a high computational load, so many different methods have been proposed to minimize it. In 2018, an iteration-free method called the Direct Approach (DA) was proposed, which performs the type reduction faster than the iterative methods. In the literature, studies on training the parameters of the type-2 FLS, using for example neural networks and genetic algorithms, are still ongoing, and these studies are also applied to interval type-2 fuzzy logic control systems. Interval type-2 ANFIS structures have been proposed in the literature, but they are not effective because of the uncertainties of the interval type-2 membership functions.
FLS parameters used for ANFIS training should not contain uncertainties, whereas the type-2 FLS inherently contains uncertainty. For this reason, the Karnik-Mendel algorithm, one of the type-reduction methods, is modified so that ANFIS can be applied to the interval type-2 FLS. The modified Karnik-Mendel algorithm gives the same results as the original Karnik-Mendel algorithm while also providing exact parameter values that can be used in ANFIS. Thus, ANFIS training of the interval type-2 FLS has been developed successfully and has been used for system control.
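As a minimal illustration of the Sugeno (TSK) inference discussed earlier in this abstract, the sketch below evaluates a single-input, first-order Sugeno system with Gaussian antecedents; the rule base and all parameter values are arbitrary examples, not the models trained in the thesis.

```python
import numpy as np

def gauss(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno_eval(x, rules):
    """First-order Sugeno (TSK) inference for a single input x.
    Each rule is ((center, sigma), (p, q)): a Gaussian antecedent and a linear
    consequent y = p*x + q. The output is the firing-strength-weighted average
    of the rule consequents, so no separate defuzzification step is needed."""
    w = np.array([gauss(x, c, s) for (c, s), _ in rules])
    y = np.array([p * x + q for _, (p, q) in rules])
    return float(np.sum(w * y) / np.sum(w))

# Illustrative three-rule base (parameters are arbitrary).
rules = [((-1.0, 0.8), (0.5, -0.2)),
         (( 0.0, 0.8), (1.0,  0.0)),
         (( 1.0, 0.8), (0.5,  0.2))]
print(sugeno_eval(0.3, rules))
```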
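The iterative Karnik-Mendel type reduction that the thesis modifies can be sketched as below for the right endpoint of the type-reduced set; the left endpoint is obtained analogously with the roles of the lower and upper firing strengths swapped. This is the standard algorithm only, not the modified version proposed in the thesis.

```python
import numpy as np

def km_right_endpoint(y, f_lo, f_up, tol=1e-9, max_iter=100):
    """Standard iterative Karnik-Mendel computation of the right endpoint y_r
    for an interval type-2 FLS: y holds the rule consequent values and
    [f_lo, f_up] the firing intervals. Lower firing strengths are used for
    consequents below the switch point and upper firing strengths above it."""
    order = np.argsort(y)
    y, f_lo, f_up = y[order], f_lo[order], f_up[order]
    f = 0.5 * (f_lo + f_up)                   # initial guess for the weights
    yr = np.dot(f, y) / np.sum(f)
    for _ in range(max_iter):
        R = np.clip(np.searchsorted(y, yr) - 1, 0, len(y) - 2)   # switch point
        f = np.concatenate([f_lo[:R + 1], f_up[R + 1:]])
        yr_new = np.dot(f, y) / np.sum(f)
        if abs(yr_new - yr) < tol:
            return yr_new
        yr = yr_new
    return yr

# Toy example with three rules (values are arbitrary).
y = np.array([-1.0, 0.0, 2.0])
f_lo = np.array([0.2, 0.5, 0.1])
f_up = np.array([0.6, 0.9, 0.4])
print(km_right_endpoint(y, f_lo, f_up))
```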
-
ÖgeNumerical and experimental study of fluid structure interaction in a reciprocating piston compressor(Graduate School, 2022-01-14) Coşkun, Umut Can ; Acar, Hayri ; Güneş, Hasan ; 511132113 ; Aeronautics and Astronautics Engineering Comprising household refrigerators, cold storage, cold-chain logistics, industrial freezers, air conditioners, cryogenics and heat pumps, the refrigeration industry is a vital part of many sectors such as food, health care, air conditioning, sports, leisure, and the production of plastics and chemicals, along with electronic data processing centers and scientific research facilities, which cannot operate without refrigeration. Roughly 5 billion refrigeration systems are in operation worldwide; they consume 20% of the electricity used globally, are responsible for 7.8% of the world's GHG emissions, account for 500 billion USD in annual equipment sales, and employ 15 million people. Around 37% of the global warming impact caused by refrigeration is due to the direct emission of fluorinated refrigerants (CFCs, HCFCs and HFCs), while 63% is due to indirect emissions caused by the electricity generation required for refrigeration. Both the economic goals of making refrigeration units cheaper and more durable, and the environmental concerns of making these units more efficient and less hazardous, require meticulous research on these refrigeration units. Approximately 40% of refrigeration units are domestic refrigeration systems, in which mostly hermetic, reciprocating-type compressors are used. The design and improvement of such compressors is a multidisciplinary subject and requires a deep understanding of the heat and momentum transfer between the refrigerant and the solid components of the compressor, which can only be gained through scientific investigation using experimental and numerical techniques. In this thesis study, considering the advantages of numerical studies, a multi-physics numerical model of the flow through the gas line of a household, hermetically sealed, reciprocating piston compressor and of the fluid-structure interaction around the valve reeds, including the contact between deformable parts, was developed. Given the complexity of the model, the problem was divided into several steps, and at each step the numerical results were validated with experiments. In the first chapter of this thesis, the motivation behind the study is discussed, a theoretical background on refrigeration, compressors and fluid-structure interaction is given, and a comprehensive literature survey is summarized to position the thesis within the academic literature and to express its novelty. In the second chapter, the experimental studies conducted throughout the thesis are presented. The experimental studies are divided into two parts. In the first part, the valve reed dynamics are investigated experimentally outside the compressor under multiple test conditions. A test rig was built for this purpose, and the displacement of the valve reed under a constant point load, in free oscillation, and during the impact of the valve reed on the valve plate from a pre-deformed shape were measured in order to validate the numerical work. In the second part, the compressor specifications such as cooling capacity, compression work and average refrigerant mass flow rate, along with surface temperatures and instantaneous pressure variations at several locations inside the compressor, were measured in a calorimeter setup to provide boundary conditions and validation for the numerical analyses. The numerical work of the thesis is explained in the third chapter.
Modeling the whole compressor gas line between the compressor inlet and outlet, including the strongly coupled interaction between the refrigerant and deformable solid parts such as the valve reeds, is too complex to attempt in a single step. Therefore, the numerical problem was divided into seven smaller numerical problems that were investigated consecutively. At each consecutive step, the problems were isolated, identified and solved, and the results were validated. The similarity of each step to the final model increases along with its complexity as a natural consequence. The numerical studies also briefly cover the advantages and disadvantages of using an open-source versus a commercial multi-physics solver, for which the OpenFOAM and Ansys Workbench software packages were utilized, respectively. After the simplified steps of the numerical model were completed, the whole gas line of a compressor produced by Arçelik was modeled. The numerical results were compared against the experimentally obtained data, and a good agreement was achieved between them. The developed method was further used for a parametric investigation of the compressor design to show the capabilities and benefits of the numerical model. Finally, the results of the whole thesis study, the experience gained throughout the thesis work and the planned future work are discussed in the final chapter.
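As a deliberately crude, illustrative stand-in for the valve reed dynamics measured in the second chapter, the sketch below integrates a single-degree-of-freedom spring-mass-damper reed driven by an assumed pressure difference; the thesis itself resolves the reed with a fully coupled fluid-structure interaction model, and every number below is a placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Much-simplified 1-DOF lumped model of a suction valve reed: a spring-mass-
# damper driven by the pressure difference across the port. All values assumed.
m = 2.0e-4        # effective reed mass [kg]
k = 150.0         # effective stiffness [N/m]
zeta = 0.02       # damping ratio
c = 2.0 * zeta * np.sqrt(k * m)
A_eff = 3.0e-5    # effective port area [m^2]

def dp(t):
    """Assumed pressure difference across the reed [Pa]: only positive half."""
    return 5.0e3 * max(np.sin(2.0 * np.pi * 50.0 * t), 0.0)

def rhs(t, s):
    x, v = s
    return [v, (dp(t) * A_eff - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 0.1), [0.0, 0.0], max_step=1e-4)
print("peak reed lift [mm]:", 1e3 * sol.y[0].max())
```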
-
ÖgeA study on static and dynamic buckling analysis of thin walled composite cylindrical shells(Graduate School, 2022-01-24) Özgen, Cansu ; Doğan, Vedat Ziya ; 511171148 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay Mühendisliği Thin-walled structures are widely used in many industries; examples include aircraft, spacecraft and rockets. The reason for the use of thin-walled structures is their high strength-to-weight ratio. For a cylinder to be defined as thin-walled, the ratio of radius to thickness must be greater than 20, and one of the problems encountered in the use of such structures is buckling. Buckling can be defined as a state of instability in a structure under compressive loads. This state of instability can be seen in the load-displacement graph as the curve following two different paths; the possible behaviors are snap-through or bifurcation. The compressive loading that causes buckling may be an axial load, a torsional load, a bending load or external pressure, and buckling may also occur due to temperature change. Within the scope of this thesis, the buckling behavior of thin-walled cylinders under axial compression was examined. A cylinder under axial load exhibits some displacement; when the applied load reaches a critical level, the structure moves from one state of equilibrium to another. After some point, the structure exhibits large displacements and loses stiffness. The load that the structure can carry decreases considerably, but the structure continues to carry load; the behavior of the structure after this point is called post-buckling behavior. The critical load level for the structure can be determined by using the finite element method. Linear eigenvalue analysis can be performed to determine the static buckling load. However, it should be noted that eigenvalue-eigenvector analysis can only be used to make an approximate estimate of the buckling load and to input the resulting buckling shape into nonlinear analyses as a form of imperfection. It can nevertheless be preferred for parameter variations and comparisons, since it is cheaper than other types of analysis. Since the buckling load is highly affected by imperfections, nonlinear methods with geometric imperfection should be used to estimate the buckling load more precisely. It is not possible to define a geometric imperfection in a linear eigenvalue analysis; therefore, a different type of analysis must be selected in order to add the imperfection. For example, an analysis model which includes imperfection can be established with the Riks method, a nonlinear static analysis type. Unlike the Newton-Raphson method, the Riks method is capable of tracking back-turning curves and is therefore suitable for buckling analysis. In a Riks analysis, in contrast to linear eigenvalue analysis, it is recommended to add an imperfection: with an imperfection the problem becomes a limit-load problem rather than a bifurcation problem, and the sharp turns in the load-displacement curve that could cause divergence in the analysis are avoided. Another nonlinear method for static phenomena is the so-called quasi-static analysis, which uses a dynamic solver. The important point here is that the inertial effects should be small enough to be neglected; for this purpose, the kinetic energy and the internal energy should be compared at the end of the analysis, and it should be ensured that the kinetic energy remains at negligible levels compared with the internal energy.
Also, if the event is solved over its actual duration, the analysis becomes quite expensive; therefore, the time must be scaled. In order to scale the time correctly, a frequency analysis can be performed first, and the analysis time can be chosen to be longer than the period corresponding to the first natural frequency. For the three analysis methods mentioned in this study, validation studies were carried out with examples from the literature. Since each type of analysis gave consistent results, the effect of the parameters on the static buckling load was then examined using the linear eigenvalue analysis method, which is sufficient for this purpose and is the cheaper option for comparison studies. While displacement-controlled analyses were carried out for the static buckling analyses, load-controlled analyses were performed for the determination of the dynamic buckling load. The results of these analyses were evaluated according to different dynamic buckling criteria. Some of the dynamic buckling criteria are the Volmir criterion, the Budiansky-Roth criterion and the Hoff-Bruce criterion. When the Budiansky-Roth criterion is used, an estimated buckling load is applied to the structure and the displacement-time history is plotted; if a major change in displacement is observed, the structure can be assumed to have buckled dynamically. For the Hoff-Bruce criterion, the velocity-displacement graph is plotted; if this graph is not concentrated in a single region but scattered, the structure is considered to have moved into the unstable region. As in the static buckling analyses, the dynamic buckling analyses were first validated with a sample study from the literature. After establishing the analysis methods, numerical studies were carried out on the effect of several parameters on the buckling load. First, the effect of the stacking sequence of the composite layers on the buckling load was examined; a comprehensive study was carried out to determine both in which layer an angle change has the greatest effect and which angle gives the highest buckling load. In addition, several angle combinations were obtained in accordance with the stacking rules found in the literature, and for those stacking sequences the buckling loads were calculated both by finite element analysis and analytically. Comparisons were also made between different materials: the buckling load was calculated both for cylinders of the same thickness with different masses and for cylinders of the same mass with different thicknesses. The highest load for cylinders of the same mass was obtained for the uniform composite. Although the highest buckling load in the analyses of cylinders of the same thickness was obtained for the steel material, in terms of the ratio of buckling load to mass the highest value was obtained for the composite material. In addition, the effects of the length-to-diameter ratio and of the thickness were examined: as the length-to-diameter ratio increases the buckling load decreases, and the buckling load increases with the square of the thickness. Besides these effects, the loading duration and the shape of the loading profile are also known to affect the results in dynamic buckling analysis. Furthermore, the critical buckling load is affected by imperfections in the structure, which usually arise during its production.
How sensitive the structures are to imperfection may vary depending on different parameters. Imperfections can be divided into three groups: geometric, material and loading imperfections. Cylinders under axial load are particularly affected by geometric imperfection, which can be defined as how far the structure deviates from a perfect cylinder. The amount of deviation can be determined by different measurement methods. Although it is not possible to measure the imperfection of every structure, studies in the literature give an idea of how much imperfection can be expected. By including the measured imperfections in the buckling load calculations, both the change in the buckling load of the measured cylinders and the effect of the imperfection on the buckling load can be quantified. In cases where the amount of imperfection cannot be measured, an eigenvector imperfection obtained from a linear buckling analysis can be included in the finite element model, and the critical buckling load for the imperfect structure can be calculated using nonlinear analysis methods. In this study, it was investigated how the imperfection sensitivity changes under both static and dynamic loading with different parameters: the length-to-diameter ratio, the stacking sequence of the composite layers and the shape of the added imperfection. The most important result of the imperfection-sensitivity study is that the effect of imperfection on the buckling load is quite large: even a geometric imperfection equal to the wall thickness can cause the buckling load to drop by up to half.
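A schematic version of the Budiansky-Roth type of check described in this abstract might look like the sketch below: the dynamic load level is swept and the first level at which the peak transient displacement jumps sharply is flagged. The jump factor is an assumed heuristic; in practice the displacement-time histories are inspected as explained above.

```python
import numpy as np

def budiansky_roth_load(loads, peak_displacements, jump_factor=2.0):
    """Flag the first load level at which the peak transient displacement jumps
    sharply relative to the previous level (a schematic Budiansky-Roth check).
    jump_factor is an assumed heuristic, not a value from the thesis."""
    loads = np.asarray(loads, dtype=float)
    peaks = np.asarray(peak_displacements, dtype=float)
    for i in range(1, len(loads)):
        if peaks[i] > jump_factor * peaks[i - 1]:
            return loads[i]
    return None   # no dynamic buckling detected in the swept range

# Illustrative data (assumed): peak displacement grows mildly, then jumps.
loads = [10, 20, 30, 40, 50]            # kN
peaks = [0.4, 0.8, 1.2, 1.6, 7.5]       # mm
print("dynamic buckling load ~", budiansky_roth_load(loads, peaks), "kN")
```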
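For context on why the imperfection knock-down matters, the classical critical axial stress of a perfect, isotropic, thin-walled cylinder can be evaluated as below; real composite and imperfect cylinders buckle well below this value. The geometry and material numbers are assumed examples, not the thesis cases.

```python
import numpy as np

def classical_axial_buckling_stress(E, t, R, nu=0.3):
    """Classical critical axial buckling stress of a perfect, isotropic,
    thin-walled cylinder: sigma_cr = E*t / (R*sqrt(3*(1 - nu**2)))."""
    return E * t / (R * np.sqrt(3.0 * (1.0 - nu ** 2)))

# Illustrative numbers (assumed): a steel-like cylinder with R/t = 250 > 20.
E, t, R = 200e9, 1.0e-3, 0.25            # Pa, m, m
sigma_cr = classical_axial_buckling_stress(E, t, R)
P_cr = sigma_cr * 2.0 * np.pi * R * t    # corresponding axial force [N]
print(f"sigma_cr = {sigma_cr / 1e6:.1f} MPa, P_cr = {P_cr / 1e3:.1f} kN")
```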
-
ÖgeA study on optimization of a wing with fuel sloshing effects(Graduate School, 2022-01-24) Vergün, Tolga ; Doğan, Vedat Ziya ; 511181206 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay Mühendisliği In general, sloshing is defined as a phenomenon associated with free-surface motion in multiphase flows; it is the movement of a liquid inside another object. Sloshing has been studied for centuries. The earliest work [48] in the literature was carried out by Euler in 1761 [17], and Lamb [32] examined sloshing theoretically in 1879. Especially with the development of technology, it has become more important and appears in many different fields such as aviation, automotive and naval engineering. In the aviation industry, sloshing is considered in fuel tanks. Since its outcomes may cause instability or damage to the structure, it is one of the concerns in aircraft design. One of the most popular solutions for preventing its adverse effects is adding baffles to the fuel tank; however, this solution comes with a disadvantage, namely an increase in weight. To minimize the effects of the added weight, designers optimize the structure by changing its shape, thickness, material, etc. In this study, a composite wing with a NACA 4412 airfoil section is used and optimized in terms of safety factor and weight. To do so, an initial composite layup is determined from current designs and recommendations in the literature. When the design of the initial system is completed, the system is imported into a transient solver in the Ansys Workbench environment to perform numerical analyses in the time domain. To obtain more realistic cases, the wing with different fuel tank fill levels (25%, 50% and 75%) is exposed to aerodynamic loads while the aircraft is rolling, yawing, and performing a Dutch roll. The aircraft is assumed to fly at a constant speed of 60 m/s (~120 knots) when applying the aerodynamic loads. The resultant force for 60 m/s airspeed is applied onto the wing surface as a distributed pressure by one-way fluid-structure interaction (1-Way FSI); with this method, only the fluid loads are transferred to the structure, and the effect of wing deformation on the fluid flow field is neglected. Once gravity and the aerodynamic loads are applied to the wing structure, a prescribed rotation of 20 deg/s for 3 seconds is defined for each type of maneuver. The fluid properties are described in the Ansys Fluent environment, which defines the fuel level, fluid properties, computational fluid dynamics (CFD) solver settings, etc. Once both the structural and the fluid systems are ready, system coupling performs two-way fluid-structure interaction (2-Way FSI). With this method, fluid loads and structural deformations are exchanged at each step: the structural system transfers displacements to the fluid system while the fluid system transfers pressures to the structural system. After nine analyses, the critical case is determined with respect to the safety factor; the critical case, in which the system has the lowest minimum safety factor, is found to be the 75% filled fuel tank while the aircraft performs a Dutch roll. After the determination of the critical case, the optimization process is started. During the optimization process, 1-Way FSI is used, since the computational cost of the 2-Way FSI method is approximately 35 times that of 1-Way FSI. However, taking less time is not by itself sufficient to accept 1-Way FSI as the solution method; the deviation between the two methods is also investigated.
This investigation showed that the difference between the two methods is about 1% in terms of safety factor for our problem. In light of this, 1-Way FSI is preferred for applying both the sloshing and the aerodynamic loads onto the structure in order to reduce the computational time. After the method selection, the thickness optimization is started. Ansys Workbench creates a design of experiments (DOE) to evaluate the response surface points. Latin hypercube sampling design (LHSD) is preferred as the DOE method since it generates non-collapsing, space-filling points that lead to a better response surface. After creating the initial response surface using Genetic Aggregation, the optimization is carried out using the Multi-Objective Genetic Algorithm (MOGA). The optimum values are then verified by re-analyzing the optimum design in Ansys Workbench. This verification reveals a notable deviation between the optimized and the verified results. To reduce this deviation, refinement points are added to the response surface; this process is repeated until the deviation falls below 1%. After finding the optimum results, it is noticed that their precision is too high for manufacturability, so the thicknesses are rounded to one hundredth of a millimeter, and the final thickness values are verified again. As a result, the optimum values are found: the weight is decreased from 100.64 kg to 94.35 kg, i.e. a 6.3% weight saving, while the minimum safety factor of the system is only reduced from 1.56 to 1.54. At the end of the study, it is concluded that a 6.3% reduction in weight would translate into energy savings.
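The Latin hypercube DOE step mentioned above can be sketched with SciPy's quasi-Monte Carlo module as follows; the number of design variables, their bounds and the sample count are assumptions, since the thesis values are not restated here.

```python
import numpy as np
from scipy.stats import qmc

# Sketch of the DOE step: Latin hypercube samples of three ply-thickness
# design variables, scaled to assumed bounds (variables, bounds and sample
# count are placeholders, not the thesis setup).
sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=20)          # 20 non-collapsing points in [0, 1)^3
lower = [0.5, 0.5, 0.5]                      # mm, assumed lower bounds
upper = [3.0, 3.0, 3.0]                      # mm, assumed upper bounds
thickness_designs = qmc.scale(unit_samples, lower, upper)

# Each row would correspond to one 1-Way FSI analysis feeding the response
# surface (safety factor and mass as outputs) before the MOGA search.
print(thickness_designs[:3])
```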
-
ÖgeFonksiyonel derecelendirilmiş malzemeden üretilen plakların mekanik ve ısıl yükler altındaki burkulma analizi(Lisansüstü Eğitim Enstitüsü, 2022-01-27) Aktaş, İbrahim Utku ; Doğan, Vedat Ziya ; 511171115 ; Uçak ve Uzay Mühendisliği Material selection plays a very important role in all engineering applications. The advancement of almost every engineering field is directly related to the sophistication of the materials used in that field. The evolution of materials from monolithic materials to alloys and the development of composite materials arose because a given class of materials could no longer meet the needs of the age. Most engineering applications require materials with conflicting properties that cannot be found in a single monolithic material. Moreover, alloying different materials is limited by the thermodynamic behavior of the constituent materials and by the degree to which one material can be mixed with others. Functionally graded materials arose from the need to combine two materials in such a way that the resulting material can perform its function and preserve its properties even after exposure to severe operating environments. Although functionally graded materials were initially developed for a thermal barrier application, the use of this important advanced material has expanded, and it has been employed to solve a range of engineering problems, such as applications requiring extreme wear resistance or corrosion resistance. This new class of material is used in applications such as aerospace, automotive and biomedical engineering. Functionally graded materials emerged as a result of the failure of conventional composite materials in demanding operating environments. The failure of conventional composite materials in engineering applications originates from the sharply defined interface between the layers that form the composite. The interface causes a high stress concentration in this region and promotes crack initiation and propagation, which leads to the ultimate failure of the composite; this crack initiation and propagation process is called delamination. The problem encountered in a space shuttle project in Japan, which paved the way for the emergence of functionally graded materials, posed the question of how this distinct interface in conventional composite materials could be eliminated while the composite still fulfilled the desired thermal barrier function. By introducing a gradually varying interface, researchers were able to systematically eliminate the sharp interface of the conventional composite, thereby reducing the stress concentration at this interface, and the resulting functionally graded material was able to survive severe operating conditions without failure. Consequently, in addition to their original purpose of providing a thermal shield for the structure, functionally graded materials have also been used for various other engineering applications. Functionally graded materials are advanced composite materials whose composition, and therefore whose properties, vary throughout the volume of the material. Vehicles used in aviation are exposed to many mechanical and thermal loads, primarily aerodynamic loads. These loads are used in sizing the structural components of the air vehicle. A safe air vehicle is designed so that it does not fail while carrying the loads it is exposed to within its structure.
The structural components of an air vehicle can fail or be damaged in many different ways, and being able to foresee these failures and design the structure accordingly is of vital importance. In addition, the buckling problem, which does not necessarily cause structural failure but leads to structural instability, is a very important topic in aviation. For example, the loads acting on an aircraft may cause the skin panels on the wing to be subjected to in-plane compressive or tensile loads. When the skin elements are subjected to compressive loads, buckling may occur, which can both disturb the aerodynamic flow over the wing and cause the structure to become unstable. In such cases, the load-carrying capacity of the structure changes and post-buckling calculations must be performed. Therefore, being able to predict when structural elements may buckle is of great importance. In this thesis, the buckling behavior of plates made of functionally graded (FG) materials under thermal and mechanical loads is treated systematically. In Part 1, the study is described in general terms and its purpose and motivation are stated. In Part 2, previous studies on functionally graded plates are presented to the reader. Before presenting these studies, the fundamental buckling problem is defined; the description starts with the buckling of column and beam elements and then moves on to the buckling of plates, so that the reader is given the background of buckling theory. A short introduction to functionally graded materials and their history follows, and academic studies on the buckling of functionally graded materials are also reviewed in this part. In Part 3, in order to understand the mechanics of plates made of functionally graded materials, the mechanics of plates made of conventional composite materials is presented: laminated composite plate theories are briefly mentioned, and then the Classical Laminated Plate Theory (CLPT) and the First-Order Shear Deformation Theory (FSDT) are explained in detail, because a thorough understanding of the mechanics of conventional composite plates is essential for understanding the mechanics of functionally graded plates. In Part 4, the manufacturing methods of functionally graded materials are briefly described and it is shown how the effective material properties are modeled. In Part 5, the buckling problem of plates, briefly mentioned earlier, is addressed, and analytical solution methods for this problem under certain boundary conditions are presented. First, the buckling problem of isotropic plates is solved under boundary conditions satisfying the Navier and Levy conditions separately. Then, an analytical model based on CLPT is established to solve the buckling problem of plates made of functionally graded material. This analytical model is solved, with the help of a code written in MATLAB, for functionally graded plates assumed to be simply supported on all edges under different loadings. These loadings are divided into two groups, mechanical and thermal. For the mechanical loadings, three different cases are considered: buckling analyses under uniaxial compression, biaxial compression, and combined biaxial compression-tension.
For the thermal loading conditions, the temperature distribution through the thickness is applied to the structure in three different forms and the corresponding buckling analyses are carried out. First, the critical buckling temperature difference is found for a uniform temperature distribution through the thickness. Then, the buckling analysis is performed for a linearly varying temperature distribution through the thickness and the critical buckling temperature difference is obtained; afterwards, these analyses are repeated for a nonlinear temperature distribution through the thickness. All the results obtained are compared with previous studies, and it is observed that CLPT gives quite successful results for thin FG plates. In Part 6, buckling analyses are performed with the help of the finite element package programs PATRAN and NASTRAN and are compared with the analytical results obtained with CLPT. In the remaining parts, all the studies carried out are briefly evaluated and possible future work on this subject is discussed.
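A brief sketch of the power-law (Voigt rule-of-mixtures) through-thickness grading commonly used for FG plates, of the kind referred to in Part 4 above, is given below; whether this is the exact homogenization model adopted in the thesis is not restated here, and the material values are assumed examples.

```python
import numpy as np

def fgm_property(z, h, p_ceramic, p_metal, n):
    """Power-law through-thickness variation commonly used for FG plates:
    V_c = (z/h + 1/2)**n and P(z) = (P_c - P_m)*V_c + P_m, with z measured
    from the mid-plane (-h/2 <= z <= h/2). This is the standard form used in
    much of the FG-plate literature, shown here only as an illustration."""
    Vc = (z / h + 0.5) ** n
    return (p_ceramic - p_metal) * Vc + p_metal

# Illustrative values: alumina/aluminum Young's moduli in GPa (assumed).
h = 0.01
z = np.linspace(-h / 2, h / 2, 5)
print(fgm_property(z, h, p_ceramic=380.0, p_metal=70.0, n=2.0))
```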
-
ÖgeFlying and handling qualities oriented longitudinal robust control of a fighter aircraft in a large flight envelope(Graduate School, 2022-02-15) Kaçan, Zafer ; Koyuncu, Emre ; 511181144 ; Aeronautics and Astronautics Engineering Within the scope of this thesis, a robust control design approach has been applied to the F-16 aircraft with the aim of satisfying Level 1 flying and handling qualities (FHQ) within a specified flight envelope. First, brief information about the history of flight is given in the introduction chapter. This historical storyline starts from the early sketches of Leonardo da Vinci and extends to the Wright brothers, who achieved the first sustained, controlled, heavier-than-air flight. The innovations in the aerospace industry are then discussed together with the advances in technology, and the milestone achievements that have made the design of fly-by-wire flight control algorithms possible are explained. A literature review follows, covering documents on the F-16 aircraft, FHQ criteria, multivariable robust control applications and the mathematical background of this approach, and the structure of the thesis is outlined. Then, the F-16 aircraft is presented together with its aerodynamic data and the way the forces and moments of the aircraft are related to the aerodynamic and thrust data. The presented F-16 data were obtained from research by the NASA Langley Research Center based on wind tunnel tests of the F-16 aircraft. The mathematical model of the F-16 aircraft is introduced; it includes the airframe specifications, the mass data, the systems that represent the actuators and sensors, and the environmental model that gives the atmospheric properties as a function of the flight condition. Then, the trim and linearization algorithms are introduced for the steady-state, wings-level flight condition. The inputs, states and outputs related to the longitudinal motion of the aircraft are identified, and the resulting linear state-space system representing the characteristics of the aircraft is obtained. The longitudinal modes, the phugoid and the short-period mode, are discussed. Next, the flying and handling qualities used to evaluate the performance of the aircraft are emphasized. The reasons for using flying and handling qualities are explained and related to pilot evaluations through Cooper-Harper ratings. The selected flying and handling qualities serve both as design guidance and as evaluation criteria: the CAP criterion is used as a design guideline, whereas the Bandwidth and Dropback criteria are used as evaluation criteria in both the frequency and the time domain. The corresponding flying and handling qualities levels are detailed for each criterion, and the related intervals are supported with graphical representations. The robust control approach is then introduced along with its background. The norm definitions are given and the feedback properties are presented in the related chapter of this thesis in order to associate the design purposes with the feedback properties. The relationships between the open-loop characteristics and the closed-loop results are identified, and the loop-shaping approach is emphasized. Then the uncertainty definitions are given: the classes of uncertainty and their sources are explained, and an uncertainty description suitable for use in this thesis is selected.
Then the H_∞ loop-shaping approach is presented. The normalized coprime factorization method is explained, and the design of both the one-degree-of-freedom and the two-degrees-of-freedom H_∞ loop-shaping approaches is detailed step by step. Then, the control structure used in this thesis is explained. The aim is to design a pitch-rate controller that results in Level 1 flying and handling qualities within a specified flight envelope. The design is carried out for one design point, and the resulting parameters are then used over the whole flight envelope. This avoids the complexity of gain scheduling and provides robustness against a probable loss of air data, such as the angle of attack. The controller architecture of the NASA research is presented for the longitudinal axis. Then the optimization structure used to find the design parameters that ensure that the pitch-rate demand flight control law achieves Level 1 flying and handling qualities within the specified flight envelope is described. A root-mean-square approach is applied in the optimization phase: the time responses of 5 different design points to a step input should follow, as closely as possible, the response of a desired transfer function specified during the design of the two-degrees-of-freedom H_∞ loop-shaping algorithm. Moreover, in order to satisfy the specified flying and handling qualities, a time delay term is included in the optimization cost, which turns the problem into a multi-objective optimization with a weighted-sum cost function. The resulting optimized parameters for the two-degrees-of-freedom H_∞ loop-shaping architecture are given. The results for the nominal design point and the responses of the 5 design points across the flight envelope are presented, and the flying and handling qualities evaluations are shown. The performance and stability robustness results are then associated with these findings. A comparison study between the two-degrees-of-freedom H_∞ loop-shaping algorithm and the NASA control structure, which uses a classical PI controller, is presented. The results are satisfactory, as all design points achieve Level 1 flying and handling qualities in both the frequency and the time domain. The control architecture is also successful in terms of performance and stability robustness, as all uncertain plants follow the nominal response and no frequency response crosses the defined Nichols exclusion zone. The two-degrees-of-freedom H_∞ loop-shaping algorithm outperforms the NASA PI controller, for which Level 2 results are observed. The use of the two-degrees-of-freedom H_∞ loop-shaping structure also lowers the time delays, as intended in the optimization goals, since the effective time delays are smaller than those of the NASA PI controller.
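A schematic version of the weighted-sum optimization cost described above is sketched below: the RMS error between several design-point pitch-rate step responses and the desired reference response, plus a penalty on the effective time delays. The weights, the delay metric and the toy responses are assumptions, not the thesis values.

```python
import numpy as np

def fhq_tracking_cost(q_responses, q_desired, time_delays, w_track=1.0, w_delay=0.1):
    """Weighted-sum cost in the spirit of the optimization described above:
    the RMS error between each design point's pitch-rate step response and the
    desired reference-model response, plus a penalty on the effective time
    delays. The weights and the delay metric are assumed for illustration."""
    rms_terms = [np.sqrt(np.mean((q - q_desired) ** 2)) for q in q_responses]
    return w_track * sum(rms_terms) + w_delay * sum(time_delays)

# Toy usage: two design-point responses compared with a reference response.
t = np.linspace(0.0, 5.0, 501)
q_des = 1.0 - np.exp(-2.0 * t)                       # desired step response
responses = [1.0 - np.exp(-1.8 * t), 1.0 - np.exp(-2.2 * t)]
print(fhq_tracking_cost(responses, q_des, time_delays=[0.08, 0.06]))
```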
-
ÖgeAeroacoustic investigations for a refrigerator air duct and flow systems(Graduate School, 2022-02-16) Demir, Hazal Berfin ; Çelik, Bayram ; 511181186 ; Aeronautics and Astronautics Engineering With industrialization, noise has become an important public health problem and a crucial design consideration in engineering. For this reason, noise reduction studies have become a focus, especially in the white goods, automotive and aviation sectors, which involve close interaction with people. Among the vehicles and products in these sectors, refrigerators, unlike the others, are located in the center of the living area and operate throughout the day; therefore, possible noise problems are noticed more quickly by users and are found disturbing. At this point, investigating and reducing the acoustic emission of existing products by various numerical and experimental methods is a valuable contribution to both industry and the literature. Within the scope of this thesis, the freezer compartment of a refrigerator with a no-frost cooling system was investigated from an aeroacoustic perspective. The freezer compartment consists of three drawers where food is placed, an axial fan that provides the air flow, an evaporator cover that separates the evaporator pipes from the interior volume, and the plastic walls surrounding them. The main source of air flow noise in the system is the axial fan. For this reason, in the first step of the study, a stand-alone aeroacoustic examination of the axial fan was made. Afterwards, the entire freezer volume was examined, and the study was completed with three different model proposals in which the acoustic emission is reduced. The flow field analysis of the axial fan at an operational speed of 1200 rpm was carried out with the commercial software ANSYS Fluent. In this numerical model, the Shear Stress Transport k-ω turbulence model was used, and the governing equations were solved under three-dimensional, transient, viscous, incompressible flow assumptions. The rotation of the fan was defined by the sliding mesh method. The numerical flow solution was validated with experimental volumetric flow rate data; according to the numerical and experimental results, the flow rate of the axial fan under the specified conditions is 19 L/s. A hybrid aeroacoustic model is created by giving the pressure outputs of the flow solution as input to the acoustic model. For the acoustic solution, the Ffowcs Williams & Hawkings (FW-H) model available in ANSYS Fluent was used, and the result was compared with sound pressure data collected in a fully anechoic acoustic room. Although there is some difference between the numerical and experimental sound pressure curves, the hybrid model is successful in capturing the general trend and the blade passing frequency. The difference between the experimental and numerical results is attributed to two causes: first, the fan motor is absent in the numerical analysis; second, the acoustic propagation resulting from the excitation of the system structures by the air flow cannot be predicted with this model. In the second step of the study, the model validated with the stand-alone fan solutions was applied to the freezer compartment. The aim here is to reveal the air flow distribution in the freezer volume and to identify the regions where turbulence effects increase.
In the numerical model, the axial fan was rotated at an operational speed of 1200 rpm, and this rotation was again realized with the sliding mesh method. As a result of the analysis, it was seen that turbulence formation starts at the blade tips, as observed in the stand-alone fan analyses, and that the vortices shed from the trailing-edge tips are concentrated especially in the region between the upper wall of the freezer volume and the upper two drawers. In addition, a turbulent region was detected at the bottom of the evaporator cover, which is the fan suction area. For the hybrid aeroacoustic model, the predicted sound pressure data at points 1 meter away from the front, rear and side surfaces of the freezer were compared with the sound pressure data measured at the same locations in the fully anechoic acoustic room. When the total sound pressure in the range of 10-10000 Hz is compared, there is a difference of 3-7 dBA between the numerical model and the experimental results. Based on the investigations of the axial fan both stand-alone and inside the freezer volume, three different freezer models were proposed to improve the air flow, reduce the turbulence and reduce the resulting flow-induced noise. In the first proposed model, the bottom part of the evaporator cover was changed, and the acoustic emission decreased by 0.24 dBA at a rotational speed of 1200 rpm. The position of the axial fan and its distance from the structures in the suction and discharge directions are parameters affecting the acoustic emission; in the second model, an acoustic gain is sought by changing the fan position. In this context, the fan was moved along the shaft by 5 mm and brought closer to the blowing region, and with this modification the total sound power level decreased by 2.18 dBA. The final model is the superposition of the first two, intended to show their combined effect; at 1200 rpm, a gain of 3.27 dBA was achieved with this third model.
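Since the blade passing frequency is used as a check on the hybrid model, the sketch below evaluates it from the fan speed given in the text; the blade count is not stated in the abstract, so several assumed values are shown.

```python
def blade_passing_frequency(rpm, n_blades):
    """Blade passing frequency of an axial fan: BPF = (rpm / 60) * n_blades.
    The 1200 rpm fan speed is taken from the text; the blade counts below are
    assumed examples only."""
    return rpm / 60.0 * n_blades

for n in (5, 7, 9):   # assumed candidate blade counts
    print(n, "blades ->", blade_passing_frequency(1200, n), "Hz")
```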
-
ÖgeSafe motion planning and learning for unmanned aerial systems(Graduate School, 2022-05-06) Perk, Barış Eren ; İnalhan, Gökhan ; 511142104 ; Aeronautics and Astronautics Engineering To control unmanned aerial systems, we rarely have a perfect system model, and safe yet aggressive planning is challenging for nonlinear and under-actuated systems. Expert pilots, however, demonstrate maneuvers that are deemed to be at the edge of the flight envelope. Inspired by biological systems, we introduce in this work a framework that leverages methods from control theory and reinforcement learning to generate feasible, possibly aggressive, trajectories. For the control policies, Dynamic Movement Primitives (DMPs) imitate pilot-induced primitives, and DMPs are combined in parallel to generate trajectories that reach the original or different goal points. The stability properties of the DMPs and of the overall system are analyzed using contraction theory. For reinforcement learning, Policy Improvement with Path Integrals (PI2) is used to improve the maneuvers. The results show that PI2-updated policies are feasible and that a parallel combination of different updated primitives transfers the learning within the contraction regions. The proposed methodology can be used to imitate, reshape and improve feasible, possibly aggressive, maneuvers. In addition, trajectories generated by optimization methods such as Model Predictive Control (MPC) can be exploited, so that a library of maneuvers can be generated instantly. For the applications, a 3-DOF (degrees of freedom) helicopter and a 2D UAV (unmanned aerial vehicle) model are utilized to demonstrate the main results.
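A minimal rollout of a discrete Dynamic Movement Primitive of the kind combined in this framework is sketched below; the gains, basis-function placement and Euler integration are common defaults rather than the thesis settings, and PI2 would act by iteratively updating the weight vector w.

```python
import numpy as np

def dmp_rollout(y0, g, w, tau=1.0, dt=0.001, alpha_z=25.0, alpha_x=3.0):
    """Minimal discrete DMP rollout (Euler integration).
    Canonical system: tau*xdot = -alpha_x*x.
    Transformation system: tau*zdot = alpha_z*(beta_z*(g - y) - z) + f(x),
    tau*ydot = z, with beta_z = alpha_z/4 for critical damping.
    The Gaussian-basis forcing term is scaled by x*(g - y0) so it vanishes
    as the canonical state decays. All gains are common defaults."""
    beta_z = alpha_z / 4.0
    n_bf = len(w)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_bf))   # basis centers in x
    h = n_bf / (c ** 2)                                  # basis widths (heuristic)
    x, y, z = 1.0, y0, 0.0
    traj = []
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)   # learned forcing term
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        traj.append(y)
    return np.array(traj)

# With zero weights the DMP behaves like a critically damped spring toward the
# goal; PI2 would shape aggressive maneuvers by updating w.
print(dmp_rollout(y0=0.0, g=1.0, w=np.zeros(10))[-1])
```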
-
ÖgeFlight safety risk awareness at flight test activities with analytical hierarchy process method(Graduate School, 2022-05-23) Akgür, Yusuf ; Kodal, Ali ; 511191143 ; Aeronautics and Astronautics Engineering In 1903, the Wright brothers succeeded in flying the first manned, powered, heavier-than-air aircraft, which soon led to the birth of aviation and the spread of aircraft. Aircraft, which began to be produced for different purposes, have caused many accidents and even deaths during their operational use and especially during the design and development stages. Over the years, various regulations have been made, international agreements have been signed, and local and international organizations have been established in order to prevent these accidents and deaths and to manage aircraft operations safely. Annex 19, Safety Management System (SMS), the 19th and most recent annex of the International Civil Aviation Organization (ICAO) rules, is a framework for managing the safety risks of organizations carrying out aviation activities and for ensuring the effectiveness of safety risk controls; it includes systematic procedures, practices and policies for the management of these risks. The implementation of SMS in organizations carrying out civil aviation activities has begun to be made compulsory by the relevant local and international authorities. The studies that aim to prove whether a designed and manufactured aircraft provides the desired performance are called flight tests. Advances in technology, when incorporated into aircraft design processes, have led to the creation of formal requirements and specifications that provide universal benchmarks in aircraft design. In parallel with these developments, the aims and applications of flight testing have also matured and become a discipline. Flight tests are high-risk flights, since they are carried out with aircraft that have not yet been certified, have low flight hours and still have many unknowns regarding their behavior. For these reasons, within the scope of flight test activities, the risks should be determined in advance, the necessary mitigation studies should be carried out and the test procedures should be defined. The Flight Test Operations Manual (FTOM) guidance document published by EASA states that flight test organizations should improve their SMS. In this document, flight test risk management activities and the risk management activities that must be carried out within the scope of SMS are separated: flight test risk management is responsible for managing the risks specific to each flight test, while SMS risk management is responsible for the ongoing operational risks. Within the scope of this study, the Analytical Hierarchy Process (AHP) method, a hierarchical, weighted, multi-criteria decision analysis method that combines qualitative and quantitative analysis, was used to provide a holistic awareness of flight safety risks in flight test activities. In the weighting step of the AHP method, the safety risk matrix published by the SMS risk management of the relevant institution is taken as the basis, and the aim is to determine the relative importance of the risks. The values selected from the risk matrix for the flight-test-specific risk and the operational risks are multiplied by coefficients determined for each risk level to create a comparison matrix, and the weight of each risk is calculated.
It is expected that the flight-test-specific risk will have the largest share in the resulting weighting, and the results are evaluated in this respect. Providing corrective feedback on the coefficients determined for each risk level, on the choice of risk values and on the structure of the risk matrix are the additional gains that can be achieved beyond flight safety risk awareness. Using the safety risk matrix and its values when calculating the risk weights eliminates the subjective pairwise evaluation of the AHP method and makes the consistency index 0. However, the method remains subjective through the structure of the risk matrix, the selected risk values and the coefficients. For this reason, the feedback obtained from the outputs of the method will allow these subjective values to change and converge toward their optimum form over time. This study, which started from the definitions in the EASA Part-21 FTOM guidance document, became an example of how flight test risk management and the safety management system can work together. As a result, the aim is to raise awareness of the flight safety risks involved in flight test activities among the relevant flight test team by making use of the weighting feature of the AHP method.
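As a small illustration of the weighting step described above, the sketch below builds a consistent pairwise comparison matrix from assumed risk scores and extracts the priority weights and consistency index via the principal eigenvector; with a consistently constructed matrix, the consistency index indeed comes out as 0.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights and consistency index for an AHP pairwise comparison
    matrix A (a_ij = importance of risk i relative to risk j, a_ji = 1/a_ij).
    Weights come from the principal eigenvector; CI = (lambda_max - n)/(n - 1)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    return w, ci

# Illustrative matrix built from assumed risk scores [8, 4, 2] (a_ij = s_i/s_j),
# e.g. the flight-test-specific risk versus two operational risks.
scores = np.array([8.0, 4.0, 2.0])
A = scores[:, None] / scores[None, :]
w, ci = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency index:", round(ci, 6))
```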