LEE- Uçak ve Uzay Mühendisliği Lisansüstü Programı
-
Öge: Measurement of the service quality of technology development zones: an application across Turkey (Fen Bilimleri Enstitüsü, 2020) Özyurt, Mehmet Akif ; Özkol, İbrahim ; 656880 ; Uçak ve Uzay Mühendisliği Ana Bilim Dalı
Products based on the production of knowledge, and on its output, technological production, have left their mark on our era, and many thinkers call the period we live in the "Information Age." In this age, the power of the countries at the center of high-technology production stems not from the size of their land or capital but from the size of their well-educated workforce and from channeling that workforce into high-technology production. Countries with highly educated people also achieve high levels and quality of production. In the current century, the pace of scientific and technological development has increased sharply; most of the advances to date have occurred within the last 30 years, and this pace keeps compounding. It is therefore not wrong to predict that, even in the near term, a world far more advanced scientifically and technologically than today's will emerge. High-technology production has also become the most decisive factor in the race for competitive advantage. Consequently, increasing competitiveness no longer depends only on cutting costs or responding quickly to consumer preferences and demands, but on continuous improvement, innovation, and invention. Those who succeed in turning technological findings into marketable products or services, new production or distribution methods, or new service mechanisms, that is, those who succeed in technological innovation, now dominate world markets. The zones where such R&D-based technological developments and innovations are produced, where well-educated personnel are employed, and which host companies and institutions producing high value-added products are called "technoparks" or, under the relevant Turkish law, "Technology Development Zones" (TDZs). Conceptually, technoparks are instruments that help establish and spread the flow of science and technology among R&D performers, universities, and industry. Through the synergy created by incubation mechanisms, technoparks also facilitate the growth of science- and technology-based firms. In these zones, firms are encouraged to be innovative through the use of high technology and support instruments, and high value-added products are created as a result. The International Association of Science Parks, in turn, defines technoparks as organizations managed by specialized professional teams whose main aim is to increase the welfare of society by promoting a culture of innovation and the competitiveness of their companies and knowledge-based institutions. To reach these goals, technoparks establish and manage the flow of knowledge and technology among universities, R&D performers, and firms; facilitate the creation and growth of innovation-driven companies through incubation and spin-off mechanisms; and, by providing high-quality facilities, prepare the ground for other value-adding companies and services. In line with these definitions, TDZs can also be regarded as science and technology clusters.
Indeed, in general terms technoparks are described as clusters of enterprises that come together around innovative ideas, produce or use advanced technology, market that technology, and draw on an R&D center or a university. The differences among these definitions stem from differences in the technoparks' size and fields of activity. As hubs where high-technology producers locate, technoparks are used as effective instruments for expanding employment, developing industry by building up the necessary knowledge base, supporting firms, together with universities, to improve training opportunities, and increasing the number of SMEs as well as supporting them. From this perspective, one of the most fundamental aims of technoparks is to establish cooperation among universities, industry, and the state and, by creating knowledge- and technology-intensive sites, to raise regional, national, and international competitiveness and thereby contribute to national development. Technoparks are areas with new, high-technology infrastructure that change countries' employment structure for the better and are an important factor in reducing unemployment. Examples can be seen in developed, industrialized countries with long technopark experience. Partly as a result of this change and development, the sectoral distribution of employment has also shifted. In the past, the distribution of the labor force between agriculture and industry was regarded as a measure of development; today, the share of employment in the technology sector is used as the measure instead. In Germany, for example, a developed country, the once-high employment share of agriculture and traditional industries has fallen considerably, and employment has shifted toward sectors producing high-technology products. Technoparks also aim to benefit all actors in the university-industry-government triangle, to support firms that lack the resources to invest in R&D, and to commercialize the knowledge produced at universities and transfer it to these firms. Accordingly, the technopark interface created for this purpose is expected to make significant contributions to the economic structure of the university, the industry, the region, and the country. Indeed, the knowledge flowing from technoparks to industry plays an effective role in modernizing industrial production and in giving the production base a knowledge- and technology-driven character. In other words, through technoparks, industry is meant to gain access to the knowledge produced at universities, and that knowledge is meant to find applications in industry. This study aims to reveal the gap between the service quality offered by technoparks operating in Turkey and the service quality perceived by the actors who use those services, and to determine the satisfaction levels of the customers (R&D performers) using the SERVQUAL scale. The study also investigates whether there is a relationship between how long technoparks have been operating and customers' perceptions of their service quality. For firms that have moved between technoparks, it further aims to determine the effect of service quality on the decision to switch technoparks. Finally, the technoparks operating in Turkey are ranked in terms of service quality using the VIKOR method.
The research uses the service-measurement factors of the SERVQUAL scale. The SERVQUAL instrument, developed by Parasuraman et al. (1988) to measure service quality, has to date been applied frequently to service businesses of all kinds, from sports facilities to hotel services. This study is the first, either in Turkey or abroad, to use the scale to measure the service quality of technoparks treated as service businesses. Therefore, the scale was first adapted to technoparks, the reliability and validity of this adaptation were established, and only then were the analyses carried out. The research uses the SERVQUAL service quality dimensions of "Tangibles (Physical Features)," "Reliability," "Responsiveness," "Competence," and "Empathy." The tangibles factor covers the physical appearance of the equipment used in the buildings, the communication materials, and the staff. The reliability factor assesses whether the technoparks deliver their services on time and correctly. The responsiveness factor measures the technoparks' willingness to help customers, to provide prompt service, and to complete work on time. The competence factor measures whether the service personnel working in the technoparks have the necessary and sufficient knowledge. The empathy factor, in turn, aims to determine the respect, courtesy, and sincerity of the employees in direct contact with customers. Measuring the service-quality levels of technoparks will also play a pioneering role for future scientific research. Since no comparable study exists either in Turkey or abroad, the results are also expected to be of great importance to technopark management companies.
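The SERVQUAL analysis described above rests on gap scores, the difference between perceived and expected ratings on each dimension, optionally combined with importance weights. The short sketch below illustrates that computation; the dimension names, weights, and ratings are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# SERVQUAL gap analysis sketch (illustrative values, not thesis data).
# Gap = Perception - Expectation, averaged per dimension; a negative gap
# means the perceived service falls short of what customers expected.
dimensions = ["tangibles", "reliability", "responsiveness", "competence", "empathy"]

# Rows: respondents, columns: dimensions (assumed 1-7 Likert scores).
expectations = np.array([[6, 7, 6, 6, 5],
                         [7, 6, 6, 7, 6]])
perceptions  = np.array([[5, 6, 5, 6, 5],
                         [6, 5, 6, 6, 6]])

gap_per_dimension = (perceptions - expectations).mean(axis=0)

# Optional importance weights (e.g. from the questionnaire); equal weights assumed here.
weights = np.full(len(dimensions), 1 / len(dimensions))
overall_servqual_score = float(np.dot(weights, gap_per_dimension))

for name, gap in zip(dimensions, gap_per_dimension):
    print(f"{name:15s} gap = {gap:+.2f}")
print(f"weighted SERVQUAL score = {overall_servqual_score:+.2f}")
```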
-
Öge: Optimization-based control of cooperative and noncooperative multi-aircraft systems (2020) Başpınar, Barış ; Koyuncu, Emre ; 625456 ; Uçak ve Uzay Mühendisliği
In this thesis, we mainly focus on developing methods that ensure autonomous control of cooperative and noncooperative multi-aircraft systems. Particularly, we focus on aerial combat, the air traffic control problem, and control of multiple UAVs. We propose two different optimization-based approaches and their implementations with civil and military applications. In the first method, we benefit from hybrid system theory to represent the input space of the decision process. Then, using a problem-specific evaluation strategy, we formulate an optimization problem in the form of integer/linear programming to generate the optimal strategy. As a second approach, we design a method that generates control inputs as continuous real-valued functions instead of predefined maneuvers. In this case, we benefit from differential flatness theory and flatness-based control. We construct optimization problems in the form of mixed-integer linear programming (MILP) and non-convex optimization problems. In both methods, we also benefit from game theory when there are competitive decision makers. We give the details of the approaches for both civil and military applications. We present the details of the hybrid maneuver-based method for air-to-air combat. We use the performance parameters of the F-16 to model the aircraft for military applications. Using hybrid system theory, we describe the basic and advanced fighter maneuvers. These maneuvers constitute the input space of the aerial combat problem. We define a set of metrics to quantify air superiority. Then, the optimal strategy generation procedure is formulated as a linear program. Afterwards, we use a similar maneuver-based optimization approach to model the decision process of the air traffic control operator. We mainly focus on providing a scalable and fully automated ATC system and on redetermining the airspace capacity via the developed ATC system. Firstly, we present an aircraft model for civil aviation applications and describe guidance algorithms for trajectory tracking. This model and these algorithms are used to simulate and predict the motion of the aircraft. Then, the ATCo's interventions are modelled as a set of maneuvers. We propose a mapping process to improve the performance of separation assurance and formulate an integer linear program (ILP) that exploits the mapping process to ensure safety in the airspace. Thereafter, we propose a method to redetermine the airspace capacity. We create a stochastic traffic environment to simulate traffic at different complexities and define the breaking point of an airspace with respect to different metrics. The approach is validated on real air traffic data for en-route airspace, and it is shown that the designed ATC system can manage traffic much denser than current traffic. As a second approach, we develop a method that generates control inputs as continuous real-valued functions instead of predefined maneuvers. It is also an optimization-based approach. Firstly, we focus on control of multi-aircraft systems. We utilize signal temporal logic (STL) specifications to encode the missions of the multiple aircraft. We benefit from differential flatness theory to construct a mixed-integer linear program (MILP) that generates optimal trajectories satisfying the STL specifications and performance constraints. We utilize air traffic control tasks to illustrate our approach.
We present a realistic nonlinear aircraft model as a partially differentially flat system and apply the proposed method on managing approach control and solving the arrival sequencing problem. We also simulate a case study with a quadrotor fleet to show that the method can be used with different multi-agent systems. Afterwards, we use the similar flatness-based optimization approach to solve the aerial combat problem. In this case, we benefit from differential flatness, curve parametrization, game theory and receding horizon control. We present the flat description of aircraft dynamics for military applications. We parametrize the aircraft trajectories in terms of flat outputs. By the help of game theory, the aerial combat is modeled as an optimization problem with regards to the parametrized trajectories. This method allows the presentation of the problem in a lower dimensional space with all given and dynamical constraints. Therefore, it speeds up the strategy generation process. The optimization problem is solved with a moving time horizon scheme to generate optimal combat strategies. We demonstrate the method with the aerial combats between two UAVs. We show the success of the method through two different scenarios.
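Differential flatness, used in the second approach above, means that the states and inputs can be recovered algebraically from a small set of flat outputs and their derivatives, so trajectories can be optimized directly over flat-output curves. The sketch below illustrates the idea on a simplified point-mass aircraft model whose flat outputs are the position coordinates; the model, the no-wind assumption, and the example trajectory are illustrative assumptions, not the formulation used in the thesis.

```python
import numpy as np

# Flatness sketch for a simplified 3D point-mass aircraft model (assumed, illustrative):
#   x' = V cos(gamma) cos(psi),  y' = V cos(gamma) sin(psi),  h' = V sin(gamma)
# With flat outputs (x, y, h), the speed V, heading psi and flight-path angle gamma
# follow algebraically from the first derivatives of the flat outputs.
def states_from_flat_outputs(t, x, y, h):
    xd, yd, hd = np.gradient(x, t), np.gradient(y, t), np.gradient(h, t)
    V = np.sqrt(xd**2 + yd**2 + hd**2)          # airspeed (no-wind assumption)
    psi = np.arctan2(yd, xd)                    # heading angle
    gamma = np.arcsin(np.clip(hd / V, -1, 1))   # flight-path angle
    return V, psi, gamma

# Example: a gentle climbing turn described purely by flat-output curves.
t = np.linspace(0.0, 60.0, 601)
x = 10000.0 * np.sin(0.02 * t)
y = 10000.0 * (1.0 - np.cos(0.02 * t))
h = 3000.0 + 2.0 * t
V, psi, gamma = states_from_flat_outputs(t, x, y, h)
print(f"V range: {V.min():.1f}-{V.max():.1f} m/s, max gamma: {np.degrees(gamma).max():.2f} deg")
```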
-
Öge: Dynamic and aeroelastic analysis of advanced aircraft wings carrying external stores (Lisansüstü Eğitim Enstitüsü, 2021) Aksongur Kaçar, Alev ; Kaya, Metin Orhan ; 709160 ; Uçak ve Uzay Mühendisliği
This study investigates the dynamic and aeroelastic behavior of advanced aircraft wings carrying external stores and subjected to a follower force. The effects of the weight of the external stores, their spanwise position, their placement relative to one another, the orientation of the composite plies, and the thrust force were examined, and the influence of each on the wing's natural frequencies and critical flutter speed was determined.
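For context, a common way to pose the flutter problem in such studies (a generic statement, not necessarily the formulation used in this thesis) is the aeroelastic eigenvalue problem
$$\left[-\omega^{2}\mathbf{M}+i\omega\mathbf{C}+\mathbf{K}-q_{\infty}\mathbf{A}(k)\right]\bar{\mathbf{q}}=\mathbf{0},\qquad k=\frac{\omega b}{U_{\infty}},$$
where an external store adds to the mass matrix $\mathbf{M}$ and, through its offset from the elastic axis, alters the bending-torsion coupling, while a follower thrust force contributes a nonconservative term on the stiffness side; the lowest speed $U_{\infty}$ at which the damping of an eigenvalue crosses zero is the critical flutter speed.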
-
Öge: Investigations on the effects of conical bluff body geometry on nonpremixed methane flames (Graduate Institute, 2021) Ata, Alper ; Özdemir, İlyas Bedii ; 675677 ; Department of Aeronautics and Astronautics Engineering
This thesis is composed of three experimental studies, of which the first two are already published and the third is under peer review. The first study investigates the effects of a stabilizer and the annular co-flow air speed on turbulent nonpremixed methane flames stabilized downstream of a conical bluff body. Four bluff body variants were designed by changing the outer diameter of a conically shaped object. The co-flow velocity was varied from zero to 7.4 m/s, while the fuel velocity was kept constant at 15 m/s. Radial distributions of temperature and velocity were measured in detail in the recirculation zone at vertical locations of 0.5D, 1D, and 1.5D. Measurements also included the CO2, CO, NOx, and O2 emissions at points downstream of the recirculation region. Flames were visualized under 20 different conditions, revealing various modes of combustion. The results showed that not only the co-flow velocity but also the bluff body diameter plays an important role in the structure of the recirculation zone and, hence, the flame behavior. The second study analyzes the flow, thermal, and emission characteristics of turbulent nonpremixed CH4 flames for three burner heads of different cone heights. The fuel velocity was kept constant at 15 m/s, while the coflow air speed was varied between 0 and 7.4 m/s. Detailed radial profiles of the velocity and temperature were obtained in the bluff body wake at three vertical locations of 0.5D, 1D, and 1.5D. Emissions of CO2, CO, NOx, and O2 were also measured at the tail end of every flame. Flames were digitally photographed to support the point measurements with visual observations. Fifteen different stability points were examined, resulting from three bluff body variants and five coflow velocities. The results show that a blue-colored ring flame is formed, especially at high coflow velocities. The results also illustrate that, depending on the mixing in the bluff-body wake, the flames exhibit two combustion regimes, namely fuel jet- and coflow-dominated flames. In the jet-dominated regime, the flames become longer compared to the flames of the coflow-dominated regime. In the latter regime, emissions were largely reduced due to the dilution by the excess air, which also surpasses their production. The final study examines the thermal characteristics of turbulent nonpremixed methane flames stabilized by four burner heads with the same exit diameter but different heights. The fuel flow rate was kept constant with an exit velocity of 15 m/s, while the co-flow air speed was increased from 0 to 7.6 m/s. The radial profiles of the temperature and flame visualizations were obtained to investigate the stability limits. The results showed that the air co-flow and the cone angle have essential roles in the stabilization of the flame: increasing the cone angle and/or the co-flow speed deteriorated the stability of the flame, which eventually tended to blow off. As the cone angle was reduced, the flame attached to the bluff body. However, when the cone angle is very small, it has no effect on stability. The mixing and entrainment processes were described by the statistical moments of the temperature fluctuations.
It appears that the rise in temperature coincides with the intensified mixing, and it becomes constant in the entrainment region.
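The statistical moments mentioned above (mean, rms, skewness, and flatness of the temperature fluctuations) are straightforward to compute from thermocouple time series; the sketch below shows the standard definitions on a synthetic, purely illustrative signal.

```python
import numpy as np

# Statistical moments of temperature fluctuations T' = T - <T> (illustrative).
# High skewness/flatness typically flag intermittent entrainment of cold
# ambient fluid into the hot recirculation zone.
rng = np.random.default_rng(0)
T = 1400.0 + 120.0 * rng.standard_normal(20_000)   # synthetic temperature record [K]

Tp = T - T.mean()                                  # fluctuation about the mean
rms = np.sqrt(np.mean(Tp**2))
skewness = np.mean(Tp**3) / rms**3
flatness = np.mean(Tp**4) / rms**4                 # ~3 for a Gaussian signal

print(f"mean = {T.mean():.1f} K, rms = {rms:.1f} K, "
      f"skewness = {skewness:+.2f}, flatness = {flatness:.2f}")
```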
-
Öge: Failure analysis of adhesively bonded CFRP joints (Graduate School, 2021-01-04) Daylan, Seda ; Mecitoğlu, Zahit ; 511171169 ; Aeronautical and Astronautical Engineering
Joints are critical areas where load transfer occurs and should be designed to provide maximum strength to the structure. The adhesive bonding process is widely used as a structural joining method in aerospace applications. There are many advantages of using adhesively bonded joints instead of classical mechanical fastening. Some of these can be listed as: joining of similar and dissimilar materials (metal-to-composite, metal-to-metal, metal-to-glass); a more uniform stress distribution with a significant decrease in stress concentration, since there are no fastener holes in the structure; a considerable weight saving compared to mechanical fasteners; and good fatigue strength due to the absence of fastener holes. In addition to these positive aspects of using adhesives as a structural joining method, strength prediction is vital for an optimum design process in the initial sizing and critical design phases. The fact that adhesively bonded joints have various failure modes makes failure prediction complex. According to ASTM D5573, adhesively bonded composite joints have seven typical failure modes, but they can be listed under three main headings: adhesive failure, cohesive failure, and adherend failure. Adhesive failure occurs at the adherend-adhesive interface, and usually the adhesive remains on one adherend. These failures are generally attributed to a poor-quality bonding process, environmental factors, and insufficient surface preparation. The other kind of failure, adherend failure, occurs when the structural integrity of the adherend breaks down before that of the joint, meaning that the strength of the joint area exceeds the strength of the adherend. Cohesive failure, on the other hand, is the type of failure expected after an ideal design and bonding process, where failure occurs within the adhesive itself. After cohesive failure, adhesive material is seen on both adherends. Structural joining with adhesives has been used in the aerospace industry since the early 1970s and 1980s. Since then, many analytical and numerical methods have been used to study the failure of adhesively bonded joints. Analytical studies of the failure of adhesively bonded single lap joints, as known in the literature, started with Volkersen in 1938. Volkersen did not include the eccentricity factor arising from the geometric nonlinearity of the single lap joint in his calculations. This factor was first taken into account by Goland and Reissner in their calculations in 1944. Goland and Reissner made a remarkable contribution to the analysis of the adhesively bonded single lap joint, calculating the loads in the joint area and subsequently the stress in the adhesive. Afterwards, analytical studies were continued by Hart-Smith, Allman, Bigwood & Crocombe, and others. In addition to analytical studies, the continuum mechanics, fracture mechanics, and damage mechanics approaches can be given as examples of numerical methods. The fracture mechanics approach used in this thesis examines the propagation of an initial crack in the adhesive under three different loading modes. Crack propagation occurs when the strain energy release rate under the applied load reaches the adhesive's critical strain energy release rate.
After the three different modes' strain energy release rate values are calculated separately, an evaluation is made according to the power-law failure criterion. There are many types of joint configurations in the literature, and the common ones can be summarized as single lap joints, double lap joints, stepped joints etc. The single-lap joint type is the most widely used joint type in terms of ease of design and effectiveness. Within the scope of this thesis, it is aimed to obtain a general solution that can be applied to all joints after first making a study for the single lap joint geometry and validating the results of this study experimentally. Studies have been carried out to predict the failure load of adhesively bonding CFRP joints. They include two main steps, which are to find the loads at the edges of the joint area and to evaluate the failure criteria by calculating the strain energy release rate with these loads. As the first step, loads at the joint edges are found analytically and with the finite element method, respectively. While calculating the loads analytically, the Modified Goland and Reissner theory is used, which differs from the classical Goland and Reissner theorem by taking the adhesive thickness into account. While calculating the loads with the finite element method, the modelling technique first studied by Loss and Kedward and then described by Farhad Tahmasebi in his work published with NASA is used. The primary purpose of using this modelling technique is to simulate load transfers in overlap regions accurately for complex and analytically challenging to calculate geometries. Especially in aerospace, since modelling the large components with solid elements is not effective in terms of time and resources, a practical modelling technique that can produce results with high accuracy is needed. In the modelling technique used in the thesis, adherends are modelled with shell elements while the adhesive region is modelled between coincident nodes with three spring elements to provide stiffnesses in the shear and peel directions, and the nodes of the adhesive elements are connected to the adherends with rigid elements. The modulus values of the adhesive material are used in the stiffness calculation of the spring elements. After obtaining the loads with the analytical and finite element method, the second step, the calculation of the strain energy release rate values on the adhesive material, is carried out with reference to two different studies. Firstly, linear fracture mechanics formulations were studied by Williams, assuming that the energy required to advance an existing crack unit amount is equal to the difference of performed external work with internal strain energy, and the laminate containing crack performs linear elastic behaviour is used. Conventional beam theory is used for the 1D case, as the deformation will occur like beam deformation. Using beam theory, he formulated the external work and internal strain energy at the beginning and end of the crack. And using these two equations, he found energy release rate formulations in relation to bending moment and axial load. Then, mode separation is made to calculate the energy release rates in the mode I and II directions separately because the critical strain energy release rate value in these two directions is different and needs to be evaluated independently. This study's disadvantage is that the transverse shear load is ignored, and calculations are made only with bending moment and longitudinal force. 
Within the scope of the thesis, the strain energy release rate is calculated both with the loads found analytically and with the loads found by the finite element method. Shahin and Taheri did the other reference work, and with overlap edge loads, the stress on the adhesive first and then the strain energy release rate is calculated. In this study, two assumptions are made, and the first is that the shear and peel stress change is zero along with the thickness of the adhesive, and the other is that the stress on the adhesive is as much as the displacement difference of the adherends. As a result of the derivations, the stress distribution on the adhesive is found in the joint structure consisting of CFRP adherend and adhesive. Then, according to Irwin's VCCI approach, as if there is a virtual crack, the integration crack length is rewritten so that it converges to zero and the displacements are in stress. Thus, the stress and energy relationship equation is obtained, and strain energy release rates in mode I and II directions of the adhesive are calculated. As a result of all these studies, mode I and mode II strain energy release rate calculations are made according to two different methods with the loads found analytically and with the finite element method. The strain energy release rate values found and the critical strain energy release rate values, which are allowable, are evaluated according to the power-law failure criteria, and failure load predictions are made. For specimens with different overlap lengths, experimental failure load values and predicted failure load values are compared, and inferences are made about the accuracy of the FEM modelling technique and the methods used in SERR calculation. All these results are interpreted in detail, and it is obtained that the FEM modelling technique gives high accuracy results with Method 2 used in SERR calculation. Finally, a bonding analysis tool has been developed with the python programming language. This tool first detects the finite elements corresponding to upper and lower adherends in the model from NASTRAN .bdf file. Then reads the element loads from the .pch file, which is a NASTRAN output and contains the element loads, then calculates the SERR using Method 2 and calculates the reserve factor and failure load, respectively. This tool has been prepared so that these calculations can be made in a short time and accurately for tens of elements in the overlap zone in complex and large models.
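The failure assessment described above combines the mode I and mode II strain energy release rates through a power-law criterion. The sketch below shows a minimal version of that bookkeeping; the exponents, allowables, and element SERR values are placeholder assumptions, and the functions stand in schematically for the Method 2 formulations and the NASTRAN post-processing of the actual tool.

```python
import numpy as np

# Power-law failure criterion for a bonded-joint element (illustrative values only).
# Failure index FI = (G_I / G_Ic)**alpha + (G_II / G_IIc)**beta ; failure when FI >= 1.
G_Ic, G_IIc = 0.35, 1.20          # critical SERRs [N/mm], assumed allowables
alpha, beta = 1.0, 1.0            # power-law exponents (linear mixed-mode rule assumed)

def failure_index(G_I, G_II):
    return (G_I / G_Ic) ** alpha + (G_II / G_IIc) ** beta

def reserve_factor(G_I, G_II):
    # With alpha = beta = 1 and G proportional to the square of the applied load,
    # FI scales with RF**2, so the reserve factor on load is 1/sqrt(FI).
    return 1.0 / np.sqrt(failure_index(G_I, G_II))

# Example overlap-edge SERRs per element (e.g. recovered from spring-element loads).
elements = {"edge_elem_01": (0.08, 0.30), "edge_elem_02": (0.12, 0.45)}
for name, (gi, gii) in elements.items():
    print(f"{name}: FI = {failure_index(gi, gii):.2f}, RF = {reserve_factor(gi, gii):.2f}")
```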
-
Öge: Experimental investigation of leading edge suction parameter on massively separated flow (Graduate School, 2021-05-10) Aydın, Egemen ; Yıldırım Çetiner, Nuriye Leman Okşan ; 511171150 ; Aerospace Engineering ; Uçak ve Uzay Mühendisliği
The study aims to investigate and understand the application of the Leading Edge Suction Parameter (LESP) to massively separated flow. Force data were gathered on the downstream flat plate, and the flow structures were visualized with Digital Particle Image Velocimetry. The experiments were conducted in the free-surface, closed-circuit, large-scale water channel located in the Trisonic Laboratory of Istanbul Technical University's Faculty of Aeronautics and Astronautics. The channel velocity of 0.1 m/s corresponds to a Reynolds number of 10,000. During the experiment, the flat plate downstream of the gust generator (also a flat plate) was kept at a constant angle of attack, and the test cases were selected to show that the LESP, derived from only one force component, works for different gust interactions with the flat plate. As already discussed in the literature, the critical LESP value depends only on the airfoil shape and the Reynolds number. The critical LESP value reported in the literature for a flat plate at a Reynolds number of 10,000 is 0.05. We did not perform an experiment to find the critical LESP value, since our experiment was carried out with a flat plate at Re = 10,000. Combinations of different angles of attack and gust impingements show that the LESP criterion works even in a highly unsteady gust environment. Flow structures around the airfoil leading edge behave as expected from LESP theory (leading-edge vortex separation and unification).
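For reference, in the unsteady thin-airfoil formulation from which the criterion originates, the LESP is the instantaneous zeroth Fourier coefficient of the bound-vorticity distribution, a measure of the suction demanded at the leading edge. A commonly quoted form (stated here as background, not reproduced from the thesis) is
$$\mathrm{LESP}(t)=A_{0}(t)=-\frac{1}{\pi}\int_{0}^{\pi}\frac{W(\theta,t)}{U_{\infty}}\,d\theta,$$
where $W$ is the downwash induced on the camber line and $\theta$ is the transformed chordwise coordinate; leading-edge vortex shedding is assumed to start once $|A_{0}(t)|$ exceeds the critical value (0.05 for a flat plate at Re = 10,000, as quoted above).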
-
Öge: Development of single-frame methods aided Kalman-type filtering algorithms for attitude estimation of nano-satellites (Graduate School, 2021-08-20) Çilden Güler, Demet ; Hacızade, Cengiz ; Kaymaz, Zerefşan ; 511162104 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay Mühendisliği
There is a growing demand for highly accurate attitude estimation algorithms even for small satellites, e.g. nanosatellites, whose attitude sensors are typically cheap, simple, and light, because accurate attitude estimates are needed to control the orientation of a satellite or its instruments. Estimation is especially important in nanosatellites, whose sensors are usually low-cost and have higher noise levels than high-end sensors. The algorithms should also be able to run on systems with very restricted computing power. One of the aims of the thesis is to develop attitude estimation filters that improve estimation accuracy without increasing the computational burden too much. For this purpose, Kalman filter extensions are examined for attitude estimation with three-axis magnetometer and sun sensor measurements. In the first part of this research, the performance of the developed extensions of state-of-the-art attitude estimation filters is evaluated by taking into consideration both accuracy and computational complexity. Here, single-frame-method-aided attitude estimation algorithms are introduced. As the single-frame method, singular value decomposition (SVD) is used to aid the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) for nanosatellite attitude estimation. The development of the system model of the filter and the measurement models of the sun sensors and the magnetometers, which are used to generate vector observations, is presented. Vector observations are used in SVD for satellite attitude determination purposes. In the presented method, the filtering-stage inputs come from SVD as linear measurements of attitude together with their error covariance. In this step, UD factorization is also introduced for the EKF; it factorizes the attitude-angle error covariance while forming the measurements in order to obtain appropriate inputs for the filtering stage. The necessity of this sub-step, UD factorization of the measurement covariance, is discussed. The estimation accuracy of the SVD-aided EKF with and without UD factorization is compared. Then, a case including an eclipse period is considered and possible switching rules are discussed, especially for the eclipse period, when the sun sensor measurements are not available. There are also other attitude estimation algorithms that cope well with nonlinear problems or work well with heavy-tailed noise. Therefore, different types of filters are also tested to see what kind of filter provides the largest improvements in estimation accuracy. Kalman-type filter extensions correspond to different ways of approximating the models. In that sense, one filter takes non-Gaussianity into account and updates the measurement noise covariance, whereas another minimizes the nonlinearity. Various other algorithms can be used to adapt the Kalman filter by scaling or updating its covariance. The filtering extensions are developed so that each of them mitigates a different type of error source for the Kalman filter that is used as the baseline.
The distribution of the magnetometer noises for a better model is also investigated using sensor flight data. The filters are tested for the measurement noise with the best fitting distribution. The responses of the filters are performed under different operation modes such as nominal mode, recovery from incorrect initial state, short and long-term sensor faults. Another aspect of the thesis is to investigate two major environmental disturbances on the spacecraft close enough to a planet: the external magnetic field and the planet's albedo. As magnetometers and sun sensors are widely used attitude sensors, external magnetic field and albedo models have an important role in the accuracy of the attitude estimation. The magnetometers implemented on a spacecraft measure the internal geomagnetic field sources caused by the planet's dynamo and crust as well as the external sources such as solar wind and interplanetary magnetic field. However, the models that include only the internal field are frequently used, which might remain incapable when geomagnetic activities occur causing an error in the magnetic field model in comparison with the sensor measurements. Here, the external field variations caused by the solar wind, magnetic storms, and magnetospheric substorms are generally treated as bias on the measurements and removed from the measurements by estimating them in the augmented states. The measurement, in this case, diverges from the real case after the elimination. Another approach can be proposed to consider the external field in the model and not treat it as an error source. In this way, the model can represent the magnetic field closer to reality. If a magnetic field model used for the spacecraft attitude control does not consider the external fields, it can misevaluate that there is more noise on the sensor, while the variations are caused by a physical phenomenon (e.g. a magnetospheric substorm event), and not the sensor itself. Different geomagnetic field models are compared to study the errors resulting from the representation of magnetic fields that affect the satellite attitude determination system. For this purpose, we used magnetometer data from low Earth-orbiting spacecraft and the geomagnetic models, IGRF and T89 to study the differences between the magnetic field components, strength, and the angle between the predicted and observed vector magnetic fields. The comparisons are made during geomagnetically active and quiet days to see the effects of the geomagnetic storms and sub-storms on the predicted and observed magnetic fields and angles. The angles, in turn, are used to estimate the spacecraft attitude, and hence, the differences between model and observations as well as between two models become important to determine and reduce the errors associated with the models under different space environment conditions. It is shown that the models differ from the observations even during the geomagnetically quiet times but the associated errors during the geomagnetically active times increase more. It is found that the T89 model gives closer predictions to the observations, especially during active times and the errors are smaller compared to the IGRF model. The magnitude of the error in the angle under both environmental conditions is found to be less than 1 degree. The effects of magnetic disturbances resulting from geospace storms on the satellite attitudes estimated by EKF are also examined. 
The increasing levels of geomagnetic activity affect geomagnetic field vectors predicted by IGRF and T89 models. Various sensor combinations including magnetometer, gyroscope, and sun sensor are evaluated for magnetically quiet and active times. Errors are calculated for estimated attitude angles and differences are discussed. This portion of the study emphasizes the importance of environmental factors on the satellite attitude determination systems. Since the sun sensors are frequently used in both planet-orbiting satellites and interplanetary spacecraft missions in the solar system, a spacecraft close enough to the sun and a planet is also considered. The spacecraft receives electromagnetic radiation of direct solar flux, reflected radiation namely albedo, and emitted radiation of that planet. The albedo is the fraction of sunlight incident and reflected light from the planet. Spacecraft can be exposed to albedo when it sees the sunlit part of the planet. The albedo values vary depending on the seasonal, geographical, diurnal changes as well as the cloud coverage. The sun sensor not only measures the light from the sun but also the albedo of the planet. So, a planet's albedo interference can cause anomalous sun sensor readings. This can be eliminated by filtering the sun sensors to be insensitive to albedo. However, in most of the nanosatellites, coarse sun sensors are used and they are sensitive to albedo. Besides, some critical components and spacecraft systems e.g. optical sensors, thermal and power subsystems have to take the light reflectance into account. This makes the albedo estimations a significant factor in their analysis as well. Therefore, in this research, the purpose is to estimate the planet's albedo using a simple model with less parameter dependency than any albedo models and to estimate the attitude by comprising the corrected sun sensor measurements. A three-axis attitude estimation scheme is presented using a set of Earth's albedo interfered coarse sun sensors (CSSs), which are inexpensive, small in size, and light in power consumption. For modeling the interference, a two-stage albedo estimation algorithm based on an autoregressive (AR) model is proposed. The algorithm does not require any data such as albedo coefficients, spacecraft position, sky condition, or ground coverage, other than albedo measurements. The results are compared with different albedo models based on the reference conditions. The models are obtained using either a data-driven or estimated approach. The proposed estimated albedo is fed to the CSS measurements for correction. The corrected CSS measurements are processed under various estimation techniques with different sensor configurations. The relative performance of the attitude estimation schemes when using different albedo models is examined. In summary, the effects of two main space environment disturbances on the satellite's attitude estimation are studied with a comprehensive analysis with different types of spacecraft trajectories under various environmental conditions. The performance analyses are expected to be of interest to the aerospace community as they can be reproducible for the applications of spacecraft systems or aerial vehicles.
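The single-frame (SVD) step referenced above solves Wahba's problem: find the rotation that best maps reference-frame directions (magnetic field and Sun models) onto the body-frame measurements. A minimal sketch of the SVD solution in the sense of Markley is given below; the vectors and weights are made-up inputs, not flight data.

```python
import numpy as np

# SVD solution of Wahba's problem (single-frame attitude determination).
# body_vecs: unit vectors measured in the body frame (magnetometer, sun sensor)
# ref_vecs:  the same directions from reference models (e.g. IGRF, sun ephemeris)
def svd_attitude(body_vecs, ref_vecs, weights):
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)      # enforce a proper rotation (det = +1)
    return U @ np.diag([1.0, 1.0, d]) @ Vt        # body-from-reference attitude matrix

# Illustrative inputs: a known rotation applied to two reference directions.
ref = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
true_A = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg about z
body = [true_A @ r for r in ref]
A_est = svd_attitude(body, ref, weights=[0.7, 0.3])
print(np.round(A_est, 3))   # should recover the assumed rotation to numerical precision
```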
-
Öge: Implementation of propulsion system integration losses to a supersonic military aircraft conceptual design (2021-10-07) Karaselvi, Emre ; Nikbay, Melike ; 511171151 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay Mühendisliği
Military aircraft technologies play an essential role in ensuring combat superiority, from the past to the present. That is why the air forces of many countries constantly require the development and procurement of advanced aircraft technologies. A fifth-generation fighter aircraft is expected to have significant technologies such as stealth, low-probability-of-intercept radar, agility with supercruise performance, advanced avionics, and computer systems for command, control, and communications. As the propulsion system is a significant component of an aircraft platform, we focus on propulsion system and airframe integration concepts, especially on addressing integration losses during the early conceptual design phase. The approach is intended to be suitable for multidisciplinary design optimization practices. Aircraft with jet engines were first employed during the Second World War, and the technology made a significant change in aviation history. Jet engine aircraft, which replaced propeller aircraft, had better maneuverability and flight performance. However, substituting a propeller engine with a jet engine required a new design approach. At first, engineers suggested that removing the propellers could simplify the integration of the propulsion system. However, with jet engines for fighter aircraft, new problems arose from the full integration of the propulsion system with the aircraft's fuselage. These problems can be divided into two parts: air inlet design and air intake integration on the one hand, and nozzle/afterbody design and jet interaction with the tail on the other. The primary function of the air intake is to supply the necessary air to the engine with the least amount of loss. However, the vast flight envelope of fighter jets complicates the air intake design. Spillage drag, boundary layer formation, bypass air drag, and air intake internal performance are primary considerations for intake system integration. The design and integration of the nozzle is a challenging engineering problem because of the complex structure of the afterbody and the mixing of the jet with the free stream over the control surfaces. The primary considerations for the nozzle system are afterbody integration, boat-tail drag, jet flow interaction, engine spacing for twin-engine configurations, and nozzle base drag. Each new generation of aircraft has become a more challenging engineering problem in order to meet increasing military performance and operational-capability requirements. This increase is due to higher Mach speeds without afterburner, increased acceleration capability, high maneuverability, and low observability. Trade-off analyses of numerous intake and nozzle designs should be carried out to meet all these needs. It is essential to calculate the losses caused by different intakes and nozzles at the conceptual design stage. Since changes made after design maturation delay the design schedule, and changes needed in a matured design cause high costs, it is crucial to represent intake and nozzle losses accurately while constructing the conceptual design of a fighter aircraft. This design exploration process needs to be automated using numerical tools to investigate all possible alternative design solutions simultaneously and efficiently.
Therefore, spillage drag, bypass drag, boundary layer losses due to intake design, boat-tail drag, nozzle base drag, and engine spacing losses due to nozzle integration are examined within the scope of this thesis. This study is divided into four main titles. The first section, "Introduction", summarizes previous studies on this topic and presents the classification of aircraft engines. Then the problems encountered while integrating the selected aircraft engine into the fighter aircraft are described under the "Problem Statement". In addition, the difficulties encountered in engine integration are divided into two zones. Problem areas are examined as inlet system and afterbody system. The second main topic, "Background on Propulsion," provides basic information about the propulsion system. Hence, the Brayton cycle is used in aviation engines. The working principle of aircraft engines is described under the Brayton Cycle subtitle. For the design of engines, numbers are used to standardize engine zone naming to present a common understanding. That is why the engine station numbers and the regions are shown before developing the methodology. The critical parameters used in engine performance comparisons are thrust, specific thrust and specific fuel consumption, and they are mathematically described. The Aerodynamics subtitle outlines the essential mathematical formulas to understand the additional drag forces caused by propulsion system integration. During the thesis, ideal gas and isentropic flow assumptions are made for the calculations. Definition of drag encountered in aircraft and engine integration are given because accurate definitions prevent double accounting in the calculation. Calculation results with developed algorithms and assumptions are compared with the previous studies of Boeing company in the validation subtitle. For comparison, a model is created to represent the J79 engine with NPSS. The engine's performance on the aircraft is calculated, and given definitions and algorithms add drag forces to the model. The results are converged to Boeing's data with a 5% error margin. After validation, developed algorithms are tested with 5th generation fighter aircraft F-22 Raptor to see how the validated approach would yield results in the design of next-generation fighter aircraft. Engine design parameters are selected, and the model is developed according to the intake, nozzle, and afterbody design of the F-22 aircraft. A model equivalent to the F-119-PW-100 turbofan engine is modeled with NPSS by using the design parameters of the engine. Additional drag forces calculated with the help of algorithms are included in the engine performance results because the model is produced uninstalled engine performance data. Thus, the net propulsive force is compared with the F-22 Raptor drag force Brandtl for 40000 ft. The results show that the F-22 can fly at an altitude of 40000 ft, with 1.6M, meeting the aircraft requirements. In the thesis, a 2D intake assumption is modeled for losses due to inlet geometry. The effects of the intake capture area, throat area, wedge angle, and duct losses on motor performance are included. However, the modeling does not include a bump intake structure similar to the intake of the F-35 aircraft losses due to 3D effects. CFD can model losses related to the 3D intake structure, and test results and thesis studies can be developed. The circular nozzle, nozzle outlet area, nozzle throat area, and nozzle maximum area are used for modeling. 
The movement of the nozzle blades is included in the model depending on the boattail angle and base area. The works of McDonald & P. Hughest are used as a reference to represent the 2D-sized nozzle. The method described in this thesis is one way of accounting for installation effects in supersonic aircraft. Additionally, the concept works for aircraft with conventional shock inlets or oblique shock inlets flying at speeds up to 2.5 Mach. The equation implementation in NPSS enables aircraft manufacturers to calculate the influence of installation effects on engine performance. The study reveals the methodology for calculating additional drag caused by an engine-aircraft integration in the conceptual design phase of next-generation fighter aircraft. In this way, the losses caused by the propulsion system can be calculated accurately by the developed approach in projects where aircraft and engine design have not yet matured. If presented, drag definitions are not included during conceptual design causing significant change needs at the design stage where aircraft design evolves. Making changes in the evolved design can bring enormous costs or extend the design calendar.
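The installed-performance bookkeeping described above amounts to subtracting the integration drag increments from the uninstalled engine thrust. The sketch below illustrates that accounting; every number and the particular breakdown into terms are placeholders, not results from the NPSS models in the thesis.

```python
# Installed (net propulsive) thrust bookkeeping sketch (illustrative values only).
# Uninstalled thrust comes from the cycle model; installation losses from the
# intake (spillage, bypass, boundary-layer bleed) and the afterbody (boat-tail,
# nozzle base, engine-spacing interference) are subtracted as drag increments.
uninstalled_net_thrust = 98.0       # kN, from the engine cycle deck (assumed)

intake_losses = {"spillage": 2.1, "bypass": 0.8, "boundary_layer": 0.6}           # kN
afterbody_losses = {"boattail": 1.4, "nozzle_base": 0.9, "engine_spacing": 0.5}   # kN

installation_drag = sum(intake_losses.values()) + sum(afterbody_losses.values())
net_propulsive_force = uninstalled_net_thrust - installation_drag

print(f"installation drag    = {installation_drag:.1f} kN")
print(f"net propulsive force = {net_propulsive_force:.1f} kN "
      f"({100 * installation_drag / uninstalled_net_thrust:.1f}% installed loss)")
```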
-
Öge: Experimental and numerical investigation of flapping airfoils interacting in various arrangements (Graduate School, 2021-12-10) Yılmaz, Saliha Banu ; Ünal, Mehmet Fevzi ; Şahin, Mehmet ; 521082102 ; Aeronautical and Astronautical Engineering
In the last decades, flapping wing aerodynamics has gained a great deal of interest. Inspired by insect flight, the utilization of multiple wings has become very popular in Micro Air Vehicle (MAV) and Micromechanical Flying Insect (MFI) design. Therefore, studies aiming to disclose the characteristics of the flow around interacting flapping airfoils have received particular attention. However, the majority of these studies were done using real, complex, three-dimensional parameters and geometries without making any assessment of the basic two-dimensional vortex dynamics. The aim of this study is to identify the baseline flow field characteristics in order to better understand flapping wing aerodynamics in nature and thus to provide a viewpoint for MAV and MFI design. The thesis contains numerical and experimental investigations of tandem (in-line) and biplane (side-by-side) arrangements of NACA0012 airfoils undergoing harmonic pure plunging motion, evaluated by means of vortex dynamics, thrust, and propulsive efficiency. Additionally, the "deflected wake phenomenon," an interesting and challenging benchmark problem for the validation of numerical algorithms for moving boundary problems, is investigated for a single airfoil because its flow characteristics accommodate strong transient effects at low Reynolds numbers. Throughout the study, the effects of reduced frequency, non-dimensional plunge amplitude, Reynolds number, and phase angle between airfoils are considered. The vorticity patterns are obtained both numerically and experimentally, whereas force statistics and propulsive efficiencies are evaluated only in the numerical simulations. In the experimental phase of the study, Particle Image Velocimetry (PIV), a non-intrusive optical measurement technique, is utilized. Experiments are conducted in the large-scale water channel in the Trisonic Laboratory of Istanbul Technical University. The motion of the wings is provided by two servo motors and their gear systems. To obtain a two-dimensional flow around the wings, they are placed between two large endplates, one of which has a slot to permit the connection between the wings and the servo motors. The flow is seeded with silver-coated hollow glass spheres of 10 µm diameter and illuminated with a dual-cavity Nd:YAG laser. To visualize a larger flow area, two 16-bit CCD cameras are used together, either in-line or side by side, depending on the positions of the wings. Dantec Dynamics' Dynamic Studio software is used for synchronization, image acquisition, image stitching, and cross-correlation purposes. Synchronization between the servo motors and the data acquisition system is done via LabVIEW software. In post-processing, an in-house Matlab code is used for masking of the airfoils. The CleanVec and NFILVB software are utilized for vector range validation and for filtering. In order to obtain mean velocity fields, the NWENSAV software is used. From the experimental velocity vector fields, two-dimensional vorticity fields are obtained in order to understand the flow field characteristics. The experimental results are also used as a benchmark for the numerical studies.
In the numerical phase of the study, an arbitrary Lagrangian-Eulerian (ALE) formulation based on an unstructured side-centered finite volume method is utilized in order to solve the incompressible Navier-Stokes equations. The velocities are defined at the midpoint of each edge where the pressure is defined at element centroid. The present arrangement of the primitive variables leads to a stable numerical scheme and it does not require any ad-hoc modifications in order to enhance pressure-velocity coupling. The most appealing feature of this primitive variable arrangement is the availability of very efficient multigrid solvers. The mesh motion algorithm is based on an algebraic method using the minimum distance function from the airfoil surface due to its numerical efficiency, although in some cases where large mesh deformation occurs Radial Basis Function (RBF) algorithm is used. To satisfy Discrete Geometric Conservation Law (DGCL), the convective term in the momentum equation is modified in order to take account the grid velocity. The numerical grid is created via Gambit and Cubit softwares with quadrilateral elements. Grid and time independencies are achieved by means of force statistics and vorticity fields. To make direct comparison Finite Time Lyapunov Exponent (FTLE) fields are calculated for some cases. FTLE fields characterize fluid flow by measuring the amount of stretching between neighbouring particles and the Lagrangian Coherent Structures (LCS) are computed as the locally maximum regions of the FTLE field. On the other hand, using a second-order Runge-Kutta method particle tracking algorithm is developed based on the integration of the massless particle trajectories on moving unstructured quadrilateral elements. Validation of results is performed by comparing the numerical results with the experimental results and also comparing with the corresponding cases in the literature. Accordingly, the results were substantially compatible within itself and also compatible with the literature. Highly accurate numerical results are obtained in order to investigate the flow pattern around a NACA0012 airfoil, undergoing pure harmonic plunging motion corresponding to the deflected wake phenomenon which are confirmed by means of spatial and temporal convergence. Present study successfully reproduces the details of the flow field which is not produced in literature such as fine vortical structures in opposite direction of the deflected wake and the vorticity structures close to airfoil surface which is dominated by complex interactions of LE with the plunging airfoil. Moreover, highly persistent transient effects and the calculations require two orders of magnitude larger duration than the heave period to reach the time-periodic state which is prohibitively expensive for the numerical simulations. This persistent transient effect is not reported before in the literature. The three-dimensional simulation also confirms highly persistent transient effects. In addition, the three-dimensional simulation indicates that the flow field is highly three-dimensional close to the airfoil leading edge. The three-dimensional structure of the flow field is not noted in the literature for the parameters used herein. In case of tandem arrangement of airfoils, the experimental results agree well with the numerical solutions. Major flow structures are substantially compatible in both numerical and experimental results at Reynolds number of 2,000. 
For the considered parameters, during upstroke and downstroke co-rotating leading and trailing end vortices merge at the trailing end of the forewing and interact with the downstream airfoil in either constructive or destructive way in trust production. Thrust production of forewing is maximum when airfoil moves from topmost position to mid position for the considered reduced frequencies at all configurations. It is hard to specify thrust-drag generation characteristics of the hindwing since it depends on not only plunge motion parameters, but also on interactions with vortices from the forewing. For the considered phase angles of 0°, 90°, 180° and 270°, in addition to stationary hind wing case, the force statistics are strongly altered due to the airfoil-wake interactions. In case of biplane arrangement of airfoils at phase angle of 180°, experimental and numerical vorticity results are also quite comparable. Regarding the parameters investigated, as the reduced frequency increases, vorticity structures get larger at constant plunge amplitude. However, vorticity structures do not change much after a certain reduced frequency value. As the plunge amplitude increases, the magnitude of vortices increases without depending on reduced frequency. Increasing plunge amplitude results in increased amount of fluid moving in the direction of motion in a constant period of time, commensurate with strong suction between airfoils as they move apart from each other. As a consequence of this suction force, energetic vortex pairs are formed which helps in thrust augmentation. For thrust production, among the phase angles considered, i.e. 0°, 90°, 180° and 270°, in addition to stationary lower wing case, the most efficient is φ=180°. Effect of three dimensionality is not observed at this phase angle for the considered parameters. Additionally, no remarkable difference is observed in general flow structure when Reynolds number is increased from 2,000 to 10,000.
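The FTLE fields mentioned in the numerical phase above are obtained by advecting a grid of tracer particles (here with a second-order Runge-Kutta step, as in the particle-tracking algorithm) and measuring the stretching of the resulting flow map. The sketch below shows the basic recipe on a steady analytical velocity field; the field, grid size, and integration horizon are illustrative assumptions, not the solver's actual data.

```python
import numpy as np

# Finite-Time Lyapunov Exponent (FTLE) sketch on a steady cellular velocity field.
def velocity(x, y):
    return -np.sin(np.pi * x) * np.cos(np.pi * y), np.cos(np.pi * x) * np.sin(np.pi * y)

def advect_rk2(x, y, dt, n_steps):
    # Heun (explicit second-order Runge-Kutta) particle advection.
    for _ in range(n_steps):
        u1, v1 = velocity(x, y)
        u2, v2 = velocity(x + dt * u1, y + dt * v1)
        x, y = x + 0.5 * dt * (u1 + u2), y + 0.5 * dt * (v1 + v2)
    return x, y

# Initial particle grid and integration horizon T (assumed values).
n, T, dt = 101, 2.0, 0.01
h = 1.0 / (n - 1)
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))   # axis 0 = y, axis 1 = x
xf, yf = advect_rk2(xs.copy(), ys.copy(), dt, int(T / dt))

# Flow-map gradient by finite differences; FTLE = ln(sqrt(max eig of F^T F)) / |T|.
dxf_dy, dxf_dx = np.gradient(xf, h, h)
dyf_dy, dyf_dx = np.gradient(yf, h, h)
ftle = np.zeros_like(xf)
for i in range(n):
    for j in range(n):
        F = np.array([[dxf_dx[i, j], dxf_dy[i, j]], [dyf_dx[i, j], dyf_dy[i, j]]])
        ftle[i, j] = np.log(np.sqrt(np.linalg.eigvalsh(F.T @ F).max())) / T
print(f"max FTLE over the domain: {ftle.max():.2f}")
```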
-
Öge: Numerical simulation of aircraft icing with an adaptive thermodynamic model considering ice accretion (Institute of Science and Technology, 2022) Siyahi, Hadi ; Baytaş, A. Cihat ; 754795 ; Department of Aeronautics and Astronautics Engineering
The icing phenomenon is one of the most undesirable events in aircraft operation, and it can be viewed from different perspectives. Flight safety is undoubtedly the biggest concern of designers today. Icing causes the malfunctioning or even failure of pressure and airspeed measurement devices and consequently makes the aircraft difficult to control. Icing on the rudder, ailerons, and elevators can make control of the aircraft impossible. During landing, icing on the cockpit window, along with possible failures of the landing gear, may cause major catastrophes. Besides, detached ice particles can cause serious mechanical damage to the aircraft when they collide with the body or sometimes with internal parts such as compressor blades. The other perspective is the degradation of aircraft performance, and consequently the increase in fuel consumption, caused by icing. Icing affects the aerodynamics of an airplane adversely and puts the aircraft in a situation far from what it was designed for. Therefore, it is necessary to study aircraft icing to provide a safer and more efficient flight. Since icing is of such importance, a precise analysis of this phenomenon should be performed. Tests in the wind tunnel and in flight are very expensive. In contrast, numerical-computational simulations can be a cost-effective way of studying aircraft icing. In the present study, a numerical-computational simulation of aircraft icing has been performed by writing a computer code in FORTRAN. The computational simulation of aircraft icing is a modular procedure consisting of grid generation, air solver, droplet solver, and ice accretion modules. First, the computational domain is generated via elliptic grid generation. Differential methods based on the solution of elliptic equations are commonly used to generate meshes for geometries with arbitrary boundaries. Elliptic equations are also utilized for unstructured grids. The most popular elliptic equation is the Poisson equation, which makes it possible to enforce smoothness, fine spacing, and orthogonality on the body surface by means of control terms. Then, the velocity and pressure distributions of the airflow around the wing are found, and the convective heat transfer coefficient on the body is calculated. The inviscid flow model was selected in our simulation because it needs less effort and time in comparison with Navier-Stokes codes. The two-dimensional, steady-state, inviscid, incompressible, irrotational (potential) flow model has been applied to solve the airflow.
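As a concrete illustration of the elliptic grid generation mentioned above, the sketch below relaxes an algebraically initialized structured grid with the transformed Laplace equations (the Poisson control terms set to zero, i.e. a Winslow-type smoother); the geometry, grid size, and iteration count are illustrative assumptions, not the thesis configuration.

```python
import numpy as np

# Elliptic (Winslow/Laplace) grid smoothing sketch: interior nodes of an algebraic
# structured grid are relaxed; boundary nodes stay fixed on the walls.
ni, nj = 41, 21
xi, eta = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")

# Algebraic initial grid between a wavy lower wall and a straight upper wall.
y_lower = 0.1 * np.sin(2 * np.pi * xi[:, 0])
x = xi.copy()
y = (1 - eta) * y_lower[:, None] + eta * 1.0

for _ in range(500):                       # Jacobi-style relaxation sweeps
    x_xi  = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1]);  y_xi  = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
    x_eta = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2]);  y_eta = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
    a = x_eta**2 + y_eta**2                # alpha
    b = x_xi * x_eta + y_xi * y_eta        # beta
    g = x_xi**2 + y_xi**2                  # gamma
    for f in (x, y):
        cross = 0.25 * (f[2:, 2:] - f[2:, :-2] - f[:-2, 2:] + f[:-2, :-2])   # f_xi_eta
        f[1:-1, 1:-1] = (a * (f[2:, 1:-1] + f[:-2, 1:-1])
                         + g * (f[1:-1, 2:] + f[1:-1, :-2])
                         - 2 * b * cross) / (2 * (a + g) + 1e-12)

print("smoothed grid extents:", x.min(), x.max(), round(y.min(), 3), round(y.max(), 3))
```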
-
ÖgeA high-order finite-volume solver for supersonic flows(Lisansüstü Eğitim Enstitüsü, 2022) Spinelli, Gregoria Gerardo ; Çelik, Bayram ; 721738 ; Uçak ve Uzay MühendisliğiNowadays, Computational Fluid Dynamics (CFD) is a powerful engineering tool used in various industries such as automotive, aerospace and nuclear power. More than ever, the growing computational power of modern computer systems allows for realistic modelling of the physics. Most open-source codes, however, offer a second-order approximation of the physical model in both space and time. The goal of this thesis is to extend this order of approximation to what is defined as high-order discretization in both space and time by developing a two-dimensional finite-volume solver. This is especially challenging when modeling supersonic flows, which are addressed in this study. To tackle this task, we employed the numerical methods described in the following. Curvilinear meshes are utilized since an accurate representation of the domain and its boundaries, i.e. the object under investigation, is required. High-order approximation in space is guaranteed by a Central Essentially Non-Oscillatory (CENO) scheme, which combines a piece-wise linear reconstruction and a k-exact reconstruction in regions with and without discontinuities, respectively. The use of multi-stage methods such as Runge-Kutta methods allows for a high-order approximation in time. The algorithm to evaluate convective fluxes is based on the family of Advection Upstream Splitting Method (AUSM) schemes, which use an upwind reconstruction. A central stencil is used to evaluate viscous fluxes instead. When high-order schemes are used, discontinuities induce numerical problems, such as oscillations in the solution. To avoid these oscillations, the CENO scheme reverts to a piece-wise linear reconstruction in regions with discontinuities. However, this introduces a loss of accuracy. The CENO algorithm is capable of confining this loss of accuracy to the cells closest to the discontinuity. In order to reduce this accuracy loss further, Adaptive Mesh Refinement (AMR) is used. This algorithm refines the mesh near the discontinuity, confining the loss of accuracy to a smaller portion of the domain. In this study, a combination of the CENO scheme and the AUSM schemes is used to model several problems in different compressibility regimes, with a focus on supersonic flows. The scope of this thesis is to analyze the capabilities and the limitations of the proposed combination. In comparison to traditional implementations, which can be found in the literature, our implementation does not impose a limit on the refinement ratio of neighboring cells while utilizing AMR. Due to the high computational expense of a high-order scheme in conjunction with AMR, our solver benefits from shared-memory parallelization. Another advantage over traditional implementations is that our solver requires one layer of ghost cells less for the transfer of information between adjacent blocks. The validation of the solver is performed in several steps. We assess the order of accuracy of the CENO scheme by interpolating a smooth function, in this case the spherical cosine function. Then we validate the algorithm to compute the inviscid fluxes by modeling a Sod shock tube. Finally, the Boundary Conditions (BCs) for the inviscid solver and its order of accuracy are validated by modeling a vortex convected in a supersonic uniform flow. The curvilinear mesh is validated by modeling the flow around a NACA0012 airfoil.
The computation of the viscous fluxes is validated by modeling a viscous boundary layer developing on a flat plate. The BCs for viscous flows and the curvilinear implementation are validated by modeling the flow around a cylinder and a NACA0012 airfoil. The AUSM schemes are tested for shock robustness by modeling an inviscid hypersonic cylinder at a Mach number of 20 and a viscous hypersonic cylinder at a Mach number of 8.03. Then, we validate our AMR implementation by modeling a two-dimensional Riemann problem. All the validation results agree well with either numerical or experimental results available in the literature. The performance of the code, in terms of the computational time required by the different orders of approximation and the parallel efficiency, is assessed. For the former, a supersonic vortex convection served as an example, while for the latter a two-dimensional Riemann problem was used. We obtained a linear speed-up up to 12 cores. The highest speed-up value obtained is 20 with 32 cores. Furthermore, the solver is used to model three different supersonic applications: the interaction between a vortex and a normal shock, the double Mach reflection, and the diffraction of a shock on a wedge. The first application involves a strong interaction between a vortex and a steady shock wave for two different vortex strengths. In both cases our results perfectly match the ones obtained by a Weighted Essentially Non-Oscillatory (WENO) scheme documented in the literature. Both schemes approximate the solution with the same order of accuracy in both time and space. The second application, the double Mach reflection, is a challenging problem for high-order solvers because the shock and its reflections interact strongly. For this application, all AUSM schemes under investigation fail to obtain a stable result. The main form of instability encountered is the Carbuncle phenomenon. Our implementation overcomes this problem by combining the AUSM+M scheme with the speed-of-sound formulation of the AUSM+up scheme. This combination is capable of modeling the problem without instabilities. Our results are in agreement with those obtained with a WENO scheme. Both the reference solutions and our results use the same order of accuracy in both time and space. Finally, the third example is the diffraction of a shock past a delta wedge. In this configuration the shock is diffracted and forms three main structures: two triple points, a vortex at the trailing edge of the wedge, and a reflected shock traveling upwards. Our results agree well with both numerical and experimental results available in the literature. Here, the formation of a vortex-let is observed along the vortex slip-line. This vorticity generation under inviscid flow conditions is studied, and we conclude that the stretching of vorticity due to compressibility is the reason. The same formation is observed when the angle of attack of the wedge is increased in the range of 0°-30°. In general, the AUSM+up2 scheme performed best in terms of accuracy for all problems tested here. However, for configurations in which the Carbuncle phenomenon may appear, the combination of the AUSM+M scheme with the speed-of-sound formula of the AUSM+up scheme is preferable for stability reasons. During our computations, we observe a small undershoot right behind shocks on curved boundaries. This is attributable to the curvilinear approximation of the boundaries, which is only second-order accurate.
Our experience shows that the smoothness indicator formula, in its original version, fails to label uniform flow regions as smooth. We solve the issue by introducing a threshold for the numerator of the formula. When the numerator is lower than the threshold, the cell is labeled as smooth. A threshold value higher than 10^-7 might force the solver to apply the high-order reconstruction across shocks, and therefore bypass the piece-wise linear reconstruction that prevents oscillations. We observe that the CENO scheme might cause unphysical states in both the inviscid and the viscous regime. By reconstructing the conservative variables instead of the primitive ones, we are able to prevent unphysical states for inviscid flows. For viscous flows, temporarily reverting to first-order reconstruction in the cells where the temperature is computed as negative prevents unphysical states. This technique is only required during the first iterations of the solver, when the flow is started impulsively. In this study the CENO, AUSM and AMR methods are combined and applied successfully to supersonic problems. When modeling supersonic flow with high-order accuracy in space, one should prefer the combination of the AUSM schemes and the CENO scheme. While the CENO scheme is simpler than the WENO scheme used for comparison, we show that it yields results of comparable accuracy. Although it was beyond the scope of this study, the AUSM schemes can be extended to real-gas modeling, which constitutes another advantage of this approach.
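The thresholding fix described above can be sketched as follows. The variable names and the form of the smoothness ratio are illustrative assumptions rather than the thesis' exact CENO indicator; the sketch only reproduces the reported decision logic, in which a numerator below roughly 10^-7 marks the cell as smooth.

```python
NUMERATOR_THRESHOLD = 1.0e-7   # reported upper bound; larger values risk forcing
                               # the high-order reconstruction across shocks

def select_reconstruction(numerator, denominator, smoothness_cutoff):
    """Choose between k-exact (high-order) and limited piece-wise linear
    reconstruction for one cell. 'numerator' and 'denominator' stand for the two
    parts of a CENO-type smoothness indicator; illustrative names only."""
    if numerator < NUMERATOR_THRESHOLD:
        # nearly uniform flow: the ratio below is ill-conditioned, so label smooth
        return "k-exact"
    smoothness = denominator / numerator          # large in smooth regions (assumed form)
    return "k-exact" if smoothness > smoothness_cutoff else "piecewise-linear"
```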
-
ÖgeA modified anfis system for aerial vehicles control(Lisansüstü Eğitim Enstitüsü, 2022) Öztürk, Muhammet ; Özkol, İbrahim ; 713564 ; Uçak ve Uzay MühendisliğiThis thesis presents fuzzy logic systems (FLS) and their control applications in aerial vehicles. In this context, firstly type-1 and secondly type-2 fuzzy logic systems are examined. Adaptive Neuro-Fuzzy Inference System (ANFIS) training models are examined, and new type-1 and type-2 models are developed and tested. The new approaches are used for control problems such as quadrotor control. A fuzzy logic system is a human-like structure that does not define any case precisely as 1 or 0; instead, it defines the case with membership functions. In the literature, there are many fuzzy logic applications such as data processing, estimation, control, and modeling. Different Fuzzy Inference Systems (FIS) have been proposed, such as Sugeno, Mamdani, Tsukamoto, and Şen. The Sugeno and Mamdani FIS are the most widely used fuzzy logic systems. Mamdani antecedent and consequent parameters are composed of membership functions; because of that, the Mamdani FIS needs a defuzzification step to produce a crisp output. Sugeno antecedent parameters are membership functions, but the consequent parameters are linear or constant, so the Sugeno FIS does not need a defuzzification step. The Sugeno FIS requires less computational load and is simpler than the Mamdani FIS, and it is therefore more widely used. Training of the Mamdani parameters is more complicated and needs more calculation than for the Sugeno FIS. The Mamdani ANFIS approaches in the literature are examined and a new Mamdani ANFIS model (MANFIS) is proposed. The training performance of the proposed MANFIS model is tested on a nonlinear function, and its control performance is tested on a DC motor dynamic model. Besides, the Şen FIS, which was used for the estimation of sunshine duration in 1998, is examined. The Şen FIS antecedent and consequent parameters are membership functions, as in the Mamdani FIS, and it needs a defuzzification step. However, because of the structure of the Şen defuzzification, the Şen FIS can be calculated with less computational load, and therefore a Şen ANFIS training model has been created. These three approaches are trained on a nonlinear function and used for online control. In this study, the neuro-fuzzy controller is used as an online controller. Neuro-fuzzy controllers consist of the simultaneous operation of two functions, namely fuzzy logic and ANFIS. The fuzzy logic function is the one that generates the control signal; it generates a control signal according to the controller inputs. The other function is the ANFIS function, which trains the parameters of the fuzzy logic function. Neuro-fuzzy controllers are intelligent controllers, independent of the model, that constantly adapt their parameters. For this reason, these controllers' parameter values change continuously according to the changes in the system. There are studies on different neuro-fuzzy control systems in the literature. Each approach is tested on a DC motor model, which is a single-input single-output system, and the neuro-fuzzy controllers' advantages and performances are examined. In this way, the approaches in the literature and the approaches added within the scope of the thesis are compared with each other. Selected neuro-fuzzy controllers are used in quadrotor control. Quadrotors have a two-stage controller structure.
In the first stage, position control is performed and the position control results are defined as angles. In the second stage, attitude control is performed using the calculated angle values. In this thesis, the neuro-fuzzy controller is shown to work very well in single-layer control structures, i.e., there was no overshoot and the settling time was very short. However, the quadrotor control results show that the neuro-fuzzy controller cannot give the desired performance in the two-layered control structure. Therefore, the feedback error learning control system, in which the fuzzy controller works together with conventional controllers, is examined. Fundamentally, in the feedback error learning structure there is an inverse dynamic model in parallel with a classical controller. The inverse dynamic model aims to increase performance by influencing the classical controller signal. In the literature, there are many papers about the structure of feedback error learning control, and different approaches have been proposed. In the structure used in this work, the fuzzy logic parameters are trained using ANFIS with the error as input. The fuzzy logic control signal is obtained as a result of training and is added to the conventional controller signal. This approach has been tested on models such as a DC motor and a quadrotor. It is seen that feedback error learning control with ANFIS increases the control performance. Antecedent and consequent parameters of type-1 fuzzy logic systems consist of precisely defined (certain) membership functions. A type-2 FLS is proposed to better represent uncertainties; because of that, type-2 fuzzy inference membership functions are designed to include uncertainties. The type-2 FLS is operationally difficult because of these uncertainties. In order to simplify type-2 FLS operations, the interval type-2 FLS has been proposed in the literature as a special case of the generalized type-2 FLS. Interval type-2 membership functions are designed as a two-dimensional projection of general type-2 membership functions and represent the area between two type-1 membership functions. The area between these two type-1 membership functions is called the Footprint of Uncertainty (FOU). This uncertainty also occurs in the weight values obtained from the antecedent membership functions. The consequent membership functions are also type-2, and it is not possible to perform the defuzzification step directly because of the uncertainty. Therefore, type reduction methods have been developed to reduce the type-2 FLS to a type-1 FLS. Type reduction methods try to find the highest and lowest values of the fuzzy logic model. Therefore, a switch point should be determined between the weights obtained from the antecedent membership functions. Type reduction methods find these switch points by iteration, and this process causes too much computation, so many different methods have been proposed to minimize this computational load. In 2018, an iteration-free method called the Direct Approach (DA) was proposed. This method performs the type reduction process faster than other, iterative methods. In the literature, studies on training the parameters of the type-2 FLS with techniques such as neural networks and genetic algorithms still continue. These studies are also used in interval type-2 fuzzy logic control systems. Interval type-2 ANFIS structures have been proposed in the literature, but they are not effective because of the uncertainties of interval type-2 membership functions.
FLS parameters to be trained by ANFIS should not contain uncertainties; however, a type-2 FLS inherently contains uncertainty. For this reason, the Karnik-Mendel algorithm, which is one of the type-reduction methods, is modified to apply ANFIS to the interval type-2 FLS. The modified Karnik-Mendel algorithm gives the same results as the original Karnik-Mendel algorithm, while also providing exact parameter values for use in ANFIS. Thus, ANFIS training of the interval type-2 FLS has been developed successfully and has been used for system control.
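For context, a minimal sketch of the standard (unmodified) Karnik-Mendel iteration for the right end-point of the type-reduced set is given below; the modified, ANFIS-compatible version developed in the thesis is not reproduced here, and the variable names are illustrative.

```python
def km_right_endpoint(y, w_lower, w_upper, tol=1e-9, max_iter=100):
    """Standard Karnik-Mendel iteration for the right end-point y_r of an interval
    type-2 fuzzy set. y holds the rule consequents sorted in ascending order;
    w_lower/w_upper are the lower/upper firing strengths (assumed positive).
    Illustrative sketch of the classical algorithm, not the thesis' modified one."""
    n = len(y)
    w = [(lo + up) / 2.0 for lo, up in zip(w_lower, w_upper)]   # start from mid weights
    y_r = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    for _ in range(max_iter):
        # switch point k: lower weights up to k, upper weights beyond it
        k = max(i for i in range(n - 1) if y[i] <= y_r) if y[0] <= y_r else 0
        w = [w_lower[i] if i <= k else w_upper[i] for i in range(n)]
        y_new = sum(w[i] * y[i] for i in range(n)) / sum(w)
        if abs(y_new - y_r) < tol:                               # switch point converged
            return y_new
        y_r = y_new
    return y_r

# The left end-point y_l is found analogously, with the weight assignment reversed
# (upper weights up to the switch point, lower weights beyond it).
```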
-
ÖgeHelikopter yer rezonansı kararsızlığının çözümü(Lisansüstü Eğitim Enstitüsü, 2022) Keser, Oğuzhan ; Türkmen, Halit Süleyman ; 736937 ; Uçak ve Uzay Mühendisliği Ana Bilim DalıIn this study, a ground resonance analysis of a selected example helicopter, the "Aérospatiale Gazelle", is performed. A centrifugal force arises when the center of gravity of the rotor moves away from the rotation axis because of the difference between the advancing and retreating blade speeds. Ground resonance occurs when the frequency of this force coincides with a fuselage frequency. This resonance is an instability problem and can lead to large amplitudes; in such a case it can cause severe damage to the main structure of the helicopter. In this thesis, previous work is first reviewed through a literature survey. The dominant studies in the literature are those proposing nonlinear damping. The resonance of coaxial twin-rotor systems, the effect of the landing gear on resonance, and ground resonance treated with nonlinear differential equations constitute some of this work. To grasp ground resonance better, vibration theory is first examined in a simple form for discrete systems under free vibration. The natural frequencies of the undamped and damped systems under free vibration are found by an eigenvalue solution. The response of the system under forced vibration is obtained mathematically. Then, in contrast to the discrete system, a beam example is considered as a continuous system, and the general partial differential equation of the continuous system is solved with respect to time and position. The displacement of the continuous system under forced vibration is also derived from the general differential equation; for an example beam problem, the frequency and displacement responses of the system are calculated analytically and verified with ABAQUS (a finite element model). Subsequently, the eigenvalue solution of the eight-degree-of-freedom dynamic equation (six fuselage frequencies, two rotor frequencies) that Nahas constructed from Coleman's equations is presented. In this dynamic equation, the mass matrix consists of the helicopter mass and the inertia terms about the three axes (x-y-z). The stiffness matrix consists of the landing gear terms, since the landing gear is more flexible than the fuselage. The damping matrix is computed, once the instability appears, from the fuselage frequency, the rotor speed, and the ratio of the blade mass to the helicopter mass. A surface drawing of the external structure of the selected Gazelle helicopter was obtained, and using this surface a finite element model of the helicopter's external geometry was created. Then floors, a pressure wall, longerons and frames were added to this model to make it resemble a real helicopter model. Since the total take-off weight of the helicopter is known, the engine weight, pilot weights and the other weights in the cockpit were placed inside the helicopter as concentrated loads, aiming at a weight distribution and a center of gravity close to those of the real helicopter. The aim here is to represent the helicopter's inertia values and center-of-gravity location as realistically as possible. The blade parameters were taken from studies in the literature.
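The stability assessment summarized above reduces to a quadratic eigenvalue problem of the standard damped multi-degree-of-freedom form; the sketch below uses generic notation rather than the thesis' specific eight-degree-of-freedom matrices:

\[
\mathbf{M}\ddot{\mathbf{q}} + \mathbf{C}\dot{\mathbf{q}} + \mathbf{K}\mathbf{q} = \mathbf{0},
\qquad
\mathbf{q}(t) = \boldsymbol{\phi}\,e^{\lambda t}
\;\Rightarrow\;
\bigl(\lambda^{2}\mathbf{M} + \lambda\mathbf{C} + \mathbf{K}\bigr)\boldsymbol{\phi} = \mathbf{0},
\]

and ground resonance is indicated wherever an eigenvalue acquires a positive real part, Re(λ) > 0, within the rotor-speed range of interest.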
-
ÖgeDeğişken malzemeli ve noktasal kütle taşıyan kirişlerin termal etki altındaki titreşim davranışının incelenmesi(Lisansüstü Eğitim Enstitüsü, 2022) Kıroğlu, İbrahim ; Kaya, Metin Orhan ; 769232 ; Uçak ve Uzay Mühendisliği Bilim DalıBeam structures are used very widely in many sectors, primarily aviation, as well as automotive and construction. In general, the length of beam structures, which carry axial and transverse loads, is much larger than their cross-sectional dimensions. Various methods and theories exist for the analysis of these widely used beam structures. Within the scope of this study, the Euler-Bernoulli and Timoshenko beam theories are examined, and the vibration equations are derived in order to investigate the vibration behavior of beams. The Differential Transform Method (DTM), a widespread and practical solution technique, is used to solve the vibration equations, which consist of a set of differential equations, and the results are compared with the analytical solution. A MATLAB code is developed for the Differential Transform Method, and the solutions are obtained with the help of this code. In addition to these theories, results are also obtained with the increasingly common Finite Element Method (FEM) and compared with the analytical results. The ABAQUS package is used for the Finite Element Method, and the beam models are built with this software. Aircraft structures consist of very light components with high strength and fatigue resistance. In this context, various metallic and composite materials are selected as beam materials to be examined in the thesis. To determine the elastic characteristics of the composite materials, micromechanical and macromechanical analyses of the laminae are carried out using composite theory. The effect of temperature-related phenomena on structures is a very broad field of study. A change in temperature can cause a large difference in the vibration behavior of a beam. The dynamic behavior of such structures changes with temperature, depending on the thermal expansion of the structure and the material properties. In this thesis, in order to examine the effect of temperature change on the vibration behavior of beams, a temperature term is added to the derived vibration equation and the solutions are repeated. Results are presented comparatively for five selected temperature variations.
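As an illustration of the Differential Transform Method mentioned above, consider the free-vibration equation of a uniform Euler-Bernoulli beam; the temperature term, the point mass and the material variation studied in the thesis are omitted here, so this is only a generic textbook sketch. With the transform W(k) = (1/k!) d^k w/dx^k evaluated at the expansion point, the governing equation becomes an algebraic recurrence:

\[
EI\,\frac{d^{4}w}{dx^{4}} - \rho A\,\omega^{2}\,w = 0
\;\;\Longrightarrow\;\;
W(k+4) = \frac{\rho A\,\omega^{2}}{EI\,(k+1)(k+2)(k+3)(k+4)}\,W(k),
\]

so the mode shape is built term by term from the conditions at one end of the beam, and the natural frequencies ω follow from enforcing the boundary conditions at the other end on the truncated series.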
-
ÖgeAnalytical investigation of quasi-aeroservoelastic behaviour of an aircraft spoiler(Graduate School, 2022) Kurtiş, Yiğit ; Mecitoğlu, Zahit ; Muğan, Ata ; 777765 ; Aeronautics and Astronautics Engineering ProgramThe application of science and mathematics to solve problems is called engineering. In most engineering processes, accuracy is directly dependent on cost, which can be defined as a function of time and money. In problem-solving processes, many assumptions are made at the expense of accuracy in order to reduce cost and find more solutions in a short time. Reducing solution time makes it possible to enhance problem-solving capability by increasing the number of ways to solve a problem, finding different sources of problems, or optimizing solution methods. In the end, the exact solution may not be reached, but more related problems can be solved with approximate solutions in limited time. With advancing technology in the aviation industry, accurate designs are more important than before due to the desire for better performance. In order to increase accuracy, research and development studies such as analytical formulations and tests are performed. Owing to the high cost and long duration of test operations, analytical solutions are preferred as support where possible. Especially for aircraft design, due to safety considerations and the aim for lightweight designs, designers have to balance time, weight and cost without any penalty on safety. In this situation, analytical solutions help to reduce the solution time for lightweight designs and create extra time for optimization studies. In this study, the behavior of spoiler structures is investigated for a desired deflection angle under external loads by means of analytical solutions. A spoiler is a control surface that can alter the drag and lift of an aircraft. Spoiler structures have been implemented on aircraft in order to improve control, especially while rolling, landing and braking. One of the main objectives of a spoiler structure is to increase drag for landing and braking. Additionally, spoilers can be used to increase the roll rate of acrobatic or trainer aircraft. Under aerodynamic load, as all structures do, spoiler structures deform. This affects the spoiler deflection mechanism because the mechanism attachment points move when the spoiler deforms. In this case, the spoiler rotates back towards its original position, and this back rotation angle usually cannot be accounted for in mechanism design. This condition diminishes the effectiveness of the spoiler surface, which means reduced aircraft performance. In this thesis, an analytical formulation study is performed in order to predict the back rotation angles of spoiler structures and gain the ability to design mechanisms for more suitable deflection angles of spoilers under aerodynamic loads. Result curves are created by curve fitting in order to monitor and compare the behavior of both the analytical analyses and the finite element analyses. Error functions are defined and calculated to determine the difference in trends between the analytical and finite element analyses under changing variables. For realistic deflection angles, the aim of this study is to achieve accurate analytical results with an error percentage below ±15% for back rotation angles and ±2% for the final deflection angle compared to finite element analyses. In the introduction section, engineering approaches for development studies are explained.
The importance of accuracy for engineering applications is stated with the support of relations between accuracy and other engineering concerns. These concerns can be expressed in terms of time, cost and other issues, such as health, ethics and safety. The scope and purpose of the thesis are also defined in this section. In the literature review section, spoiler structures and their functions on aircraft are described. Dimensions are shown with examples and figures from the aircraft industry. Grid-stiffened spoiler concepts are explained in addition to commonly used structural architectures, such as composite and metal built-up structures.
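As a simple illustration of the acceptance criteria stated above, the deviation of the analytical prediction from the finite element reference can be expressed as a signed percentage and checked against the ±15% (back rotation angle) and ±2% (final deflection angle) bounds; the helper names below are illustrative, not taken from the thesis.

```python
def percent_error(analytical, fem):
    """Signed percentage deviation of an analytical prediction from the FEM reference."""
    return 100.0 * (analytical - fem) / fem

def within_targets(back_rot_ana, back_rot_fem, defl_ana, defl_fem):
    """Check the stated accuracy targets: |error| below 15% for the back rotation
    angle and below 2% for the final deflection angle (illustrative helper)."""
    return (abs(percent_error(back_rot_ana, back_rot_fem)) < 15.0
            and abs(percent_error(defl_ana, defl_fem)) < 2.0)
```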
-
ÖgeExperimental investigation of underexpanded transverse jet interaction with supersonic crossflow(Graduate School, 2022) Malkoçoğlu, Utkun Erinç ; Yıldırım Çetiner, Okşan ; 732995 ; Aeronautical and Astronautical Engineering ProgrammeTransverse jet interaction with crossflow is one of the most canonical and most studied flow phenomena in fluid dynamics. It is possible to come across this kind of flow even in nature and daily life; smoke blowing out of a chimney in windy weather is a typical example. Although different flow structures are observed, this interaction is of interest in both subsonic and supersonic crossflows. With regard to the aerospace discipline, this particular flow event becomes prominent in high-speed applications. Two main research areas comprise most of the studies in this context: lateral thrust vectoring of supersonic missiles without any control surface, and effective injection for fuel mixing in the combustion chamber of supersonic combustion ramjets, which have a key role in modern aerospace systems. Even minor changes in the incidence angles of supersonic missile control surfaces can lead to complex flow structures and prevent effective maneuverability. Lateral control with jets injected into the supersonic crossflow is advantageous on account of a much shorter response time in comparison with conventional control surfaces. On the other hand, maximum mixing of the fuel with the flow in the combustion chamber is a must for scramjet engines. The flow structures, their stability and their penetration into the flow domain are crucial. Furthermore, pressure gradients, and thus momentum losses and their minimization in the downstream region of the jet, should be evaluated. Understanding jet interaction with supersonic crossflow will contribute to competitive new-generation aerospace solutions. As a result of the interaction, various zones occur in both the upstream and downstream regions of the jet. The most distinct one is the bow shock, which occurs at a certain upstream distance from the jet exit. The high-pressure jet behaves just like an obstacle against the crossflow and forces it to go around. Since jet mixing evolves with increasing distance from the surface, its dominance and resistance against the crossflow diminish. As a result, the bow shock is bent towards the surface. The bow shock induces flow separation through an adverse pressure gradient; therefore, a recirculation zone and, more importantly, a horseshoe vortex are observed. On the other hand, the jet accelerates by expansion in the vicinity of the injection surface. It is then surrounded by a barrel shock, across which compression occurs. As a result, the supersonic core of the jet terminates in a Mach disk which is normal to the trajectory of the jet. Downstream of the Mach disk, vortical events take place. The most characteristic one is a counter-rotating vortex pair, which enlarges as the distance from the jet exit increases. When surface activities are inspected, a V-shaped separation zone draws attention. Towards the streamwise symmetry axis of the jet, reattachment occurs. In the streamwise direction, this zone terminates and reflection shocks then appear. Briefly, these are the flow structures resulting from jet interaction with a supersonic crossflow.
-
ÖgeExperimental and numerical studies on low velocity impact behavior of Glare panels(Graduate School, 2022) Mazı, Oğuzhan ; Doğan, Vedat Ziya ; 775917 ; Aeronautical and Astronautical Engineering ProgrammeThe aerospace industry is always striving to improve aircraft efficiency and strength capacities. New materials are constantly being researched in order to produce more durable and lighter structural parts. Glare materials, which are obtained by laying glass fiber reinforced resin layers in certain directions between thin aluminum sheets, are very promising, especially in terms of fatigue and impact damage resistance. Glare, which was used in the fuselage panels of the Airbus A380 aircraft, has provided many advantages in many respects. It is known that aircraft are subjected to many low-velocity impact damages during their manufacturing and service life, such as tool drops and impacts of foreign objects on the runway. These damages are also a design criterion to be considered during aircraft design. In this thesis, low velocity impact tests were performed on test specimens made of Glare 4A-2/1-0.3 material in accordance with the standards. Calibration tests were performed to determine the critical damage level, and then verification tests were performed to examine the critical energy level. At the same time, a numerical model was prepared with the finite element method in the Abaqus/Explicit program to verify the tests. In the model, three-dimensional solid elements were used and the interlaminar behavior was modeled using cohesive surfaces. The material behavior in Abaqus/Explicit was implemented with the help of a VUMAT code, and the damage criterion for the composite layers was embedded in this code. The results of the simulation studies were compared with the results of the experimental studies, and consistent results were observed within certain error rates. After the numerical results were verified, the finite element models were updated and the effects of various parameters such as plate thickness, energy level, metal thickness and impact angle on low velocity impact damage were investigated. As a result of the studies carried out, the parameters examined were evaluated and preliminary evaluations were made regarding the use of Glare 4A-2/1-0.3 material in aircraft structures in terms of low-velocity impact resistance. Considering low velocity impact damage, it was concluded that Glare 4A materials can be evaluated, in addition to traditional metallic structures and composite structures, at the material selection stage of aircraft structural design. The studies concluded that increasing the laminate thickness results in lighter structures than increasing the outer aluminum thickness. Moreover, considering oblique impact conditions, it was seen that dent depth and panel failure are proportional to the impact perpendicularity. Finally, it was stated that there are many research areas that need to be examined regarding Glare materials, and some suggestions for future research and studies were given.
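The abstract does not name the damage criterion embedded in the VUMAT, and the actual subroutine is written in Fortran; purely as an illustration of what an in-plane ply failure check of this kind can look like, a Hashin-type tension criterion is sketched below in Python. It is an assumed example, not the criterion used in the thesis.

```python
def hashin_tension(sigma11, sigma22, sigma12, Xt, Yt, S12):
    """Hashin-type failure indices for fibre tension and matrix tension of a
    unidirectional ply under plane stress; values >= 1.0 indicate failure onset.
    Xt, Yt, S12 are the longitudinal/transverse tensile and in-plane shear strengths.
    Illustrative only; the thesis' VUMAT may use a different criterion."""
    fibre = (sigma11 / Xt) ** 2 + (sigma12 / S12) ** 2 if sigma11 > 0.0 else 0.0
    matrix = (sigma22 / Yt) ** 2 + (sigma12 / S12) ** 2 if sigma22 > 0.0 else 0.0
    return fibre, matrix
```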
-
ÖgeMechanical response of the carbon fiber reinforced polymer composite sandwich structures with pyramidal lattice core(Graduate School, 2022) Önal, Gürkan ; Mecitoğlu, Zahit ; 732994 ; Aeronautics and Astronautics Engineering ProgrammeComposite materials have been widely used for many years in various industries. With technological developments, fiber-reinforced composite materials such as carbon, glass and aramid fiber have been introduced. They can also be classified based on the matrix type or the reinforcement type. Matrix-based classification distinguishes ceramic, organic and metal matrix composites, while reinforcement-based classification covers fiber-reinforced, particulate and structural composite materials. It is also well known that sandwich structures are formed by a core and upper and lower skins (also known as facings or face sheets). The core of a sandwich structure is conventionally chosen as honeycomb or foam. Sandwich structures with foam or honeycomb cores are remarkably common structural composites, which has led to plenty of research in the literature. In the context of this thesis, a special group of structural composites other than foam- or honeycomb-core sandwiches has been investigated: carbon fiber reinforced polymer composite sandwich structures with a pyramidal lattice (CPL) core. Along with the demand for high stiffness- or strength-to-weight ratios, the need arises to replace conventional cores with alternative ones. Promising candidates include the Kagomé, X-type, Y-type, V-type, tetrahedral, diamond textile, diamond collinear, square collinear and pyramidal cores. The CPL core has been examined by a number of researchers from different perspectives, such as compressive behaviour under quasi-static loading, shear behaviour, bending behaviour, enhanced analytical models, improved manufacturing methods, hierarchical CPL cores, node design and appropriate failure criteria. Even so, the work related to the CPL core in the literature is not extensive. Like the other core types, the CPL core can be characterized by its relative density. This parameter is basically defined as the ratio of the volume occupied by the material within a cell to the volume of the cell. It also determines the failure mode of the CPL core. For instance, a CPL core with lower relative density tends to fail by Euler buckling in the case of compression loading. However, delamination failure is possible if the CPL core has higher relative density. Within the scope of the current thesis, a relative density formulation has been derived. This is an exact definition, whereas approximate formulations have been introduced in the literature. Moreover, it is noted that the CPL cores studied in the present thesis have square strut cross-sections, which is accounted for by the exact definition of relative density. Relative density is also a parameter that allows any type of core to be compared with another. Subsequently, two different mechanical behaviours of sandwich structures with the CPL core have been studied in this work: out-of-plane compression and flexure-based shear. For each behaviour, the specimens have been designed so as to have two different relative densities, 2.863% and 0.725%. The former relative density stands for Design 1, while the latter represents Design 2.
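For orientation, the approximate relative density expression that is widely quoted in the lattice-core literature for a pyramidal core built from struts of square cross-section t × t, length l and inclination angle ω (node volume neglected) is reproduced below; it is not the exact derivation obtained in the thesis:

\[
\bar{\rho} \;\approx\; \frac{2\,t^{2}}{l^{2}\cos^{2}\omega\,\sin\omega},
\]

and the two designs studied here correspond to relative densities of 2.863% and 0.725%, obtained in the thesis from the exact definition rather than from this approximation.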
-
ÖgeDesign and optimization of two stage launch vehicles with the same liquid propellant rocket engines in both stages(Graduate School, 2022) Özçelik, Kubilay ; Aslan, Ali Rüstem ; 714559 ; Aeronautical and Astronautical Engineering ProgrammeSpace exploration is an important technological catalyst for humanity. While researching space and its practical uses, it accelerates the development of new technologies. Reaching orbit is a difficult and complex problem. To reach the speed required to stay in orbit, launch vehicles need very high propellant mass fractions and high-performing propulsion systems. Reaching the performance limits needed for orbit requires advanced technology and expensive materials. Because of this, it is very expensive to put a payload into orbit. In recent years, private space companies have been entering the launch vehicle market. These privately funded companies try to reduce prices in order to compete with existing launch service companies for inserting payloads into orbit. To do so, they reuse the same liquid rocket engines in all stages to cut development and manufacturing costs. Most of the private launch vehicle companies design only one rocket engine and use it in both their 1st and 2nd stages. The 1st-stage engines are bundled together and use a sea-level-optimized nozzle, while the same engine is used in the 2nd stage with a vacuum-optimized nozzle. Doing so, they reduce the development costs, complexity and manufacturing costs of their launch vehicle. The new trend is also to design the launch vehicle to be as reusable as possible. This allows for cost reductions that make the launch vehicle more competitive in the market. Some companies that use this approach are SpaceX, RocketLab USA and Relativity Space. In this thesis, a launch vehicle optimization tool is developed specifically for two-stage-to-orbit vehicles that use the same liquid propellant rocket engines in all stages with only minor modifications. In the 1st stage, many sea-level-optimized engines are bundled together, and in the 2nd stage a single vacuum-optimized engine is used. The tool can design launch vehicles for different propellant combinations and liquid rocket engine cycles. Most launch vehicle design methods estimate the stage properties and try to distribute the mass of the stages based on these estimations. After a viable solution is found, it is designed in detail, and the assumed performance of the stages often cannot be achieved. This causes a resource-draining iterative design loop. To solve this problem, in this thesis the liquid propellant engines and stages are designed in detail. Firstly, the liquid propellant rocket engine is designed in detail, and after that the stage is created by adding tanks and a pressurization system. The stage design tool is implemented such that it can design stages with bundled engines for the 1st stage and modify the same engine as vacuum-optimized for the 2nd stage to create the desired launch vehicle. The stage design tool is connected to an optimization algorithm and a vehicle-level design tool to create the launch vehicle design tool required for this thesis. One of the most important design parameters for a launch vehicle is the required delta V for the selected mission. However, without simulating the launch trajectory, making a good estimate of the required delta V is difficult. Therefore, to validate the designed launch vehicles, an orbital trajectory simulation code is developed in MATLAB.
Using this simulator, the trajectories of the designed launch vehicles are simulated; if successful, the designs are validated, and if unsuccessful, the design parameters are updated accordingly in the launch vehicle design code and the process is repeated to find well-performing launch vehicles. Designing a launch vehicle is a complex multi-disciplinary and multi-objective problem. To design the launch vehicle rapidly, the most important parameters are selected as the payload capacity, the vehicle delta V capacity and the T/W ratio at liftoff. The payload capacity and delta V capacity mostly influence the mass of the launch vehicle, whereas the T/W ratio at liftoff determines the engine thrust and the orbital launch performance of the vehicle. The optimization algorithm is developed such that it searches for the launch vehicle with minimum liftoff mass while ensuring the design input parameters are met with minimal error.
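As a back-of-the-envelope illustration of the sizing quantities named above (payload, delta V capacity and liftoff T/W), the ideal delta V of a two-stage vehicle follows from the Tsiolkovsky rocket equation; the sketch below is not the thesis tool, and every number in it is a made-up placeholder.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_dv(isp_s, m0_kg, mf_kg):
    """Ideal delta V of one stage from the Tsiolkovsky rocket equation."""
    return G0 * isp_s * math.log(m0_kg / mf_kg)

# Illustrative two-stage vehicle (all masses and Isp values are placeholders):
payload = 200.0                                        # kg
s2_prop, s2_dry, isp_vac = 4000.0, 600.0, 340.0        # 2nd stage, vacuum-optimized nozzle
s1_prop, s1_dry, isp_sl = 28000.0, 2600.0, 300.0       # 1st stage, bundled sea-level engines

m0_s2 = payload + s2_dry + s2_prop                     # 2nd-stage ignition mass
m0 = m0_s2 + s1_dry + s1_prop                          # liftoff mass
dv = (stage_dv(isp_sl, m0, m0 - s1_prop)               # 1st-stage burn
      + stage_dv(isp_vac, m0_s2, m0_s2 - s2_prop))     # 2nd-stage burn

thrust_liftoff = 1.3 * m0 * G0                         # e.g. a liftoff T/W of 1.3
print(f"liftoff mass {m0:.0f} kg, ideal delta V {dv:.0f} m/s, "
      f"required liftoff thrust {thrust_liftoff / 1e3:.0f} kN")
```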
-
ÖgeFused filament fabrication of PETG :Investigation of the mechanical properties through the parameter optimization(Graduate School, 2022) Parlak, Buket ; Cebeci, Hülya ; 736752 ; Uçak ve Uzay Mühendisliği Bilim DalıAdditive manufacturing methods are in increasing demand every year due to their low cost, ability to produce complex parts, rapid prototyping possibilities and accessibility, and they can be preferred over traditional methods (casting, forging). Additive manufacturing is used effectively in many fields, especially in aviation. In addition, it is reported in the literature that wax patterns used in precision casting are obtained by the Fused Filament Fabrication (FFF) method thanks to its rapid prototyping capability. However, the choice of polymer used here is very important. High strength, a low thermal expansion coefficient, and no shrinkage or warping during production are the desired properties. There are wax models produced with PLA and ABS in the literature. Parts produced with FFF are used not only in prototyping but also in unmanned aerial vehicles. Additive manufacturing methods are classified according to the type of material as metal-, ceramic- and polymer-based. According to the ISO/ASTM 52900:2015 standard, the material types are further divided into sub-headings. The basic working principle of additive manufacturing is that the feedstock (a powder or a polymer filament) is melted with the help of a heat source and deposited on a build table at the desired dimensions. First of all, the CAD (Computer Aided Design) model of the part is created in the .STL (Standard Triangle Language) file format; it is combined with the parameter information to be used in the printer with the help of a slicer, and the g-code is created. This generated g-code is uploaded to the printer and the process is started. Parameter selections play an important role in determining the mechanical properties of the polymer parts. The most important parameters used in the FFF method are as follows: the infill ratio, the layer height, the layer thickness, the width of the raster, the infill pattern, the air gap ratio, the raster orientation, the build direction, the printer speed, the printer temperature and the nozzle diameter. The choice of polymer type is another important parameter. In this study, PETG polymer was used because of its high resistance to chemicals, fatigue resistance, high toughness, low shrinkage during production compared to other polymers, and ease of production. This study aimed to examine the effects of the negative air gap, the selected infill pattern and tensile sample standard, and the annealing heat treatment temperature and time on the tensile properties (Ultimate Tensile Strength (UTS) and Elastic Modulus (E)). For the first parameter set, 60 samples were produced. 20 of these samples had concentric infill and were produced according to the ASTM D638 Type IV standard. Another 20 samples were also concentric, produced according to ASTM D3039. To examine the infill pattern difference, the last 20 samples were produced with rectilinear infill in accordance with the ASTM D3039 standard. In each group, 5 samples were produced at each negative air gap level of 0%, 10%, 15% and 20%. As a result of the comparison of the infill patterns, it was seen that the concentric infill resulted in 29,65%-50,54% higher results in E and 33,06%-47,88% higher results in UTS than the rectilinear infill.
Another comparison was made between the samples produced according to ASTM D638 Type IV with concentric infill and 0%, 10%, 15% and 20% negative air gap, and the samples produced according to ASTM D3039. According to this comparison, the concentric infill samples produced according to ASTM D638 Type IV showed up to 16,33% higher E and 20,69%-48,16% higher UTS than those produced according to ASTM D3039. The effect of an increased negative air gap was also investigated in both the concentric (ASTM D638 Type IV and ASTM D3039) and rectilinear (ASTM D3039) samples. In all comparisons, the samples with a 0% negative air gap were compared with the samples produced with 10%, 15% and 20% air gaps. As the negative air gap ratio increased, the ASTM D638 Type IV concentric samples showed an increase of 11,38%-31,54% in E and 37,18%-63,89% in UTS. The effect of the air gap was found to be negative in the concentric infill produced according to the ASTM D3039 standard (as the gap increased, a decrease between 2,84% and 10,20% in E and a decrease between 4,9% and 8,14% in UTS were observed). The effect of the air gap was found to be mixed in the rectilinear infill produced according to the ASTM D3039 standard (when the negative air gap increased, there was a decrease between 2,51% and 32,44% in E and an increase between 6,24% and 17,45% in UTS). According to all these results, the parameter set that gave the best results was the sample with ASTM D638 Type IV, concentric infill and 15% negative air gap (E: 1.87 GPa and UTS: 41,84 MPa). Another aim of this study was to examine post-process effects. To examine the effects of annealing heat treatment, 20 samples with ASTM D638 Type IV geometry, concentric infill and 15% negative air gap were produced. This study was planned for two annealing temperatures and two selected holding times. The selection of the tensile test specimen is still a controversial issue, and the effect of the two standards was examined and discussed in this study. The importance of the effect of heat treatment temperature and time on the mechanical properties was also emphasized. The effects of two temperatures, 80°C and 55°C, were investigated. At these temperatures, each sample was held in the furnace for either 1 hour or 4 hours. Samples heat treated at 80°C were first compared with those heat treated at 55°C. The tensile test results of the samples annealed at 55°C for 1 hour are 17,94% higher in E and 13,73% higher in UTS than those of the samples kept at 80°C for 1 hour. In the same way, the tensile test results of the samples heat treated at 55°C for 4 hours are 17,10% higher in E and 13,67% higher in UTS than those at 80°C. In order to see the effect of time, the temperature was kept constant and the samples were held for 1 hour and 4 hours. According to the results obtained, there was no large increase in E and UTS as the holding time increased. All results were compared with the non-heat-treated concentric specimens produced with 15% negative air gap according to ASTM D638 Type IV. As a result of this comparison, while a 14,32% decrease was observed in E in the samples kept at 80°C for 1 hour, the decrease was 2,16% in UTS. In the samples kept at 55°C for 1 hour, E increased by up to 4,42% and UTS by up to 13,41%. These results were also compared with the data in the literature and found to be compatible with it.
In the samples processed at 80°C for 4 hours, a decrease of 13,77% was observed in E, while the decrease was 0,39% in UTS. In the samples processed at 55°C for 4 hours, E increased by 4,01% and UTS by 15,38%. In the literature, a 7% increase in E and a 6% increase in UTS were obtained for line infill samples produced according to ASTM D638 Type I with 100% infill and held at 55°C for 1 hour. The reason for the difference between these results and the 100% infill samples in the literature is the effect of the negative air gap. The mechanical properties of samples produced with FFF are always lower than those obtained by injection molding, due to molding defects (like voids) and anisotropy. It is known that, due to the nature of the FFF method, there are many voids inside the structure even in parts printed with a 100% infill ratio. All the results obtained in this thesis were also compared with the mechanical properties obtained by injection molding. As a result of this comparison, it was observed that the largest difference was in the rectilinear samples (57,76%-44,06% in E, 68,96%-61,21% in UTS). In the concentric samples produced according to ASTM D3039, this difference was between 23,31% and 14,60% in E and 36,91%-42,05% in UTS. For the samples produced according to ASTM D638 Type IV, E was 10,78%-32,18% lower and UTS 14,14%-47,6% lower compared to the injection molded samples. It was determined that the results of the injection molded samples were approached most closely with the annealing heat treatment at 55°C. The difference was recorded as 6,84% in E and 5,09% in UTS for 1 hour, and 7,21% in E and 3,44% in UTS for 4 hours. The novel aspect of this study is approaching the injection molded part results through appropriate parameter optimization. After the tensile tests, the fracture surfaces of the samples were also examined, and it was observed that 2 of the 60 samples fractured in the GAT (G: failure type, A: failure area, T: failure location) rupture mode. It was observed that the 20 samples produced according to ASTM D638 Type IV, except for 2 of them, broke within the inner narrow section. In addition, PETG proved to be an advantageous polymer; no delamination or shrinkage problems were encountered, in contrast to other polymers. In this study, it was seen that the infill pattern, the tensile specimen standard, the negative air gap, the heat treatment time and the selected temperature have significant effects on the mechanical properties.