LEE Aeronautics and Astronautics Engineering Graduate Program

Item: Measuring the service quality of technology development zones: A Turkey-wide application (Institute of Science and Technology, 2020) Özyurt, Mehmet Akif ; Özkol, İbrahim ; 656880 ; Department of Aeronautics and Astronautics Engineering

Products based on knowledge production and its technological output have left their mark on our era, which many thinkers call the "Information Age". In this age, the power of countries at the center of high-technology production stems not from the size of their land or capital, but from the size of their well-educated workforce and from channeling that workforce into high-technology production. Countries with highly educated populations also exhibit high production quality and output. The pace of scientific and technological development has accelerated sharply in our century: most of the advances to date have occurred within the last 30 years, and this pace keeps compounding. It is therefore reasonable to expect that even the near future will bring a world far more advanced, scientifically and technologically, than today's. High-technology production has become the decisive factor in the race for competitive advantage. Consequently, strengthening competitiveness no longer depends merely on cutting costs or on responding quickly to consumer preferences and demands, but on continuous improvement, innovation, and invention. Those who succeed in turning technological findings into marketable products or services, new production or distribution methods, or new service mechanisms, that is, in technological innovation, now dominate world markets.
Regions that host companies and institutions where such R&D-based technological advances and innovations are produced, where well-educated personnel are employed, and where high value-added products are made are called "technoparks", or, under the relevant Turkish law, "Technology Development Zones" (TDZs). Conceptually, technoparks are instruments that help establish and diffuse the flow of science and technology among R&D performers, universities, and industrial firms. Through the synergy created by incubation mechanisms, they also ease the growth of science- and technology-based firms. In these zones, firms are encouraged to innovate using high technology and support tools, yielding high value-added products. The International Association of Science Parks defines technoparks as organizations managed by specialized professional teams whose main aim is to raise the community's level of welfare by promoting a culture of innovation and the competitiveness of their businesses and knowledge-based institutions. To achieve these goals, technoparks manage the flow of knowledge and technology among universities, R&D performers, and firms; facilitate the creation and growth of innovation-driven companies through incubation and spin-off mechanisms; and, by providing high-quality facilities, lay the groundwork for other value-adding companies and services. In line with these definitions, TDZs can also be regarded as science and technology clusters, since technoparks are generally described as clusters of enterprises that come together around innovative ideas, produce or use advanced technology, market that technology, and draw on an R&D center or a university.
The variety of these definitions stems from differences in the zones' sizes and fields of activity. As hubs where high-technology producers locate, technoparks serve as effective instruments for expanding employment, developing industry through the accumulation of know-how, supporting firms, together with universities, in expanding training opportunities, and increasing the number of SMEs as well as supporting them. One of their most fundamental aims is accordingly to foster cooperation among university, industry, and government and, by establishing knowledge- and technology-intensive sites, to raise regional, national, and international competitiveness and thereby contribute to national development. Technoparks are areas with new, high-technology infrastructure that reshape a country's employment structure and play a significant role in reducing unemployment, as can be seen in developed, industrialized countries with long technopark experience. Partly as a result of this transformation, the sectoral distribution of employment has also shifted. In the past, the distribution of the workforce between agriculture and industry was taken as a measure of development; today, the employment share of the technology sector is used instead. In Germany, for example, the once-high employment in agriculture and traditional industries has fallen sharply as employment has shifted toward sectors producing high-technology products. Within the university-industry-government triangle, technoparks aim for all actors to benefit: firms lacking the resources to invest in R&D are supported, and knowledge produced at universities is commercialized and transferred to them.
Accordingly, the technopark interface is expected to contribute substantially to the economic structure of the university, industry, region, and country. The knowledge flowing from technoparks into industry plays an effective role in modernizing industrial production and in grounding the production base in knowledge and technology. In other words, technoparks are intended to give industry access to knowledge produced at universities and to give that knowledge a field of application in industry. This study aims to reveal the gap between the service quality offered by technoparks operating in Turkey and the service quality perceived by the actors who use those services, and to determine the satisfaction levels of the customers (R&D performers) using the SERVQUAL scale. The study also investigates whether there is a relationship between how long technoparks have been operating and customers' perceptions of their service quality, and whether service quality influences the decision of firms that move between technoparks. Finally, the technoparks operating in Turkey are ranked by service quality using the VIKOR method. The study employs the service-quality measurement factors of the SERVQUAL scale. Developed by Parasuraman et al. (1988) to determine service quality, the SERVQUAL instrument has since been used widely across service businesses, from sports facilities to hotels. This study is the first, either in Turkey or abroad, to apply the scale to technoparks treated as service businesses; the scale was therefore first adapted to technoparks, its reliability and validity were established, and only then were the analyses carried out.
The study uses the "Tangibles", "Reliability", "Responsiveness", "Assurance", and "Empathy" factors of the SERVQUAL service quality scale. The tangibles factor covers the physical appearance of the equipment used in the buildings, the communication materials, and the staff. The reliability factor captures whether technoparks deliver their services accurately and on time. The responsiveness factor measures a technopark's willingness to help customers, provide prompt service, and complete work on schedule. The assurance factor measures whether the service personnel working in technoparks have the necessary and sufficient knowledge. The empathy factor assesses the respect, courtesy, and sincerity of the employees in direct contact with customers. Measuring the service-quality levels of technoparks is expected to play a pioneering role for future scientific research, and, since no comparable study exists either in Turkey or abroad, the results should also be of considerable importance to technopark management companies.
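The gap analysis described above can be sketched in code. A minimal illustration of SERVQUAL gap scoring (mean perception minus mean expectation per dimension); the dimension names follow the study, but the function and the survey data below are hypothetical:

```python
import numpy as np

# SERVQUAL dimensions used in the study
dimensions = ["Tangibles", "Reliability", "Responsiveness", "Assurance", "Empathy"]

def servqual_gaps(expectations, perceptions):
    """Gap score per dimension: mean perception minus mean expectation.

    expectations, perceptions: arrays of shape (n_respondents, n_dimensions)
    holding Likert-scale item averages. Negative gaps indicate service
    quality falling short of expectations.
    """
    e = np.asarray(expectations, dtype=float)
    p = np.asarray(perceptions, dtype=float)
    return p.mean(axis=0) - e.mean(axis=0)

# Hypothetical data: 3 respondents x 5 dimensions
E = [[6, 6, 5, 6, 5], [7, 6, 6, 6, 6], [6, 7, 5, 6, 5]]
P = [[5, 6, 4, 6, 5], [6, 5, 5, 6, 5], [5, 6, 4, 5, 5]]
gaps = servqual_gaps(E, P)
```

In the study's setting, each row would aggregate a respondent's answers for one technopark, and the VIKOR ranking would then be computed over the per-technopark gap profiles.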

Item: Optimization-based control of cooperative and noncooperative multi-aircraft systems (2020) Başpınar, Barış ; Koyuncu, Emre ; 625456 ; Aeronautics and Astronautics Engineering

In this thesis, we mainly focus on developing methods that ensure autonomous control of cooperative and noncooperative multi-aircraft systems. In particular, we focus on aerial combat, the air traffic control problem, and the control of multiple UAVs. We propose two different optimization-based approaches and their implementations in civil and military applications. In the first method, we use hybrid system theory to represent the input space of the decision process. Then, using a problem-specific evaluation strategy, we formulate an optimization problem in the form of integer/linear programming to generate the optimal strategy. In the second approach, we design a method that generates control inputs as continuous real-valued functions instead of predefined maneuvers. In this case, we draw on differential flatness theory and flatness-based control, constructing optimization problems in the form of mixed-integer linear programs (MILP) and nonconvex optimization problems. In both methods, we also employ game theory when there are competing decision makers. We give the details of the approaches for both civil and military applications. We present the details of the hybrid maneuver-based method for air-to-air combat. We use the performance parameters of the F-16 to model the aircraft for military applications. Using hybrid system theory, we describe the basic and advanced fighter maneuvers; these maneuvers constitute the input space of the aerial combat. We define a set of metrics to quantify air superiority, and the optimal strategy generation procedure is formulated as a linear program. Afterwards, we use a similar maneuver-based optimization approach to model the decision process of the air traffic control operator.
We mainly focus on providing a scalable and fully automated ATC system and on redetermining airspace capacity via the developed ATC system. First, we present an aircraft model for civil aviation applications and describe guidance algorithms for trajectory tracking; this model and these algorithms are used to simulate and predict the motion of the aircraft. Then, the ATCo's interventions are modeled as a set of maneuvers. We propose a mapping process to improve the performance of separation assurance and formulate an integer linear program (ILP) that exploits the mapping process to ensure safety in the airspace. Thereafter, we propose a method to redetermine the airspace capacity: we create a stochastic traffic environment to simulate traffic at different complexities and define the breaking point of an airspace with respect to different metrics. The approach is validated on real air traffic data for en-route airspace, and it is shown that the designed ATC system can manage traffic much denser than today's. In the second approach, we develop an optimization-based method that generates control inputs as continuous real-valued functions instead of predefined maneuvers. First, we focus on the control of multi-aircraft systems. We use signal temporal logic (STL) specifications to encode the missions of the multiple aircraft and draw on differential flatness theory to construct a mixed-integer linear program (MILP) that generates optimal trajectories satisfying the STL specifications and performance constraints. We use air traffic control tasks to illustrate the approach: we present a realistic nonlinear aircraft model as a partially differentially flat system and apply the proposed method to approach control and the arrival sequencing problem. We also simulate a case study with a quadrotor fleet to show that the method carries over to different multi-agent systems.
Afterwards, we use a similar flatness-based optimization approach to solve the aerial combat problem. In this case, we draw on differential flatness, curve parametrization, game theory, and receding horizon control. We present the flat description of the aircraft dynamics for military applications and parametrize the aircraft trajectories in terms of the flat outputs. With the help of game theory, the aerial combat is modeled as an optimization problem over the parametrized trajectories. This allows the problem to be represented in a lower-dimensional space with all given and dynamic constraints, which speeds up the strategy generation process. The optimization problem is solved with a moving time horizon scheme to generate optimal combat strategies. We demonstrate the method on aerial combat between two UAVs and show its success in two different scenarios.
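To make the flatness idea concrete: in a differentially flat system, the states and inputs are algebraic functions of a flat output and its derivatives, so trajectories can be planned directly on a parametrized flat output. A minimal sketch for a double integrator (the thesis uses far richer aircraft models; this toy example is only illustrative):

```python
import numpy as np

# Double integrator x'' = u: flat output y = x, so the state (x, x') and the
# input u are recovered from y and its derivatives. Plan a rest-to-rest
# transfer with a cubic polynomial flat output y(t) = a0 + a1 t + a2 t^2 + a3 t^3.
T, x0, xT = 2.0, 0.0, 1.0
# Boundary conditions: y(0) = x0, y'(0) = 0, y(T) = xT, y'(T) = 0
A = np.array([[1, 0, 0,    0],
              [0, 1, 0,    0],
              [1, T, T**2, T**3],
              [0, 1, 2*T,  3*T**2]], dtype=float)
a = np.linalg.solve(A, [x0, 0.0, xT, 0.0])

def flat_to_state_input(t):
    """Recover position, velocity, and control input u = x'' from the flat output."""
    y   = a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3
    yd  = a[1] + 2*a[2]*t + 3*a[3]*t**2
    ydd = 2*a[2] + 6*a[3]*t
    return y, yd, ydd

pos_end, vel_end, _ = flat_to_state_input(T)
```

In the thesis's setting the same principle is applied to (partially) flat aircraft dynamics, with the polynomial coefficients chosen by a MILP or nonconvex program instead of a boundary-condition solve.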

Item: Investigations on the effects of conical bluff body geometry on nonpremixed methane flames (Graduate Institute, 2021) Ata, Alper ; Özdemir, İlyas Bedii ; 675677 ; Department of Aeronautics and Astronautics Engineering

This thesis is composed of three experimental studies, of which the first two are already published and the third is under peer review. The first study investigates the effects of a stabilizer and the annular coflow air speed on turbulent nonpremixed methane flames stabilized downstream of a conical bluff body. Four bluff body variants were designed by changing the outer diameter of a conically shaped object. The coflow velocity was varied from zero to 7.4 m/s, while the fuel velocity was kept constant at 15 m/s. Radial distributions of temperature and velocity were measured in detail in the recirculation zone at vertical locations of 0.5D, 1D, and 1.5D. Measurements also included the CO2, CO, NOx, and O2 emissions at points downstream of the recirculation region. Flames were visualized under 20 different conditions, revealing various modes of combustion. The results showed that not only the coflow velocity but also the bluff body diameter plays an important role in the structure of the recirculation zone and, hence, in the flame behavior. The second study analyzes the flow, thermal, and emission characteristics of turbulent nonpremixed CH4 flames for three burner heads of different cone heights. The fuel velocity was kept constant at 15 m/s, while the coflow air speed was varied between 0 and 7.4 m/s. Detailed radial profiles of the velocity and temperature were obtained in the bluff body wake at three vertical locations of 0.5D, 1D, and 1.5D. Emissions of CO2, CO, NOx, and O2 were also measured at the tail end of every flame, and the flames were digitally photographed to support the point measurements with visual observations. Fifteen stability points were examined, resulting from the combination of three bluff body variants and five coflow velocities.
The results show that a blue-colored ring flame forms, especially at high coflow velocities. Depending on the mixing in the bluff-body wake, the flames exhibit two combustion regimes, namely fuel-jet-dominated and coflow-dominated flames. In the jet-dominated regime, the flames become longer than those of the coflow-dominated regime; in the latter, emissions were largely reduced because dilution by the excess air also suppresses their production. The final study examines the thermal characteristics of turbulent nonpremixed methane flames stabilized by four burner heads with the same exit diameter but different heights. The fuel flow rate was kept constant with an exit velocity of 15 m/s, while the coflow air speed was increased from 0 to 7.6 m/s. Radial profiles of the temperature and flame visualizations were obtained to investigate the stability limits. The results showed that the air coflow and the cone angle play essential roles in flame stabilization: increasing the cone angle and/or the coflow speed deteriorated the stability of the flame, which eventually tended to blow off. As the cone angle was reduced, the flame attached to the bluff body; however, when the cone angle is very small, it has no effect on stability. The mixing and entrainment processes were described by the statistical moments of the temperature fluctuations. It appears that the rise in temperature coincides with intensified mixing and becomes constant in the entrainment region.
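The statistical moments mentioned above can be computed from a temperature time series as follows; the function and the sample values are a hypothetical sketch, not the thesis data:

```python
import numpy as np

def temperature_moments(T_samples):
    """Mean, variance, skewness, and kurtosis of temperature fluctuations.

    Fluctuations are taken about the sample mean; skewness and kurtosis are
    the standardized third and fourth central moments.
    """
    T = np.asarray(T_samples, dtype=float)
    mean = T.mean()
    fluct = T - mean          # fluctuating component T'
    var = np.mean(fluct**2)
    std = np.sqrt(var)
    skew = np.mean(fluct**3) / std**3
    kurt = np.mean(fluct**4) / var**2
    return mean, var, skew, kurt

# Hypothetical, near-symmetric thermocouple sample (K)
m, v, s, k = temperature_moments([900, 950, 1000, 1050, 1100])
```

High variance marks intense mixing; skewness and kurtosis distinguish intermittent entrainment of cold coflow fluid from well-mixed regions.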

Item: Dynamic and aeroelastic analysis of advanced aircraft wings carrying external stores (Graduate School, 2021) Aksongur Kaçar, Alev ; Kaya, Metin Orhan ; 709160 ; Aeronautics and Astronautics Engineering

This study examines the dynamic and aeroelastic behavior of advanced aircraft wings under external stores and a follower force. The effects of the stores' weight, position, and placement relative to one another, the orientation of the composite plies, and the thrust force are investigated, and the influence of each on the wing's natural frequencies and critical flutter speed is determined.

Item: Experimental investigation of leading edge suction parameter on massively separated flow (Graduate School, 2021-05-10) Aydın, Egemen ; Yıldırım Çetiner, Nuriye Leman Okşan ; 511171150 ; Aerospace Engineering

The study aims to investigate and understand the application of the Leading Edge Suction Parameter (LESP) to massively separated flow. Force data were gathered from the downstream flat plate, and the flow structures were visualized by Digital Particle Image Velocimetry. The experiments were conducted in the free-surface, closed-circuit, large-scale water channel located in the Trisonic Laboratory of Istanbul Technical University's Faculty of Aeronautics and Astronautics. The channel velocity is 0.1 m/s, which results in a Reynolds number of 10,000. During the experiments, the flat plate downstream of the gust generator (also a flat plate) was kept at a constant angle of attack, and the test cases were selected to show that the LESP parameter, derived from only one force component, works for different gust interactions with the flat plate. As already discussed in the literature, the critical LESP value depends only on the airfoil shape and the ambient Reynolds number; for a flat plate at a Reynolds number of 10,000, it has been calculated in the literature as 0.05. We did not perform an experiment to find the critical LESP value, since our experiments used a flat plate at a Reynolds number of 10,000. Combinations of different angles of attack and different gust impingements showed that the LESP parameter works even in a highly unstable gust environment. The flow structures around the leading edge behave as expected from LESP theory (leading-edge vortex separation and unification).
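As a back-of-the-envelope check of the quoted conditions, and of how the critical-LESP criterion is typically applied: the chord length and water viscosity below are assumed values chosen only to reproduce the stated Reynolds number, and the threshold test is a generic sketch, not the thesis procedure.

```python
def reynolds(U, c, nu):
    """Chord-based Reynolds number Re = U * c / nu."""
    return U * c / nu

# Assumed values consistent with the stated Re = 10,000 at U = 0.1 m/s:
# chord c = 0.1 m and water kinematic viscosity nu = 1e-6 m^2/s.
Re = reynolds(U=0.1, c=0.1, nu=1e-6)

LESP_CRIT = 0.05  # critical LESP for a flat plate at Re = 10,000 (from the literature)

def leading_edge_vortex_shedding(lesp_instantaneous):
    """LEV shedding is predicted when |LESP| exceeds the critical value."""
    return abs(lesp_instantaneous) > LESP_CRIT
```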

Item: Development of single-frame methods aided Kalman-type filtering algorithms for attitude estimation of nanosatellites (Graduate School, 2021-08-20) Çilden Güler, Demet ; Hacızade, Cengiz ; Kaymaz, Zerefşan ; 511162104 ; Aeronautics and Astronautics Engineering

There is a growing demand for highly accurate attitude estimation algorithms even for small satellites, e.g. nanosatellites, whose attitude sensors are typically cheap, simple, and light, because controlling the orientation of a satellite or its instruments requires estimating the attitude accurately. Accurate estimation is especially important in nanosatellites, whose sensors are usually low-cost and have higher noise levels than high-end sensors, and whose algorithms must run on systems with very restricted computing power. One aim of the thesis is therefore to develop attitude estimation filters that improve estimation accuracy without increasing the computational burden too much. For this purpose, Kalman filter extensions are examined for attitude estimation with three-axis magnetometer and sun sensor measurements. In the first part of this research, the performance of the developed extensions of state-of-the-art attitude estimation filters is evaluated in terms of both accuracy and computational complexity. Here, single-frame-method-aided attitude estimation algorithms are introduced. As the single-frame method, singular value decomposition (SVD) is used to aid the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) for nanosatellite attitude estimation. The development of the filter's system model and of the measurement models of the sun sensors and magnetometers, which are used to generate vector observations, is presented. The vector observations are used in SVD for satellite attitude determination.
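The SVD solution of Wahba's problem that underlies the single-frame step can be sketched as follows; this is the standard algorithm, not the thesis code, and the vectors and weights in any usage are hypothetical:

```python
import numpy as np

def svd_attitude(body_vecs, ref_vecs, weights):
    """Single-frame attitude from vector observations via SVD (Wahba's problem).

    body_vecs, ref_vecs: lists of unit vectors in the body and reference frames
    (e.g. magnetometer and sun-sensor directions and their model counterparts).
    Returns the rotation matrix A minimizing sum_i w_i * ||b_i - A r_i||^2.
    """
    # Attitude profile matrix B = sum_i w_i * b_i r_i^T
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)
    # Force a proper rotation (det = +1)
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

In the thesis's scheme, this single-frame attitude and its error covariance then serve as the linear measurement input to the EKF/UKF stage.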
In the presented method, the filtering-stage inputs come from the SVD as linear measurements of the attitude and their error-covariance relations. In this step, UD factorization is also introduced for the EKF: it factorizes the attitude-angle error covariance while forming the measurements in order to obtain appropriate inputs for the filtering stage. The necessity of this sub-step, UD factorization of the measurement covariance, is discussed, and the estimation accuracy of the SVD-aided EKF with and without UD factorization is compared. Then, a case including an eclipse period is considered, and possible switching rules are discussed for the eclipse period, when sun sensor measurements are not available. Other attitude estimation algorithms have strengths in coping with nonlinear problems or in working well with heavy-tailed noise; therefore, different types of filters are also tested to see which kind provides the largest improvement in estimation accuracy. Kalman-type filter extensions correspond to different ways of approximating the models: one filter takes non-Gaussianity into account and updates the measurement noise covariance, whereas another minimizes the nonlinearity. Various other algorithms can be used to adapt the Kalman filter by scaling or updating its covariance. The filtering extensions are developed so that each is designed to mitigate a different type of error source for the baseline Kalman filter. The distribution of the magnetometer noise is also investigated using sensor flight data to obtain a better model, and the filters are tested with the best-fitting measurement noise distribution. The filters' responses are evaluated under different operation modes such as nominal mode, recovery from an incorrect initial state, and short- and long-term sensor faults.
Another aspect of the thesis is to investigate two major environmental disturbances on a spacecraft close to a planet: the external magnetic field and the planet's albedo. Since magnetometers and sun sensors are widely used attitude sensors, external magnetic field and albedo models play an important role in the accuracy of attitude estimation. The magnetometers on a spacecraft measure the internal geomagnetic field generated by the planet's dynamo and crust as well as external sources such as the solar wind and the interplanetary magnetic field. However, models that include only the internal field are frequently used, and they can fall short when geomagnetic activity occurs, causing an error in the magnetic field model relative to the sensor measurements. The external field variations caused by the solar wind, magnetic storms, and magnetospheric substorms are generally treated as a bias on the measurements and removed by estimating them in augmented states; after this elimination, the measurement diverges from the real case. An alternative approach is to include the external field in the model rather than treating it as an error source, so that the model represents the magnetic field closer to reality. If a magnetic field model used for spacecraft attitude control does not consider the external fields, it can misinterpret variations caused by a physical phenomenon (e.g. a magnetospheric substorm) as additional sensor noise. Different geomagnetic field models are compared to study the errors resulting from the representation of the magnetic fields that affect the satellite attitude determination system.
For this purpose, we used magnetometer data from a low-Earth-orbiting spacecraft and the geomagnetic models IGRF and T89 to study the differences between the magnetic field components, the field strength, and the angle between the predicted and observed magnetic field vectors. The comparisons are made during geomagnetically active and quiet days to see the effects of geomagnetic storms and substorms on the predicted and observed magnetic fields and angles. The angles, in turn, are used to estimate the spacecraft attitude; hence, the differences between model and observations, as well as between the two models, become important for determining and reducing the model-related errors under different space environment conditions. It is shown that the models differ from the observations even during geomagnetically quiet times, and that the associated errors increase further during geomagnetically active times. The T89 model gives predictions closer to the observations, especially during active times, with smaller errors than the IGRF model; the error in the angle under both conditions is less than 1 degree. The effects of magnetic disturbances resulting from geospace storms on the satellite attitudes estimated by the EKF are also examined. Increasing levels of geomagnetic activity affect the geomagnetic field vectors predicted by the IGRF and T89 models. Various sensor combinations including magnetometer, gyroscope, and sun sensor are evaluated for magnetically quiet and active times; errors are calculated for the estimated attitude angles and the differences are discussed. This portion of the study emphasizes the importance of environmental factors for satellite attitude determination systems. Since sun sensors are frequently used both in planet-orbiting satellites and in interplanetary missions in the solar system, a spacecraft close to the sun and a planet is also considered.
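The model-versus-observation angle used in these comparisons can be computed generically as follows (a sketch, not the thesis code; the field vectors are placeholders):

```python
import numpy as np

def field_angle_deg(b_model, b_measured):
    """Angle in degrees between predicted and observed magnetic field vectors."""
    b1 = np.asarray(b_model, dtype=float)
    b2 = np.asarray(b_measured, dtype=float)
    c = np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2))
    # Clip guards against arccos domain errors from floating-point round-off
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```

Evaluating this angle for IGRF-predicted versus measured vectors over quiet and active days is exactly the kind of comparison the paragraph describes.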
The spacecraft receives the direct solar flux, the reflected radiation, namely the albedo, and the emitted radiation of the planet. The albedo is the fraction of incident sunlight reflected by the planet; a spacecraft is exposed to it whenever it sees the sunlit part of the planet. Albedo values vary with seasonal, geographical, and diurnal changes as well as with cloud coverage. A sun sensor measures not only the light from the sun but also the planet's albedo, so albedo interference can cause anomalous sun sensor readings. This can be eliminated by filtering the sun sensors to be insensitive to albedo; however, most nanosatellites use coarse sun sensors, which are sensitive to albedo. Moreover, some critical components and spacecraft systems, e.g. optical sensors and the thermal and power subsystems, must take the reflected light into account, which makes albedo estimation a significant factor in their analysis as well. Therefore, the purpose of this research is to estimate the planet's albedo using a simple model with fewer parameter dependencies than existing albedo models, and to estimate the attitude using the corrected sun sensor measurements. A three-axis attitude estimation scheme is presented using a set of coarse sun sensors (CSSs) subject to Earth-albedo interference; CSSs are inexpensive, small, and low in power consumption. To model the interference, a two-stage albedo estimation algorithm based on an autoregressive (AR) model is proposed. The algorithm requires no data such as albedo coefficients, spacecraft position, sky condition, or ground coverage, other than the albedo measurements themselves. The results are compared with different albedo models based on the reference conditions, obtained using either a data-driven or an estimation-based approach. The proposed albedo estimate is fed to the CSS measurements for correction.
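A least-squares AR(p) fit of the generic kind such an autoregressive model builds on can be sketched as follows; this is an illustration of AR coefficient estimation only, not the thesis's two-stage algorithm:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p) coefficients: x[t] ~ sum_k a[k] * x[t-k-1].

    x: 1-D time series (e.g. albedo measurements), p: model order.
    Returns the coefficient vector a of length p.
    """
    x = np.asarray(x, dtype=float)
    # Design matrix: column k holds the lag-(k+1) samples for t = p .. N-1
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a
```

Given such a fit, the one-step-ahead prediction of the albedo signal can be subtracted from the CSS readings as a correction, which is the role the AR stage plays in the scheme described above.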
The corrected CSS measurements are processed with various estimation techniques and different sensor configurations, and the relative performance of the attitude estimation schemes under different albedo models is examined. In summary, the effects of two main space environment disturbances on satellite attitude estimation are studied through a comprehensive analysis of different spacecraft trajectories under various environmental conditions. The performance analyses are expected to be of interest to the aerospace community, as they are reproducible for applications to spacecraft systems or aerial vehicles.

Item: Implementation of propulsion system integration losses to a supersonic military aircraft conceptual design (2021-10-07) Karaselvi, Emre ; Nikbay, Melike ; 511171151 ; Aeronautics and Astronautics Engineering

Military aircraft technologies have played an essential role in ensuring combat superiority from the past to the present, which is why the air forces of many countries constantly require the development and procurement of advanced aircraft technologies. A fifth-generation fighter is expected to feature technologies such as stealth, low probability of radar interception, agility with supercruise performance, and advanced avionics and computer systems for command, control, and communications. As the propulsion system is a major component of an aircraft platform, we focus on propulsion system and airframe integration concepts, especially on addressing integration losses during the early conceptual design phase; the approach is intended to be suitable for multidisciplinary design optimization practices. Aircraft with jet engines were first employed during the Second World War, and the technology marked a turning point in aviation history. Jet aircraft, which replaced propeller aircraft, offered better maneuverability and flight performance; however, substituting a jet engine for a propeller engine required a new design approach. At first, engineers expected that removing the propellers would simplify the integration of the propulsion system, but with jet engines in fighter aircraft, new problems arose from the full integration of the propulsion system into the fuselage. These problems fall into two areas: air inlet design and intake integration, and nozzle/afterbody design together with jet interaction with the tail. The primary function of the air intake is to supply the necessary air to the engine with the least amount of loss.
However, the vast flight envelope of fighter jets complicates air intake design. Spillage drag, boundary layer formation, bypass air drag, and intake internal performance are the primary considerations for intake system integration. The design and integration of the nozzle is a challenging engineering problem, owing to the complex structure of the afterbody and the mixing of the jet and the free stream over the control surfaces. The primary considerations for the nozzle system are afterbody integration, boattail drag, jet flow interaction, engine spacing for twin-engine configurations, and nozzle base drag. Each new generation of aircraft has become a more challenging design problem as military performance requirements and operational capabilities increase: higher Mach numbers without afterburner, greater acceleration capability, high maneuverability, and low observability. Trade-off analyses of numerous intake and nozzle designs must be carried out to meet all these needs, and the losses caused by different intakes and nozzles must be calculated at the conceptual design stage. Since changes made after design maturation delay the schedule, and changes needed in a matured design incur high costs, it is crucial to represent intake and nozzle losses accurately while constructing the conceptual design of a fighter aircraft. This design exploration process needs to be automated with numerical tools so that all possible alternative design solutions can be investigated simultaneously and efficiently. Therefore, spillage drag, bypass drag, and boundary layer losses due to intake design, together with boattail drag, nozzle base drag, and engine spacing losses due to nozzle integration, are examined within the scope of this thesis. The study is divided into four main parts. The first section, "Introduction", summarizes previous studies on this topic and presents the classification of aircraft engines.
Then the problems encountered while integrating the selected aircraft engine into the fighter aircraft are described under the "Problem Statement". The difficulties encountered in engine integration are divided into two zones: the problem areas are examined as the inlet system and the afterbody system. The second main topic, "Background on Propulsion," provides basic information about the propulsion system. The Brayton cycle is the thermodynamic cycle used in aviation engines, and the working principle of aircraft engines is described under the Brayton Cycle subtitle. In engine design, station numbers are used to standardize the naming of engine zones and provide a common understanding; therefore, the engine station numbers and regions are shown before the methodology is developed. The critical parameters used in engine performance comparisons, namely thrust, specific thrust, and specific fuel consumption, are described mathematically. The Aerodynamics subtitle outlines the essential mathematical formulas needed to understand the additional drag forces caused by propulsion system integration. Throughout the thesis, ideal gas and isentropic flow assumptions are made for the calculations. Definitions of the drag terms encountered in aircraft–engine integration are given, because accurate definitions prevent double counting in the calculations. In the validation subtitle, results calculated with the developed algorithms and assumptions are compared with previous studies by the Boeing company. For this comparison, a model representing the J79 engine is created with NPSS. The engine's on-aircraft performance is calculated, and the given definitions and algorithms add the drag forces to the model. The results converge to Boeing's data within a 5% error margin. After validation, the developed algorithms are tested on the fifth-generation fighter aircraft F-22 Raptor to see how the validated approach performs in the design of next-generation fighter aircraft.
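The engine performance parameters mentioned above are commonly defined as follows (textbook definitions for a single-stream jet engine; the symbols are illustrative and not taken from the thesis):

```latex
% Uninstalled net thrust (mass flow times velocity change plus pressure thrust)
F = \dot{m}\,(V_e - V_0) + (p_e - p_0)\,A_e
% Specific thrust: thrust per unit air mass flow
F_s = \frac{F}{\dot{m}}
% Thrust-specific fuel consumption: fuel flow per unit thrust
\mathrm{TSFC} = \frac{\dot{m}_f}{F}
```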
Engine design parameters are selected, and the model is developed according to the intake, nozzle, and afterbody design of the F-22 aircraft. An equivalent of the F119-PW-100 turbofan engine is modeled with NPSS using the engine's design parameters. Since the model produces uninstalled engine performance data, the additional drag forces calculated with the help of the algorithms are included in the engine performance results. The resulting net propulsive force is then compared with the F-22 Raptor drag force data for 40,000 ft. The results show that the F-22 can fly at an altitude of 40,000 ft at Mach 1.6, meeting the aircraft requirements. In the thesis, a 2D intake assumption is used to model the losses due to inlet geometry. The effects of the intake capture area, throat area, wedge angle, and duct losses on engine performance are included. However, the modeling does not include the losses due to 3D effects of a bump-type intake structure similar to that of the F-35 aircraft. Losses related to the 3D intake structure can be modeled with CFD and test results, and the thesis studies can be extended accordingly. A circular nozzle, with its outlet area, throat area, and maximum area, is used for the modeling. The movement of the nozzle blades is included in the model through the boattail angle and base area. The work of McDonald and Hughes is used as a reference to represent the 2D nozzle. The method described in this thesis is one way of accounting for installation effects in supersonic aircraft. Additionally, the concept works for aircraft with normal-shock or oblique-shock inlets flying at speeds up to Mach 2.5. The implementation of the equations in NPSS enables aircraft manufacturers to calculate the influence of installation effects on engine performance. The study presents a methodology for calculating the additional drag caused by engine–aircraft integration in the conceptual design phase of next-generation fighter aircraft.
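The bookkeeping described above, where installation drag terms are subtracted from uninstalled engine data to obtain the net propulsive force, can be sketched as follows (all numbers are purely illustrative and not thesis data):

```python
# Illustrative sketch (hypothetical values, not thesis results): installed net
# propulsive force obtained by subtracting the integration drag terms from
# the uninstalled net thrust.

def installed_net_force(uninstalled_thrust, drags):
    """Subtract every installation drag term from the uninstalled thrust."""
    return uninstalled_thrust - sum(drags.values())

drags = {                      # all values in newtons, purely illustrative
    "spillage": 1200.0,
    "bypass": 300.0,
    "boundary_layer": 450.0,
    "boattail": 800.0,
    "nozzle_base": 250.0,
    "engine_spacing": 150.0,
}

net = installed_net_force(60000.0, drags)
print(net)  # uninstalled thrust minus 3150 N of installation losses
```

Keeping each loss as a separate, named term is what prevents the double counting mentioned above: every drag source is charged exactly once, either to the airframe or to the propulsion system.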
In this way, the losses caused by the propulsion system can be calculated accurately with the developed approach in projects where the aircraft and engine designs have not yet matured. If the presented drag definitions are not included during conceptual design, significant changes may become necessary at later stages as the aircraft design evolves. Making changes to an evolved design can incur enormous costs or extend the design schedule.

Öge: Experimental and numerical investigation of flapping airfoils interacting in various arrangements (Graduate School, 2021-12-10) Yılmaz, Saliha Banu ; Ünal, Mehmet Fevzi ; Şahin, Mehmet ; 521082102 ; Aeronautical and Astronautical Engineering — In the last decades, flapping wing aerodynamics has gained a great deal of interest. Inspired by insect flight, the utilization of multiple wings has become very popular in Micro Air Vehicle (MAV) and Micromechanical Flying Insect (MFI) design. Therefore, studies aiming to disclose the characteristics of the flow around interacting flapping airfoils have received particular attention. However, the majority of these studies were done using real, complex, three-dimensional parameters and geometries without making any assessment of the basic two-dimensional vortex dynamics. The aim of this study is to identify the baseline flow field characteristics in order to better understand flapping wing aerodynamics in nature and thus to provide a viewpoint for MAV and MFI design. The thesis contains numerical and experimental investigations of tandem (in-line) and biplane (side-by-side) arrangements of NACA0012 airfoils undergoing harmonic pure plunging motion, analyzed by means of vortex dynamics, thrust, and propulsive efficiency. Additionally, the "deflected wake phenomenon", an interesting and challenging benchmark problem for the validation of numerical algorithms for moving boundary problems, is investigated for a single airfoil due to its flow characteristics, which accommodate strong transient effects at low Reynolds numbers. Throughout the study, the effects of reduced frequency, non-dimensional plunge amplitude, Reynolds number, and phase angle between the airfoils are considered. The vorticity patterns are obtained both numerically and experimentally, whereas force statistics and propulsive efficiencies are evaluated only in the numerical simulations.
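The pure plunging kinematics and the non-dimensional parameters mentioned above are commonly written as follows (one common convention; the thesis may use a different normalization):

```latex
% Harmonic pure plunge (c: chord, f: plunge frequency, h_a: plunge amplitude)
y(t) = h_a \cos(2\pi f t), \qquad h = \frac{h_a}{c} \;\;\text{(non-dimensional plunge amplitude)}
% Reduced frequency and Reynolds number based on chord and freestream speed
k = \frac{2\pi f c}{U_\infty}, \qquad Re = \frac{U_\infty c}{\nu}
```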
In the experimental phase of the study, Particle Image Velocimetry (PIV), a non-intrusive optical measurement technique, is utilized. Experiments are conducted in the large-scale water channel of the Trisonic Laboratory at Istanbul Technical University. The motion of the wings is provided by two servo motors and their gear systems. To obtain a two-dimensional flow around the wings, they are placed between two large endplates, one of which has a slot to permit the connection between the wings and the servo motors. The flow is seeded with silver-coated hollow glass spheres of 10 µm diameter and illuminated with a dual-cavity Nd:YAG laser. To visualize a larger flow area, two 16-bit CCD cameras are used together, either in-line or side by side depending on the positions of the wings. Dantec Dynamics' DynamicStudio software is used for synchronization, image acquisition, image stitching, and cross-correlation. Synchronization between the servo motors and the data acquisition system is done via LabVIEW software. In post-processing, an in-house MATLAB code is used for masking the airfoils. The CleanVec and NFILVB software are utilized for vector range validation and filtering. To obtain mean velocity fields, the NWENSAV software is used. From the experimental velocity vector fields, two-dimensional vorticity fields are obtained in order to understand the flow field characteristics. The experimental results are also used as a benchmark for the numerical studies. In the numerical phase of the study, an arbitrary Lagrangian–Eulerian (ALE) formulation based on an unstructured side-centered finite volume method is utilized to solve the incompressible Navier–Stokes equations. The velocities are defined at the midpoint of each edge, while the pressure is defined at the element centroid.
The present arrangement of the primitive variables leads to a stable numerical scheme, and it does not require any ad hoc modifications to enhance pressure–velocity coupling. The most appealing feature of this primitive variable arrangement is the availability of very efficient multigrid solvers. The mesh motion algorithm is based on an algebraic method using the minimum distance function from the airfoil surface, chosen for its numerical efficiency, although in some cases where large mesh deformation occurs a Radial Basis Function (RBF) algorithm is used. To satisfy the Discrete Geometric Conservation Law (DGCL), the convective term in the momentum equation is modified to take the grid velocity into account. The numerical grid is created via the Gambit and Cubit software with quadrilateral elements. Grid and time independence are verified by means of force statistics and vorticity fields. To allow direct comparison, Finite-Time Lyapunov Exponent (FTLE) fields are calculated for some cases. FTLE fields characterize the fluid flow by measuring the amount of stretching between neighbouring particles, and the Lagrangian Coherent Structures (LCS) are computed as the locally maximum regions of the FTLE field. In addition, a particle tracking algorithm using a second-order Runge–Kutta method is developed, based on the integration of massless particle trajectories on moving unstructured quadrilateral elements. Validation is performed by comparing the numerical results with the experimental results and with the corresponding cases in the literature. Accordingly, the results were substantially consistent internally and compatible with the literature. Highly accurate numerical results, confirmed by spatial and temporal convergence studies, are obtained in order to investigate the flow pattern around a NACA0012 airfoil undergoing pure harmonic plunging motion corresponding to the deflected wake phenomenon.
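The second-order Runge–Kutta particle tracking mentioned above can be sketched as follows (a minimal midpoint-rule tracer in a steady analytic velocity field; the field and names are illustrative, not the thesis solver, which advects particles on moving unstructured meshes):

```python
import numpy as np

def velocity(p):
    """Illustrative steady velocity field: rigid-body rotation about the origin."""
    x, y = p
    return np.array([-y, x])

def rk2_step(p, dt):
    """One second-order Runge-Kutta (midpoint) step for a massless tracer."""
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * dt * k1)
    return p + dt * k2

# Advect a particle through one full revolution; with a second-order scheme
# it should return very close to its starting position.
p = np.array([1.0, 0.0])
dt = 2 * np.pi / 2000
for _ in range(2000):
    p = rk2_step(p, dt)
print(p)  # close to the initial position [1, 0]
```

The same two-stage update carries over to the unsteady, mesh-based case by evaluating the interpolated flow velocity at the particle position at both stages.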
The present study successfully reproduces details of the flow field not reported in the literature, such as fine vortical structures in the direction opposite to the deflected wake and the vorticity structures close to the airfoil surface, which are dominated by complex interactions of the leading edge (LE) with the plunging airfoil. Moreover, the transient effects are highly persistent: the calculations require durations two orders of magnitude larger than the heave period to reach the time-periodic state, which is prohibitively expensive for numerical simulations. This persistent transient effect has not been reported in the literature before. The three-dimensional simulation also confirms the highly persistent transient effects. In addition, the three-dimensional simulation indicates that the flow field is highly three-dimensional close to the airfoil leading edge; this three-dimensional structure of the flow field has not been noted in the literature for the parameters used herein. In the case of the tandem arrangement of airfoils, the experimental results agree well with the numerical solutions, and the major flow structures are substantially compatible in both at a Reynolds number of 2,000. For the considered parameters, during the upstroke and downstroke, co-rotating leading and trailing edge vortices merge at the trailing edge of the forewing and interact with the downstream airfoil in either a constructive or destructive way in terms of thrust production. The thrust production of the forewing is maximum when the airfoil moves from the topmost position to the mid position for the considered reduced frequencies in all configurations. It is hard to specify the thrust–drag generation characteristics of the hindwing, since they depend not only on the plunge motion parameters but also on the interactions with vortices shed from the forewing. For the considered phase angles of 0°, 90°, 180° and 270°, in addition to the stationary hindwing case, the force statistics are strongly altered by the airfoil–wake interactions.
In the case of the biplane arrangement of airfoils at a phase angle of 180°, the experimental and numerical vorticity results are also quite comparable. Regarding the parameters investigated, as the reduced frequency increases, the vorticity structures get larger at constant plunge amplitude; however, they do not change much beyond a certain reduced frequency value. As the plunge amplitude increases, the magnitude of the vortices increases independently of the reduced frequency. Increasing the plunge amplitude results in an increased amount of fluid moving in the direction of motion within a constant period of time, accompanied by strong suction between the airfoils as they move apart from each other. As a consequence of this suction force, energetic vortex pairs are formed, which help thrust augmentation. For thrust production, among the phase angles considered, i.e. 0°, 90°, 180° and 270°, in addition to the stationary lower wing case, the most efficient is φ = 180°. The effect of three-dimensionality is not observed at this phase angle for the considered parameters. Additionally, no remarkable difference is observed in the general flow structure when the Reynolds number is increased from 2,000 to 10,000.

Öge: Numerical simulation of aircraft icing with an adaptive thermodynamic model considering ice accretion (Institute of Science and Technology, 2022) Siyahi, Hadi ; Baytaş, A. Cihat ; 754795 ; Department of Aeronautics and Astronautics Engineering — The icing phenomenon is one of the most undesirable events for aircraft, and it can be viewed from different perspectives. Flight safety is undoubtedly the biggest concern of designers nowadays. Icing causes the malfunctioning or even failure of pressure and speed measurement devices, and consequently makes the flight difficult to control. Icing on the rudder, ailerons, and elevators can even make control of the aircraft impossible. During landing, icing on the cockpit window, along with possible failures in the landing gear, may cause major catastrophes. Besides, detached ice particles can cause serious mechanical damage to the aircraft when they collide with the body or, sometimes, with internal parts such as compressor blades. Another point of view is the degradation of aircraft performance, and consequently the increase in fuel consumption, caused by icing. Icing affects the aerodynamics of an airplane in an undesirable way and puts the aircraft in a situation far from what it was designed for. Therefore, it is necessary to study aircraft icing to provide safer and more efficient flight. Since icing in aircraft is of great importance, a precise analysis of this phenomenon should be performed. Tests in the wind tunnel and during flight are very expensive. In contrast, numerical-computational simulations can be cost-effective for studying aircraft icing. In the present study, a numerical-computational simulation of aircraft icing has been performed by writing a computer code in FORTRAN. The computational simulation of aircraft icing is a modular procedure consisting of grid generation, air solver, droplet solver, and ice accretion modules.
First, the computational domain is generated via elliptic grid generation. Differential methods based on the solution of elliptic equations are commonly used for generating a mesh for a geometry with arbitrary boundaries; elliptic equations are also utilized for unstructured grids. The most popular elliptic equation is the Poisson equation, which makes it possible to satisfy smoothness, fine spacing, and orthogonality on the body surface by means of controlling terms. Then, the velocity and pressure distributions of the airflow around the wing are found, and the convective heat transfer coefficient on the body is calculated. The inviscid flow model has been selected in our simulation because it needs less effort and time in comparison with Navier–Stokes codes. A two-dimensional, steady-state, inviscid, incompressible, irrotational flow (potential flow) model has been applied for solving the airflow.
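The potential flow idea above reduces the air solver to a single elliptic equation, the Laplace equation for the velocity potential. A minimal sketch, assuming a simple rectangular domain and a Jacobi iteration (the domain, boundary values, and names are illustrative; the thesis code is written in FORTRAN and uses a body-fitted elliptic grid):

```python
import numpy as np

# Solve the Laplace equation for the velocity potential phi on a uniform
# grid, with Dirichlet inflow/outflow boundaries representing a uniform
# stream U_inf in the x direction and slip (Neumann) walls top and bottom.
U_inf, nx, ny, dx = 1.0, 41, 21, 0.05
x = np.arange(nx) * dx
phi = np.tile(U_inf * x, (ny, 1))        # initial guess: uniform flow

for _ in range(500):                     # Jacobi sweeps on interior points
    phi_new = phi.copy()
    phi_new[1:-1, 1:-1] = 0.25 * (phi[1:-1, 2:] + phi[1:-1, :-2]
                                  + phi[2:, 1:-1] + phi[:-2, 1:-1])
    phi_new[:, 0], phi_new[:, -1] = U_inf * x[0], U_inf * x[-1]   # inflow/outflow
    phi_new[0, :], phi_new[-1, :] = phi_new[1, :], phi_new[-2, :]  # slip walls
    phi = phi_new

# Velocity from the potential: u = d(phi)/dx via central differences
u = (phi[:, 2:] - phi[:, :-2]) / (2 * dx)
print(u.mean())  # close to U_inf for this trivial uniform-flow case
```

With a wing present, the same iteration runs on the curvilinear elliptic grid, and the surface pressure then follows from Bernoulli's equation.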

Öge: A high-order finite-volume solver for supersonic flows (Lisansüstü Eğitim Enstitüsü, 2022) Spinelli, Gregoria Gerardo ; Çelik, Bayram ; 721738 ; Uçak ve Uzay Mühendisliği — Nowadays, Computational Fluid Dynamics (CFD) is a powerful engineering tool used in various industries such as automotive, aerospace, and nuclear power. More than ever, the growing computational power of modern computer systems allows for realistic modeling of the physics. Most open-source codes, however, offer a second-order approximation of the physical model in both space and time. The goal of this thesis is to extend this order of approximation to what is defined as high-order discretization in both space and time by developing a two-dimensional finite-volume solver. This is especially challenging when modeling supersonic flows, which is addressed in this study. To tackle this task, we employed the numerical methods described in the following. Curvilinear meshes are utilized, since an accurate representation of the domain and its boundaries, i.e. the object under investigation, is required. High-order approximation in space is guaranteed by a Central Essentially Non-Oscillatory (CENO) scheme, which combines a piecewise linear reconstruction and a k-exact reconstruction in regions with and without discontinuities, respectively. The use of multi-stage methods such as Runge–Kutta methods allows for a high-order approximation in time. The algorithm to evaluate the convective fluxes is based on the family of Advection Upstream Splitting Method (AUSM) schemes, which use an upwind reconstruction; a central stencil is used to evaluate the viscous fluxes instead. When using high-order schemes, discontinuities induce numerical problems, such as oscillations in the solution. To avoid these oscillations, the CENO scheme reverts to a piecewise linear reconstruction in regions with discontinuities. However, this introduces a loss of accuracy.
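The Runge–Kutta time integration mentioned above can be sketched with one common high-order choice, the third-order strong-stability-preserving (SSP) scheme in Shu–Osher form (the choice of this particular variant is an assumption for illustration; the abstract does not specify which Runge–Kutta scheme the solver uses):

```python
def ssp_rk3_step(u, dt, residual):
    """One third-order SSP Runge-Kutta step (Shu-Osher form) for du/dt = residual(u).
    Each stage is a convex combination of forward-Euler updates, which is what
    makes the scheme strong-stability-preserving near discontinuities."""
    u1 = u + dt * residual(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * residual(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * residual(u2))

# Sanity check on the linear test problem du/dt = -u, integrated to t = 1.
u, dt = 1.0, 0.1
for _ in range(10):
    u = ssp_rk3_step(u, dt, lambda v: -v)
print(u)  # close to exp(-1) ≈ 0.3679
```

In a finite-volume solver, `residual(u)` would be the spatial operator assembled from the reconstructed interface fluxes.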
The CENO algorithm is capable of confining this loss of accuracy to the cells closest to the discontinuity. In order to reduce this accuracy loss further, Adaptive Mesh Refinement (AMR) is used. This algorithm refines the mesh near the discontinuity, confining the loss of accuracy to a smaller portion of the domain. In this study, a combination of the CENO scheme and the AUSM schemes is used to model several problems in different compressibility regimes, with a focus on supersonic flows. The scope of this thesis is to analyze the capabilities and limitations of the proposed combination. In comparison to traditional implementations found in the literature, our implementation does not impose a limit on the refinement ratio of neighboring cells while utilizing AMR. Due to the high computational expense of a high-order scheme in conjunction with AMR, our solver benefits from shared-memory parallelization. Another advantage over traditional implementations is that our solver requires one fewer layer of ghost cells for the transfer of information between adjacent blocks. The validation of the solver is performed in several steps. We assess the order of accuracy of the CENO scheme by interpolating a smooth function, in this case the spherical cosine function. Then we validate the algorithm that computes the inviscid fluxes by modeling a Sod shock tube. Next, the boundary conditions (BCs) for the inviscid solver and its order of accuracy are validated by modeling a vortex convected in a supersonic uniform flow. The curvilinear mesh is validated by modeling the flow around a NACA0012 airfoil. The computation of the viscous fluxes is validated by modeling a viscous boundary layer developing on a flat plate. The BCs for viscous flows and the curvilinear implementation are validated by modeling the flow around a cylinder and a NACA0012 airfoil.
The AUSM schemes are tested for shock robustness by modeling an inviscid hypersonic cylinder at a Mach number of 20 and a viscous hypersonic cylinder at a Mach number of 8.03. Then, we validate our AMR implementation by modeling a two-dimensional Riemann problem. All validation results agree well with the numerical or experimental results available in the literature. The performance of the code is assessed in terms of the computational time required by the different orders of approximation and the parallel efficiency. For the former, a supersonic vortex convection serves as an example, while the latter uses a two-dimensional Riemann problem. We obtained a linear speedup up to 12 cores; the highest speedup obtained is 20 with 32 cores. Furthermore, the solver is used to model three different supersonic applications: the interaction between a vortex and a normal shock, the double Mach reflection, and the diffraction of a shock over a wedge. The first application involves a strong interaction between a vortex and a steady shock wave for two different vortex strengths. In both cases our results perfectly match those obtained by a Weighted Essentially Non-Oscillatory (WENO) scheme documented in the literature, with both schemes approximating the solution with the same order of accuracy in time and space. The second application, the double Mach reflection, is a challenging problem for high-order solvers because the shock and its reflections interact strongly. For this application, all AUSM schemes under investigation fail to obtain a stable result; the main form of instability encountered is the carbuncle phenomenon. Our implementation overcomes this problem by combining the AUSM+M scheme with the speed-of-sound formulation of the AUSM+up scheme. This combination is capable of modeling the problem without instabilities, and our results are in agreement with those obtained with a WENO scheme.
Both the reference solutions and our results use the same order of accuracy in time and space. Finally, the third example is the diffraction of a shock past a delta wedge. In this configuration the shock is diffracted and forms three main structures: two triple points, a vortex at the trailing edge of the wedge, and a reflected shock traveling upwards. Our results agree well with both the numerical and the experimental results available in the literature. Here, the formation of a vortexlet is observed along the vortex slipline. This vorticity generation under inviscid flow conditions is studied, and we conclude that the stretching of vorticity due to compressibility is the reason. The same formation is observed when the angle of attack of the wedge is increased in the range of 0–30°. In general, the AUSM+up2 scheme performed best in terms of accuracy for all problems tested here. However, for configurations in which the carbuncle phenomenon may appear, the combination of the AUSM+M scheme with the speed-of-sound formula of the AUSM+up scheme is preferable for stability reasons. During our computations, we observe a small undershoot right behind shocks on curved boundaries. This is attributable to the curvilinear approximation of the boundaries, which is only second-order accurate. Our experience shows that the smoothness indicator formula, in its original version, fails to label uniform flow regions as smooth. We solve this issue by introducing a threshold for the numerator of the formula: when the numerator is lower than the threshold, the cell is labeled as smooth. A threshold value higher than 10^-7 might force the solver to apply high-order reconstruction across shocks, and therefore the piecewise linear reconstruction which prevents oscillations would not be applied. We observe that the CENO scheme might cause unphysical states in both the inviscid and the viscous regime.
By reconstructing the conservative variables instead of the primitive ones, we are able to prevent unphysical states for inviscid flows. For viscous flows, temporarily reverting to a first-order reconstruction in the cells where the temperature is computed as negative prevents unphysical states. This technique is solely required during the first iterations of the solver, when the flow is started impulsively. In this study the CENO, AUSM, and AMR methods are combined and applied successfully to supersonic problems. When modeling supersonic flow with high-order accuracy in space, one should prefer the combination of the AUSM schemes and the CENO scheme. While the CENO scheme is simpler than the WENO scheme used for comparison, we show that it yields results of comparable accuracy. Although it was beyond the scope of this study, the AUSM can be extended to real gas modeling, which constitutes another advantage of this approach.

Öge: A modified ANFIS system for aerial vehicles control (Lisansüstü Eğitim Enstitüsü, 2022) Öztürk, Muhammet ; Özkol, İbrahim ; 713564 ; Uçak ve Uzay Mühendisliği — This thesis presents fuzzy logic systems (FLS) and their control applications in aerial vehicles. In this context, first type-1 and then type-2 fuzzy logic systems are examined. Adaptive Neuro-Fuzzy Inference System (ANFIS) training models are examined, and new type-1 and type-2 models are developed and tested. The new approaches are applied to control problems such as quadrotor control. A fuzzy logic system is a human-like structure that does not define any case precisely as 1 or 0; instead, it defines cases with membership functions. In the literature, there are many fuzzy logic applications, such as data processing, estimation, control, and modeling. Different Fuzzy Inference Systems (FIS) have been proposed, such as Sugeno, Mamdani, Tsukamoto, and Şen. The Sugeno and Mamdani FIS are the most widely used fuzzy logic systems. Mamdani antecedent and consequent parameters are composed of membership functions; because of that, the Mamdani FIS needs a defuzzification step to produce a crisp output. Sugeno antecedent parameters are membership functions, but the consequent parameters are linear or constant, so the Sugeno FIS does not need a defuzzification step. The Sugeno FIS has a lower computational load and is simpler than the Mamdani FIS, and is therefore more widely used. Training of the Mamdani parameters is also more complicated and needs more calculation than for the Sugeno FIS. The Mamdani ANFIS approaches in the literature are examined, and a new Mamdani ANFIS model (MANFIS) is proposed. The training performance of the proposed MANFIS model is tested on a nonlinear function, and its control performance is tested on a DC motor dynamic model. In addition, the Şen FIS, which was used for the estimation of sunshine duration in 1998, is examined.
The Şen FIS antecedent and consequent parameters are membership functions, as in the Mamdani FIS, so it needs a defuzzification step. However, because of the structure of the Şen defuzzification, the Şen FIS can be calculated with a lower computational load, and therefore a Şen ANFIS training model has been created. These three approaches are trained on a nonlinear function and used for online control. In this study, the neuro-fuzzy controller is used as an online controller. Neuro-fuzzy controllers consist of the simultaneous operation of two functions, namely fuzzy logic and ANFIS. The fuzzy logic function is the one that generates the control signal according to the controller inputs. The other function is the ANFIS function, which trains the parameters of the fuzzy logic function. Neuro-fuzzy controllers are intelligent, model-independent controllers that constantly adapt their parameters. For this reason, these controllers' parameter values change continuously according to changes in the system. There are studies on different neuro-fuzzy control systems in the literature. Each approach is tested on a DC motor model, a single-input single-output system, and the neuro-fuzzy controllers' advantages and performances are examined. In this way, the approaches in the literature and the approaches added within the scope of the thesis are compared with each other. Selected neuro-fuzzy controllers are then used in quadrotor control. Quadrotors have a two-stage controller structure: in the first stage, position control is performed and the position control results are converted into angle commands; in the second stage, attitude control is performed over the calculated angle values. In this thesis, the neuro-fuzzy controller is shown to work very well in single-layer control structures, i.e., there was no overshoot and the settling time was very short.
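The Sugeno inference at the core of such a fuzzy controller can be sketched as follows (a zero-order, two-rule example with Gaussian memberships; all parameter values are illustrative and not taken from the thesis):

```python
import math

def gauss(x, c, s):
    """Gaussian membership function with center c and spread s."""
    return math.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno(x):
    """Zero-order Sugeno FIS: the crisp output is the firing-strength-weighted
    average of constant consequents, so no defuzzification step is needed."""
    rules = [
        (gauss(x, -1.0, 1.0), -2.0),   # rule 1: "x is negative" -> consequent -2
        (gauss(x, +1.0, 1.0), +2.0),   # rule 2: "x is positive" -> consequent +2
    ]
    num = sum(w * y for w, y in rules)
    den = sum(w for w, _ in rules)
    return num / den

print(sugeno(0.0))   # symmetric memberships cancel: output 0
print(sugeno(1.0))   # dominated by rule 2: output pulled toward +2
```

In an ANFIS, exactly these membership centers, spreads, and consequent constants are the parameters adjusted by training.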
However, it is seen from the quadrotor control results that the neuro-fuzzy controller cannot give the desired performance in the two-layered control structure. Therefore, the feedback error learning control system, in which the fuzzy controller works together with conventional controllers, is examined. Fundamentally, in the feedback error learning structure there is an inverse dynamic model parallel to a classical controller. The inverse dynamic model aims to increase performance by influencing the classical controller signal. In the literature there are many papers on the structure of feedback error learning control, and different approaches have been proposed. In the structure used in this work, the fuzzy logic parameters are trained using ANFIS with the error as input. The fuzzy logic control signal obtained as a result of training is added to the conventional controller signal. This approach has been tested on models such as the DC motor and the quadrotor, and it is seen that feedback error learning control with ANFIS increases the control performance. The antecedent and consequent parameters of type-1 fuzzy logic systems consist of certain, precisely defined membership functions. Type-2 FLSs are proposed to better represent uncertainties; their membership functions are designed to include uncertainty. The type-2 FLS is operationally difficult because of these uncertainties. In order to simplify type-2 FLS operations, the interval type-2 FLS has been proposed in the literature as a special case of the generalized type-2 FLS. Interval type-2 membership functions are designed as a two-dimensional projection of general type-2 membership functions and represent the area between two type-1 membership functions. The area between these two type-1 membership functions is called the Footprint of Uncertainty (FOU). This uncertainty also occurs in the weight values obtained from the antecedent membership functions.
The consequent membership functions are also type-2, and it is not possible to perform the defuzzification step directly because of the uncertainty. Therefore, type-reduction methods have been developed to reduce the type-2 FLS to a type-1 FLS. Type-reduction methods try to find the highest and lowest values of the fuzzy logic model output; for this, a switch point must be determined among the weights obtained from the antecedent membership functions. Type-reduction methods find these switch points by iteration, and this process is computationally expensive, so many different methods have been proposed to minimize this computational load. In 2018, an iteration-free method called the Direct Approach (DA) was proposed, which performs the type reduction faster than the iterative methods. In the literature, studies on training the parameters of the type-2 FLS, such as neural networks and genetic algorithms, still continue; these studies are also used in interval type-2 fuzzy logic control systems. Interval type-2 ANFIS structures have been proposed in the literature, but they are not effective because of the uncertainties of the interval type-2 membership functions. FLS parameters used for ANFIS training should not contain uncertainties, yet the type-2 FLS inherently contains uncertainty. For this reason, the Karnik–Mendel algorithm, one of the type-reduction methods, is modified to apply ANFIS to the interval type-2 FLS. The modified Karnik–Mendel algorithm gives the same results as the original Karnik–Mendel algorithm, while also giving exact parameter values for use in ANFIS. One can conclude that ANFIS training of the interval type-2 FLS has been developed successfully and has been used for system control.
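The iterative Karnik–Mendel type reduction described above can be sketched as follows (a minimal version for the two endpoints of the type-reduced interval; the points, weights, and names are illustrative, and the thesis's modified variant is not reproduced here):

```python
def km_endpoint(x, wl, wu, side):
    """Karnik-Mendel iteration for one endpoint of the type-reduced interval.
    x must be sorted ascending; wl/wu are lower/upper membership weights."""
    w = [(a + b) / 2.0 for a, b in zip(wl, wu)]   # start from mid weights
    while True:
        y = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
        k = max(i for i, xi in enumerate(x) if xi <= y)   # switch point
        if side == "left":   # upper weights before the switch, lower after
            w_new = wu[:k + 1] + wl[k + 1:]
        else:                # lower weights before the switch, upper after
            w_new = wl[:k + 1] + wu[k + 1:]
        y_new = sum(wi * xi for wi, xi in zip(w_new, x)) / sum(w_new)
        if abs(y_new - y) < 1e-12:   # converged: switch point is stable
            return y_new
        w = w_new

x  = [0.0, 1.0, 2.0, 3.0]
wl = [0.2, 0.2, 0.2, 0.2]     # lower memberships (FOU lower bound)
wu = [0.8, 0.8, 0.8, 0.8]     # upper memberships (FOU upper bound)
yl = km_endpoint(x, wl, wu, "left")
yr = km_endpoint(x, wl, wu, "right")
print(yl, yr)  # the type-reduced interval [yl, yr], with yl < yr
```

The crisp output of the interval type-2 system is then typically taken as the midpoint (yl + yr) / 2.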

ÖgeNumerical and experimental study of fluid structure interaction in a reciprocating piston compressor(Graduate School, 2022-01-14) Coşkun, Umut Can ; Acar, Hayri ; Güneş, Hasan ; 511132113 ; Aeronautics and Astronautics EngineeringComprising household refrigerators, cold storage, cold chain logistics, industrial freezers, air conditioners, cryogenics and heat pumps, the refrigeration industry is a vital part of many sectors such as food, health care, air conditioning, sports, leisure, and the production of plastics and chemicals, along with electronic data processing centers and scientific research facilities, which cannot operate without refrigeration. Roughly 5 billion refrigeration systems are in operation worldwide; they consume 20% of the electricity used globally, are responsible for 7.8% of the world's GHG emissions, account for 500 billion USD in annual equipment sales, and employ 15 million people. Around 37% of the global warming impact of refrigeration comes from the direct emission of fluorinated refrigerants (CFCs, HCFCs and HFCs), while 63% is due to indirect emissions from the electricity generation that refrigeration requires. Both the economic goals of making refrigeration units cheaper and more durable, and the environmental goal of making these units more efficient and less hazardous to the planet, require meticulous research on these units. Domestic refrigeration systems alone make up approximately 40% of refrigeration units, and they mostly use hermetic, reciprocating-type compressors. The design and improvement of such compressors is a multidisciplinary subject and requires a deep understanding of the heat and momentum transfer between the refrigerant and the solid components of the compressor, which can only be gained through scientific investigation using experimental and numerical techniques. 
In this thesis study, considering the advantages of numerical studies, a multiphysics numerical model was developed of the flow through the gas line of a household, hermetically sealed, reciprocating piston compressor and of the fluid-structure interaction around the valve reeds, including the contact between deformable parts. Given the complexity of the model, the problem was divided into several steps, and at each step the numerical results were validated with experiments. In the first chapter of this thesis, the motivation behind the study is discussed, and a theoretical background on refrigeration, compressors and fluid-structure interaction is given together with a comprehensive literature survey, to establish the position of the thesis among the academic literature and its novelty. In the second chapter, the experimental studies conducted throughout the thesis are presented. The experimental studies were divided into two sections. In the first section, the valve reed dynamics are investigated experimentally outside the compressor under multiple test conditions. A test rig was built for this purpose, and the displacement of the valve reed under a constant point load, its free oscillation, and its impact against the valve plate from a pre-deformed shape were measured in order to validate the numerical work. In the second section, the compressor specifications such as cooling capacity, compression work and average refrigerant mass flow rate, along with surface temperatures and instantaneous pressure variations at several locations inside the compressor, were measured in a calorimeter setup to provide boundary conditions and validation for the numerical analyses. The numerical work of the thesis is explained in the third chapter. Modelling the whole compressor gas line between the compressor inlet and outlet, including the strongly coupled interaction between the refrigerant and deformable solid parts such as the valve reeds, is too complex an undertaking for a single step. 
Therefore, the numerical problem was divided into seven smaller problems investigated consecutively. At each step, the problems were isolated, identified and solved, and the results were validated. The similarity of each step to the final model was increased, along with its complexity, as a natural consequence of this progression. The numerical studies also briefly cover the advantages and disadvantages of using an open-source versus a commercial multiphysics solver, with OpenFOAM and Ansys Workbench utilized for this purpose, respectively. After the simplified steps of the numerical model were completed, the whole gas line of a compressor produced by Arçelik was modelled. The numerical results were compared against experimentally obtained data, and good agreement was achieved between them. The developed method was further used for a parametric investigation of the compressor design to demonstrate the capabilities and benefits of the numerical model. Finally, the results of the whole thesis study, the experience gained throughout the work, and the planned future work are discussed in the final chapter.

ÖgeA study on static and dynamic buckling analysis of thin walled composite cylindrical shells(Graduate School, 2022-01-24) Özgen, Cansu ; Doğan, Vedat Ziya ; 511171148 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay MühendisliğiThin-walled structures are used widely across many industries; aircraft, spacecraft and rockets are typical examples. The reason for their use is their high strength-to-weight ratio. For a cylinder to be classified as thin-walled, the ratio of radius to thickness must exceed 20, and one of the problems encountered in such structures is buckling. Buckling can be defined as a state of instability in a structure under compressive loads. This instability appears in the load-displacement graph as the curve following two different paths; the possible behaviors are snap-through and bifurcation. The compressive loading that causes buckling may be an axial load, torsional load, bending load or external pressure; in addition, buckling may occur due to temperature change. Within the scope of this thesis, the buckling behavior of thin-walled cylinders under axial compression was examined. A cylinder under axial load exhibits some displacement. When the applied load reaches a critical level, the structure moves from one state of equilibrium to another. Beyond this point, the structure shows large displacements and loses stiffness. The load that the structure can carry decreases considerably, but it continues to carry load; the behavior beyond this point is called post-buckling behavior. The critical load level for the structure can be determined using the finite element method, and a linear eigenvalue analysis can be performed to determine the static buckling load. 
However, it should be noted that eigenvalue-eigenvector analysis can only provide an approximate estimate of the buckling load, and its resulting buckling shape can be input into nonlinear analyses as a form of imperfection. It can nevertheless be preferred for parameter sweeps and comparisons, since it is cheaper than other analysis types. Because the buckling load is highly sensitive to imperfections, nonlinear methods with geometric imperfection should be used to estimate the buckling load more precisely. It is not possible to define geometric imperfection in a linear eigenvalue analysis, so a different analysis type must be selected in order to add it. For example, an analysis model that includes imperfection can be established with the Riks method, a nonlinear static analysis type. Unlike the Newton-Raphson method, the Riks method is capable of tracking back along the load-displacement curve, which makes it suitable for buckling analysis. In a Riks analysis, in contrast to linear eigenvalue analysis, it is recommended to add an imperfection: with the imperfection added, the problem becomes a limit-load problem instead of a bifurcation problem, since sharp turns in the curve can otherwise cause divergence in the analysis. Another nonlinear approach to this static phenomenon is quasi-static analysis, which uses a dynamic solver. The important point here is that the inertial effects should be small enough to neglect: the kinetic energy should be compared with the internal energy at the end of the analysis and shown to be negligible beside it. Also, if the event is solved over its actual duration, the analysis becomes quite expensive, so the time must be scaled. To scale the time correctly, a frequency analysis can be performed first and the analysis time chosen to be longer than the period of the first natural frequency. 
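As a reference point for such analyses, the classical critical stress of a perfect, axially compressed isotropic cylinder has a closed form against which eigenvalue results are often checked. The dimensions below are illustrative, not from the thesis (note R/t = 250, well inside the thin-walled regime defined above):

```python
# Classical critical stress of a perfect, axially compressed thin isotropic
# cylinder: sigma_cr = E*t / (R*sqrt(3*(1 - nu^2))), about 0.605*E*t/R for
# nu = 0.3. Dimensions are illustrative assumptions, not thesis data.
import math

def sigma_cr(E, t, R, nu=0.3):
    return E * t / (R * math.sqrt(3.0 * (1.0 - nu * nu)))

s = sigma_cr(E=70e9, t=1e-3, R=0.25)   # aluminium-like shell, R/t = 250
```

Real shells buckle well below this value, which is exactly why the imperfection-sensitive nonlinear methods discussed in the abstract are needed.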
For the three analysis methods mentioned in this study, validation studies were carried out against examples from the literature. Since each analysis type gave consistent results, the linear eigenvalue method, being the cheapest and sufficient for comparison studies, was used to examine the effect of parameters on the static buckling load. While the static buckling analyses mentioned above were displacement-controlled, the analyses for determining the dynamic buckling load were load-controlled, and their results were evaluated against different dynamic buckling criteria. Some of these criteria are the Volmir criterion, the Budiansky-Roth criterion, and the Hoff-Bruce criterion. When the Budiansky-Roth criterion is used, the estimated buckling load is applied to the structure and the displacement-time graph is drawn; if a major jump in displacement is observed, the structure can be assumed to have buckled dynamically. For the Hoff-Bruce criterion, the velocity-displacement graph is drawn; if this plot does not remain concentrated in a single region but scatters, the structure is considered to have moved into the unstable region. As with the static buckling analyses, the dynamic buckling analyses were first validated against a sample study from the literature. After validating the analysis methods, numerical studies were carried out on the effect of selected parameters on the buckling load. First, the effect of the stacking sequence of the composite layers was examined. In this context, a comprehensive study was carried out on both which layer's angle change has the greatest effect and which angle yields the highest buckling load. In addition, some angle combinations were constructed in accordance with the stacking rules found in the literature. 
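The Budiansky-Roth load-sweep logic can be illustrated on a single-degree-of-freedom oscillator with a snap-through type softening restoring force. The model and all numbers are a toy stand-in for illustration, not the shell model of the thesis:

```python
# Toy Budiansky-Roth demonstration: sweep the step-load amplitude on a
# softening (snap-through type) oscillator and take the load at which the
# peak response jumps abruptly as the dynamic buckling load.

def peak_response(p, dt=0.002, T=60.0):
    x = v = 0.0
    peak = 0.0
    for _ in range(int(T / dt)):
        a = -0.2 * v - x + x**3 + p        # light damping + softening force + step load
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
        if abs(x) > 10.0:                  # escaped the pre-buckling well
            break
    return peak

loads = [0.05 + 0.02 * i for i in range(18)]      # sweep 0.05 .. 0.39
peaks = [peak_response(p) for p in loads]
# Budiansky-Roth: first load at which the peak displacement jumps
p_dyn = next((p for p, pk in zip(loads, peaks) if pk > 3.0), None)
```

For this toy system the static limit load is 2/(3*sqrt(3)), roughly 0.385, and the detected dynamic buckling load falls below it, mirroring the usual observation that step loading buckles a structure earlier than quasi-static loading.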
For those stacking sequences, the buckling loads were calculated both with finite element analyses and analytically. In addition, comparisons were made between different materials: the buckling load was calculated both for cylinders of the same thickness but different masses, and for cylinders of the same mass but different thicknesses. The highest load for cylinders of the same mass was obtained for a uniform composite. Although the highest buckling load in the same-thickness analyses was obtained for steel, the composite material gave the highest ratio of buckling load to mass. The effects of the length-to-diameter ratio and of the thickness were also examined: as the length-to-diameter ratio increases, the buckling load decreases, and the buckling load increases with the square of the thickness. In dynamic buckling analysis, the loading duration and the shape of the loading profile are also known to affect the results. Furthermore, the critical buckling load is affected by imperfections in the structure, which usually arise during manufacture. How sensitive a structure is to imperfection may vary with different parameters. Imperfections can be divided into three groups: geometric, material and loading. Cylinders under axial load are particularly affected by geometric imperfection, which can be defined as how far the structure deviates from a perfect cylinder. The deviation can be determined by various measurement methods. Although it is not possible to measure the imperfection of every structure, studies in the literature give an idea of how much imperfection to expect. 
By adding the measured imperfection into the buckling calculations, both the change in the buckling load of the measured cylinders and the effect of imperfection on the buckling load can be quantified. In cases where the imperfection cannot be measured, an eigenvector-shaped imperfection obtained from a linear buckling analysis can be included in the finite element model, and the critical buckling load of the imperfect structure calculated with nonlinear analysis methods. In this study, it was investigated how the imperfection sensitivity changes under both static and dynamic loading with different parameters: the length-to-diameter ratio, the stacking sequence of the composite layers, and the shape of the added imperfection. The most important result of the imperfection sensitivity study is that the effect of imperfection on the buckling load is quite large: a geometric imperfection equal to the thickness can reduce the buckling load by up to half.
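A widely used empirical expression of this imperfection sensitivity is the NASA SP-8007 lower-bound knockdown factor for axially compressed cylinders; the thinner the shell, the larger the reduction from the classical load. The R/t values below are illustrative:

```python
# NASA SP-8007 empirical lower-bound knockdown factor for axially compressed
# cylinders: gamma = 1 - 0.901*(1 - exp(-sqrt(R/t)/16)). The classical
# critical load is multiplied by gamma to account for imperfections.
import math

def knockdown(R_over_t):
    phi = math.sqrt(R_over_t) / 16.0
    return 1.0 - 0.901 * (1.0 - math.exp(-phi))

gammas = {rt: knockdown(rt) for rt in (100, 250, 500, 1000)}
```

For R/t = 1000 the factor drops to roughly 0.22, consistent with the abstract's observation that an imperfection on the order of the wall thickness can cut the buckling load by half or more.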

ÖgeA study on optimization of a wing with fuel sloshing effects(Graduate School, 2022-01-24) Vergün, Tolga ; Doğan, Vedat Ziya ; 511181206 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay MühendisliğiIn general, sloshing is defined as the free-surface motion in multiphase flows: the movement of a liquid inside another object. Sloshing has been studied for centuries; the earliest work [48] in the literature was carried out by Euler in 1761 [17], and Lamb [32] examined sloshing theoretically in 1879. With the development of technology it has become increasingly important, and it appears in many fields such as aviation, automotive and naval engineering. In the aviation industry, it is considered in fuel tanks. Since the outcomes of sloshing may cause instability or damage to the structure, it is one of the concerns in aircraft design. One of the most popular ways to limit its adverse effects is to add baffles to the fuel tank; however, this solution comes with a disadvantage: an increase in weight. To minimize the effect of the added weight, designers optimize the structure by changing its shape, thickness, material, etc. In this study, a NACA 4412 airfoil-shaped composite wing is used and optimized in terms of safety factor and weight. To do so, an initial composite layup is determined from current designs and recommendations in the literature. Once the initial design is complete, the system is imported into a transient solver in the Ansys Workbench environment to perform numerical analysis in the time domain. To achieve more realistic cases, the wing with different fuel tank fill levels (25%, 50%, and 75%) is exposed to aerodynamic loads while the aircraft is rolling, yawing, and Dutch rolling. The aircraft is assumed to fly at a constant speed of 60 m/s (~120 knots) for applying the aerodynamic loads. 
The resultant force for 60 m/s airspeed is applied to the wing surface as a distributed pressure via 1-way fluid-structure interaction (1-way FSI). With this method, only fluid loads are transferred to the structural system, and the effect of wing deformation on the fluid flow field is neglected. Once gravity and aerodynamic loads are applied to the wing structure, a rotation of 20 deg/s for 3 seconds is prescribed for all types of maneuver. The fluid properties, fuel level and computational fluid dynamics (CFD) solver settings are defined in the Ansys Fluent environment. Once both the structural and fluid systems are ready, system coupling performs the 2-way fluid-structure interaction (2-way FSI), in which fluid loads and structural deformations are exchanged at every step: the structural system transfers displacements to the fluid system while the fluid system transfers pressures back to the structure. After nine analyses, the critical case, in which the system has the lowest minimum safety factor, is found to be the 75% filled fuel tank during Dutch roll. After determining the critical case, the optimization process is started. During optimization, 1-way FSI is used, since the computational cost of the 2-way FSI method is approximately 35 times that of 1-way FSI. However, shorter run time alone is not sufficient grounds to accept 1-way FSI as the solution method; the deviation between the two methods was also investigated, and it was found to be about 1% in terms of safety factor for this problem. In light of this, 1-way FSI is preferred for applying both sloshing and aerodynamic loads to the structure, reducing the computational time. After the method selection, the thickness optimization is started. 
Ansys Workbench creates a design of experiments (DOE) to examine response surface points. Latin hypercube sampling design (LHSD) is preferred as the DOE method, since it generates non-collapsing and space-filling points that produce a better response surface. After creating the initial response surface using Genetic Aggregation, the optimization is run using the Multi-Objective Genetic Algorithm (MOGA). The optimum values are then verified by re-analyzing them in Ansys Workbench. On verification, a notable deviation between the optimized and verified results is observed; to reduce it, refinement points are added to the response surface, and this process is repeated until the deviation falls below 1%. The optimum thicknesses are then rounded to a hundredth of a millimeter, since their raw precision is too fine to manufacture, and the final thickness values are verified. As a result, the weight is decreased from 100.64 kg to 94.35 kg, a 6.3% reduction, while the minimum safety factor of the system is only reduced from 1.56 to 1.54. The study concludes that a 6.3% reduction in weight translates into energy savings.
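A minimal Latin hypercube sample with the non-collapsing property mentioned above can be generated as follows. The variable bounds are invented for illustration, and this is a generic sketch, not Ansys's implementation:

```python
# Minimal Latin hypercube sample: each variable's range is cut into n equal
# bins, one point is drawn per bin, and the bin order is shuffled per
# dimension, so no two samples share a bin in any variable (non-collapsing).
# The thickness bounds are invented for illustration.
import random

def latin_hypercube(n, bounds, seed=0):
    rng = random.Random(seed)
    dims = []
    for lo, hi in bounds:
        idx = list(range(n))
        rng.shuffle(idx)                       # decouple bins across dimensions
        col = [lo + (hi - lo) * (i + rng.random()) / n for i in idx]
        dims.append(col)
    return list(zip(*dims))                    # n points, one bin per dimension

# e.g. 8 candidate thicknesses in mm for two plies
samples = latin_hypercube(8, [(0.1, 2.0), (0.1, 2.0)])
```

Because every one-dimensional projection covers all bins exactly once, the sample remains informative even when some design variables turn out to be unimportant, which is why LHSD suits response-surface building.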

ÖgeBuckling analysis of plates made of functionally graded material under mechanical and thermal loads(Lisansüstü Eğitim Enstitüsü, 2022-01-27) Aktaş, İbrahim Utku ; Doğan, Vedat Ziya ; 511171115 ; Uçak ve Uzay MühendisliğiMaterial selection plays a crucial role in every engineering application; progress in almost every engineering field is directly tied to the sophistication of the materials used in it. The evolution from monolithic materials to alloys, and then the development of composite materials, arose because an existing material class could no longer answer the needs of its age. Most engineering applications require materials with mutually conflicting properties that cannot be found in a monolithic material, and alloying different materials is limited by the thermodynamic behavior of the constituents and by how far one material can be mixed with another. Functionally graded materials (FGMs) were born from the need to combine two materials so that the result keeps functioning and retains its properties even after exposure to severe operating environments. Although the functionally graded material was initially developed for a thermal barrier application, its use has since broadened to solve a range of engineering problems, such as applications demanding extreme wear and corrosion resistance, and this class of material is now exploited in fields such as aerospace, automotive and biomedical engineering. Functionally graded materials emerged as a consequence of the failures of conventional composite materials in severe operating environments; these failures stem from the sharply defined interface between the layers that make up the composite. The interface creates a high stress concentration in this region and promotes the initiation and propagation of the cracks that cause the ultimate failure of the composite; this crack formation and growth process is called delamination. The problem encountered in a space shuttle project in Japan, which paved the way for functionally graded materials, posed the question of how this distinct interface in conventional composites could be eliminated while the composite still performed its intended thermal barrier function. By grading the interface gradually, researchers were able to systematically remove the sharp interface of the conventional composite, thereby reducing the stress concentration there, and the resulting functionally graded material survived severe operating conditions without failure. Consequently, beyond the original purpose of serving as a thermal shield, functionally graded materials have also been used in various other engineering applications. Functionally graded materials are advanced composite materials whose composition, and hence whose properties, vary through the volume of the material. Aerospace vehicles are subjected to many mechanical and thermal loads, above all aerodynamic loads, and these loads are used to size the vehicle's structural members. A safe air vehicle is designed so that its structure carries the loads it experiences without failing. The structural members of an air vehicle can fail or be damaged in many different ways, and being able to predict these and design the structure accordingly is of vital importance. In addition, buckling, which does not fracture structures but leads to structural instability, is a very important topic in aviation: for example, the loads on an aircraft can subject the skin panels of the wing to in-plane compressive or tensile loads. When skin elements are subjected to compressive loads, buckling can occur, which can both disturb the aerodynamic flow over the wing and render the structure unstable. In such cases the load-carrying capacity of the structure changes and post-buckling calculations become necessary, so predicting when structural elements may buckle is of great importance. In this thesis, the buckling behavior of plates made of functionally graded material under thermal and mechanical loads is treated systematically. Chapter 1 gives a general overview of the work and states its aim and motivation. Chapter 2 reviews previous studies on functionally graded plates; before presenting them, the basic buckling problem is defined, starting with the buckling of columns and beams and then moving on to the buckling of plates, so as to give the reader the foundations of buckling theory. A short introduction to functionally graded materials and their history follows, together with the academic studies on the buckling of functionally graded materials. Chapter 3 presents the mechanics of plates made of conventional composite materials, since a thorough understanding of conventional composite plates is essential for understanding plates made of functionally graded material: the laminated composite plate theories are outlined briefly, and then the Classical Laminated Plate Theory (CLPT) and the First-Order Shear Deformation Theory (FSDT) are explained in detail. Chapter 4 briefly covers the manufacturing methods of functionally graded materials and shows how the effective material properties are modelled. Chapter 5 focuses on the plate buckling problem introduced earlier and discusses its analytical solution methods under specific boundary conditions. First, the buckling problem of isotropic plates is solved under boundary conditions constructed to satisfy the Navier and Levy conditions separately. Then, an analytical model based on CLPT is established to solve the buckling problem of functionally graded plates. This model is solved, with the aid of code written in MATLAB, for functionally graded plates assumed simply supported on all edges under different loadings. These loadings fall into two groups, mechanical and thermal. For the mechanical loadings, three cases are considered: buckling under uniaxial compression, under biaxial compression, and under combined biaxial compression-tension. The thermal loading conditions are likewise applied to the structure in three different forms, considering different temperature distributions through the thickness, and the buckling analysis is carried out for each. First, the critical buckling temperature difference is found for a constant temperature distribution through the thickness; the analysis is then repeated for a linearly varying distribution, and finally for a nonlinear temperature distribution through the thickness. All the results obtained are compared with previous studies, and CLPT is seen to give quite successful results for thin FG plates. In Chapter 6, buckling analyses are performed with the finite element packages PATRAN and NASTRAN and compared with the analytical CLPT results. The remaining chapters briefly evaluate all the work carried out and discuss possible future studies on this subject.
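The power-law gradation commonly used to model the effective properties of such functionally graded plates can be sketched as follows. The alumina/aluminium values are typical illustrative numbers, not data from the thesis:

```python
# Power-law (Voigt rule-of-mixtures) gradation for a functionally graded
# plate: the ceramic volume fraction varies through the thickness as
# Vc = (z/h + 1/2)^n, and an effective property follows
# P(z) = (Pc - Pm)*Vc + Pm. Material values are illustrative assumptions.

def effective_property(z, h, Pc, Pm, n):
    Vc = (z / h + 0.5) ** n          # ceramic volume fraction, 0 at bottom face
    return (Pc - Pm) * Vc + Pm

h = 0.02                              # plate thickness [m]
E_c, E_m = 380e9, 70e9                # Young's moduli: alumina, aluminium [Pa]
E_top = effective_property(+h / 2, h, E_c, E_m, n=2.0)   # fully ceramic face
E_bot = effective_property(-h / 2, h, E_c, E_m, n=2.0)   # fully metal face
E_mid = effective_property(0.0, h, E_c, E_m, n=2.0)
```

The gradation index n controls how quickly the plate transitions from metal to ceramic; n = 0 gives a fully ceramic plate, and large n a nearly fully metallic one.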

ÖgeModel predictive control based cooperative pursuit evasion for uav(Graduate School, 2022-02-18) Akbıyık, Mustafa Berkay ; Acar, Hayri ; Özkol, İbrahim ; 511181131 ; Aeronautical and Astronautical EngineeringThis thesis proposes a game-theoretic, model predictive control based guidance approach for the pursuit-evasion problem of UAVs. The main idea is that guided swarm UAVs pursue an adversary UAV that evades in order to survive as long as possible. The game-theoretic formulation of pursuit-evasion rests on designing cost functions for each pursuer to converge on the adversary evader. The proposed approach is decentralized: each pursuer can handle its mission independently, without being affected by the other pursuers. The main contribution is the formulation of the swarm pursuit-evasion problem in game-theoretic form, which enables the development of optimization-based algorithms that give the pursuers superior strategies in one-to-one and two-to-one air combat scenarios. This work proposes an algorithm to enhance the applicability of game-theoretic non-convex model predictive control problems on real systems that have nonlinear control and state constraints. The proposed algorithm provides a model predictive control based guidance system that orients the pursuers according to the evader's dynamics and position. The nonlinear constraints are convexified along the finite horizon, without loss of generality, through successive linearizations. After discretization of the dynamics, the resulting suboptimal convex problem can be applied in a model predictive scheme to time-critical scenarios such as cooperative pursuit-evasion of aerial vehicles.
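The receding-horizon idea, predicting over a finite horizon, applying only the first control, and re-planning, can be illustrated with a toy sampling-based pursuit. This is a much simpler stand-in for the convexified game-theoretic MPC of the thesis, and all speeds and geometry are invented:

```python
# Toy receding-horizon pursuit: at every step the pursuer simulates a small
# set of candidate headings over a short horizon (assuming the evader holds
# its current velocity), picks the heading minimising the predicted terminal
# distance, applies only the first move, and re-plans.
import math

def pursue(p, e, ev_vel, v_p=1.5, dt=0.1, horizon=10, max_steps=400):
    for step in range(max_steps):
        def predicted_gap(th):
            px, py, ex, ey = p[0], p[1], e[0], e[1]
            for _ in range(horizon):            # roll both agents forward
                px += v_p * math.cos(th) * dt
                py += v_p * math.sin(th) * dt
                ex += ev_vel[0] * dt
                ey += ev_vel[1] * dt
            return math.hypot(px - ex, py - ey)
        th = min((2.0 * math.pi * k / 16 for k in range(16)), key=predicted_gap)
        p = (p[0] + v_p * math.cos(th) * dt, p[1] + v_p * math.sin(th) * dt)
        e = (e[0] + ev_vel[0] * dt, e[1] + ev_vel[1] * dt)
        if math.hypot(p[0] - e[0], p[1] - e[1]) < 0.3:
            return step + 1                     # pursuer caught the evader
    return None

# Evader starts 5 units away, fleeing sideways; the pursuer is 50% faster.
steps_to_catch = pursue(p=(0.0, 0.0), e=(5.0, 0.0), ev_vel=(0.0, 1.0))
```

Because the prediction accounts for the evader's motion, the pursuer steers toward an intercept point rather than the evader's current position, which is the practical benefit of the predictive formulation.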

ÖgeSafe motion planning and learning for unmanned aerial systems(Graduate School, 2022-05-06) Perk, Barış Eren ; İnalhan, Gökhan ; 511142104 ; Aeronautics and Astronautics EngineeringTo control unmanned aerial systems, we rarely have a perfect system model, and safe yet aggressive planning is challenging for nonlinear and underactuated systems. Expert pilots, however, demonstrate maneuvers that are deemed to be at the edge of the flight envelope. Inspired by biological systems, this work introduces a framework that leverages methods from control theory and reinforcement learning to generate feasible, possibly aggressive, trajectories. For the control policies, Dynamic Movement Primitives (DMPs) imitate pilot-induced primitives, and DMPs are combined in parallel to generate trajectories that reach the original or different goal points. The stability properties of the DMPs and of their overall systems are analyzed using contraction theory. For reinforcement learning, Policy Improvement with Path Integrals (PI2) was used on the maneuvers. The results show that PI2-updated policies are feasible, and that the parallel combination of different updated primitives transfers the learning within the contraction regions. The proposed methodology can be used to imitate, reshape, and improve feasible, possibly aggressive, maneuvers. In addition, trajectories generated by optimization methods such as Model Predictive Control (MPC) can be exploited, so that a library of maneuvers can be generated instantly. For application, 3-DOF (degrees of freedom) helicopter and 2-D UAV (unmanned aerial vehicle) models are utilized to demonstrate the main results.
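The transformation system of a discrete DMP is a goal-attracting spring-damper plus a learned, phase-dependent forcing term. A minimal rollout with the forcing term omitted (it is what would encode the demonstrated maneuver) looks like this, using the common beta = alpha/4 critical-damping choice; the gains are standard illustrative values, not the thesis parameters:

```python
# Minimal discrete dynamic movement primitive (DMP) transformation system:
#   tau*z' = alpha*(beta*(g - y) - z),  tau*y' = z
# with the learned forcing term omitted for brevity. With beta = alpha/4 the
# system is critically damped and converges to the goal g without overshoot.

def dmp_rollout(y0, g, tau=1.0, alpha=25.0, dt=0.001, T=1.5):
    beta = alpha / 4.0            # critical damping
    y, z = y0, 0.0
    for _ in range(int(T / dt)):
        z += dt * alpha * (beta * (g - y) - z) / tau
        y += dt * z / tau
    return y

y_end = dmp_rollout(y0=0.0, g=1.0)   # converges to the goal g = 1.0
```

Guaranteed convergence to the goal regardless of the forcing term is what makes DMPs attractive for safe imitation: the learned term shapes the transient, while the spring-damper guarantees where the motion ends.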

ÖgeFlight safety risk awareness at flight test activities with analytical hierarchy process method(Graduate School, 2022-05-23) Akgür, Yusuf ; Kodal, Ali ; 511191143 ; Aeronautics and Astronautics EngineeringIn 1903, the Wright brothers succeeded in flying the first manned, powered, heavier-than-air aircraft, which soon led to the birth of aviation and the spread of aircraft. Aircraft, which began to be produced for different purposes, have caused many accidents and even deaths in service and especially during the design development stages. Over the years, various arrangements have been made, international agreements have been signed, and local and international organizations have been established in order to prevent these accidents and deaths and to manage aircraft operations safely. Annex 19, Safety Management System (SMS), the 19th and last annex of the International Civil Aviation Organization (ICAO) air transport rules, is a system for managing the safety risks of organizations carrying out aviation activities and for ensuring the effectiveness of safety risk controls; it includes systematic procedures, practices and policies for the management of these risks. The implementation of SMS in organizations carrying out civil aviation activities has begun to be made compulsory by the relevant local and international authorities. Studies that aim to verify whether a designed and manufactured aircraft provides the desired performance are called flight tests. Advances in technology, as they were incorporated into aircraft design processes, led to formal requirements and specifications that provide universal benchmarks for aircraft design. In parallel with these developments, the aims and practices of flight testing have also matured into a discipline. 
Flight tests are high-risk flights, since they are carried out with aircraft that have not yet been certified, have low flight hours, and still hold many unknowns. For these reasons, within the scope of flight test activities the risks should be determined in advance, the necessary mitigation studies carried out, and the test procedures defined accordingly. The Flight Test Operations Manual (FTOM) guidance document published by EASA states that flight test organizations should improve their SMS. In this document, flight-test risk management and the risk management carried out within the scope of the SMS are treated separately: flight-test risk management is responsible for the specific risks of each flight test, while SMS risk management is responsible for the recurring operational risks. Within the scope of this study, the Analytic Hierarchy Process (AHP), a hierarchical, weighted, multi-criteria decision analysis method that combines qualitative and quantitative analysis, was used to provide holistic awareness of flight safety risks in flight test activities. When using the weighting function of the AHP method, the safety risk matrix published by the SMS risk management of the relevant institution is taken as the basis, with the aim of determining how important the risks are relative to one another. The values selected from the risk matrix for the flight-test-specific risk and the operational risks are multiplied by coefficients determined for each risk level to create a comparison matrix, and the weight of each risk is calculated. The flight-test risk is expected to have the largest share in the resulting weighting, and the results are evaluated in this light. 
Corrective feedback on the coefficients determined for each risk level, on the choice of risk values, and on the structure of the risk matrix are gains that can be achieved in addition to flight safety risk awareness. Using the safety risk matrix and its values while calculating the weights of the risks eliminates the subjective evaluation in the AHP method and makes the consistency index 0. However, the method remains subjective through the structure of the risk matrix and the selected risk values and coefficients. For this reason, the feedback obtained from the outputs of the method will allow these subjective values to change and reach their optimum form over time. This study, which started from the definitions in the EASA Part 21 FTOM Guide document, serves as an example of how flight test risk management and a Safety Management System can work together. As a result, the aim is to raise awareness of the flight safety risks involved in flight test activities among the relevant flight test team by making use of the weighting feature of the AHP method.
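As an illustration of the weighting step described above, the sketch below builds a pairwise comparison matrix from hypothetical risk scores (a risk-matrix value multiplied by a level coefficient; the numbers are invented, not from the thesis) and recovers the AHP weights. Because the matrix is built from score ratios, it is perfectly consistent, so the consistency index comes out numerically zero, exactly as the abstract notes.

```python
import numpy as np

# Hypothetical scores: risk-matrix value multiplied by a risk-level coefficient.
# The first entry stands for the flight-test-specific risk, the rest for
# operational risks; the numbers are illustrative only.
scores = np.array([20.0, 8.0, 5.0, 2.0])
n = len(scores)

# Pairwise comparison matrix built from score ratios: A[i, j] = score_i / score_j.
A = scores[:, None] / scores[None, :]

# AHP weights: principal eigenvector of A, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); a ratio-built matrix is
# perfectly consistent (lambda_max = n), so CI is numerically zero.
CI = (eigvals[k].real - n) / (n - 1)
print(w)   # the flight-test risk carries the largest weight
print(CI)  # ~0
```

For a ratio-built matrix the eigenvector method reduces to normalizing the scores themselves; the eigenvector route is kept here because it is the general AHP procedure for matrices filled in by expert judgment.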

Item: Design and optimization of variable stiffness composite structures modeled using Bézier curve (Graduate School, 2022-06-09) Coşkun, Onur ; Türkmen, Halit S ; 511162115 ; Aeronautics and Astronautics Engineering
The usage of advanced fiber-reinforced polymer (FRP) matrix composites has increased dramatically since the first carbon fiber was patented in the 1960s. In particular, aerospace companies' interest in carbon fiber-reinforced polymer (CFRP) aircraft structures has gradually grown due to major performance improvements such as high strength- and stiffness-to-weight ratios and reduced weight. The traditional design approaches and manufacturing methodologies for CFRP structures in various industries have been well established and applied for more than 50 years. They were mainly developed for straight fibers, and the optimum design solutions have been achieved through the choice of constituent materials, fiber orientation angles that are often limited to 0, ±45, and 90 degrees, the laminate stacking sequence, and the total number of plies. However, the increasing complexity of structure geometries has resulted in complex layups and contours; therefore, advanced manufacturing methodologies such as Automated Fiber Placement (AFP) and Tailored Fiber Placement (TFP) have been developed, allowing CFRP structures with complex geometries, layups, and contours to be manufactured with improved productivity and process reliability. In addition, composite materials can be tailored more effectively to meet design requirements by changing the design approach from straight to curvilinear fibers. Composite structures designed with curvilinear fibers have spatially varying stiffness due to the local fiber orientations in each ply, and accordingly they are referred to as variable stiffness (VS) structures.
In this dissertation, variable stiffness composite plates and circular cylindrical shells modeled using parametric Bézier curves as curvilinear fiber paths are designed and optimized. The design method with parametric Bézier curves covers a wide and complex design space, from simple linear angle variation through constant-curvature paths to highly nonlinear angle variations. The designed VS composite structures are expressed with new layup definition conventions that use simple and intuitive variables such as segment/station angles and multipliers/curvatures. The optimum structural designs in the complex design space of plates and circular cylindrical shells are searched using, respectively, a multi-step optimization with multiple objectives, i.e., buckling and stiffness, and a novel pre-trained multi-step/cycle surrogate-based optimization (PMSO) framework with a single objective, i.e., buckling. First, VS composite plates and circular cylinders are designed with 'Direct Fiber Path Parameterization' (DFPP), which uses continuous curve functions for the fiber orientation angles at each point or grid in the laminate. Cubic and quadratic Bézier curves are used as curvilinear fiber paths. The fiber paths as Bézier curves are constructed with approximation and interpolation formulations. The approximation curve captures the defined angles at the start point and the end point, and the shape of the curve changes intuitively with the position of the control points. On the other hand, the interpolation curve follows the exact positions of the control points at the expense of control over the fiber angle; therefore, the fiber angles differ from the defined sector angles. Three types of parametric curves are formulated, i.e., the cubic Bézier interpolation curve and the quadratic and cubic Bézier approximation curves. The cubic Bézier approximation curves are specially formulated to define constant-curvature fiber paths.
Considering the characteristics of Bézier curves, intuitive conventions to define the layups of laminated VS plates and shells are proposed. The positions of the course boundaries within each ply are calculated using the reference fiber path, and the resulting courses are shifted along one direction to cover the VS plate and cylindrical shell surfaces. The reference fiber paths are defined with design variables such as sector/station angles and multipliers/curvatures, which are used to calculate the control points. The proposed layup definition allows one to move stations using multipliers within an interval; hence, it is possible to find lower-curvature fiber paths with the same sector angles. The minimum curvature value is a major characteristic of curvilinear fiber paths due to manufacturing constraints. The Golden Section Search and Downhill Simplex methods are used, depending on the design approach, together with the Bézier curve formulations. The Golden Section Search method, a technique for finding an extremum (minimum or maximum) of a unimodal function, is applied to approximation curves, and the Downhill Simplex method is applied to interpolation curves because of their multidimensional space with n multipliers. The curvature values are significantly reduced without changing the layup definitions; especially for quadratic Bézier approximation curves, the curvature distribution along the characteristic length gets close to the constant-curvature results. Three different geometries for VS plates (b/a ≈ 1.8) and two different geometries for VS circular cylinders, Cylinder 1 (L/R ≈ 2.67) and Cylinder 2 (L/R = 2), are modeled. In cylindrical coordinates, the courses laid on the cylinder are axially shifted to obtain circumferentially varying stiffness and strength; however, the effective width of the ply is modified to obtain continuous fiber paths around the circumference.
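A minimal sketch of the machinery described above — cubic Bézier evaluation, curvature along the path, and a Golden Section Search over a single multiplier that positions the interior control points — is given below. The quarter-turn geometry, end angles, and the search bracket are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def cubic_bezier(P, t):
    """Point, first and second derivative of a cubic Bezier curve at parameter t."""
    P0, P1, P2, P3 = P
    u = 1.0 - t
    b = u**3 * P0 + 3 * u**2 * t * P1 + 3 * u * t**2 * P2 + t**3 * P3
    d1 = 3 * (u**2 * (P1 - P0) + 2 * u * t * (P2 - P1) + t**2 * (P3 - P2))
    d2 = 6 * (u * (P2 - 2 * P1 + P0) + t * (P3 - 2 * P2 + P1))
    return b, d1, d2

def max_curvature(P, samples=201):
    """Maximum curvature |x'y'' - y'x''| / |B'|^3 sampled along the path."""
    kmax = 0.0
    for t in np.linspace(0.0, 1.0, samples):
        _, d1, d2 = cubic_bezier(P, t)
        speed = np.hypot(d1[0], d1[1])
        if speed > 1e-12:
            kmax = max(kmax, abs(d1[0] * d2[1] - d1[1] * d2[0]) / speed**3)
    return kmax

def golden_section(f, a, b, tol=1e-6):
    """Golden Section Search for the minimum of a unimodal f on [a, b]."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# Hypothetical quarter-turn path: fiber angle 0 deg at the start point and
# 90 deg at the end point; the interior control points sit a distance m along
# the end tangents, so m plays the role of the layup "multiplier".
theta0, theta1 = np.deg2rad(0.0), np.deg2rad(90.0)
P0, P3 = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def control_points(m):
    P1 = P0 + m * np.array([np.cos(theta0), np.sin(theta0)])
    P2 = P3 - m * np.array([np.cos(theta1), np.sin(theta1)])
    return (P0, P1, P2, P3)

m_opt = golden_section(lambda m: max_curvature(control_points(m)), 0.1, 1.0)
print(m_opt, max_curvature(control_points(m_opt)))
```

Moving the control points with the multiplier lowers the peak curvature without touching the end angles, which is the mechanism the layup convention above exploits to satisfy manufacturing curvature limits.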
To have averaged boundaries, which is called the no-gap condition, the minimum effective course width is used as the reference shifting value. The layup process is completed on the developed plane of the cylinder and then translated into cylindrical coordinates. Second, finite element models of the laminated VS plates and cylindrical shells are generated using Ansys Mechanical APDL codes. Four-node SHELL181 quadrilateral elements with full integration are used to mesh the VS plate and the VS composite shells with the Cylinder 1 geometry, and the FE models of the layered VS composite shells with the Cylinder 2 geometry are generated using eight-node SHELL281 elements with reduced integration. Both shell elements are based on first-order shear deformation theory (Mindlin-Reissner shell theory). These elements, with six degrees of freedom at each node (translations in the nodal x, y, and z directions and rotations about the nodal x, y, and z axes), are typically used to analyze thin to moderately thick shell structures. Mesh convergence studies of the reference QI plate and the VS circular shells and plates are performed, and the reference element edge lengths are chosen considering accurate mapping of the curvilinear fiber paths onto the finite element mesh, the buckling results, and computational efficiency. The curvilinear fiber paths for each ply are then mapped to the related element centroids by APDL functions. Next, the VS laminates and circular cylinders are optimized for maximum stiffness and/or buckling load using the surrogate-based NSGA-II algorithm. NSGA-II is an evolutionary algorithm and supports multi-objective optimization. The design space development strategy is an important part of surrogate modeling, aiming for an optimal distribution of the fewest points with maximum insight into the design. Thus, experimental designs are generated with the Optimal Space Filling (OSF) algorithm according to the specified intervals. Then, surrogate models are generated with Genetic Aggregation.
Genetic Aggregation selects the best solution from Full 2nd-Order Polynomials, Non-Parametric Regression, Kriging, and Moving Least Squares. The algorithm generates the population of all methods and then applies a single response surface or a combination of response surfaces according to fitness functions. The assemblage of the Genetic Aggregation surrogate model is constructed as a weighted average of the selected metamodels. The weights and the combination of metamodels depend on the design of experiments method and the behavior of the VS structures designed with the approximation and interpolation curves. A two-cycle approach is used to increase the accuracy of the surrogate models. The first cycle covers the design space between -80° and 80°, and the second cycle searches within ±20 degrees of the optimum angle calculated in the first cycle. A better layup for Size 1 – Case 3 compared to the results in the literature is found by using the reduced domain in the second cycle. The best buckling performance is found for the Size 3 plate with Case 3 boundary conditions, which shows a 103% increase in buckling load against a 44% reduction in equivalent stiffness compared to the reference quasi-isotropic laminate. It is clear that increasing the plate size increases the buckling performance of VS plates. This is due to the wider design space with a relaxed curvature constraint, which allows higher angle differences between the edge and the middle of the plate; accordingly, the fiber angles at the plate edges can align closer to the loading direction while the fiber angles far from the edges converge to smaller angles. The quadratic Bézier approximation curve is found to be a good alternative to the cubic Bézier approximation curve with constant curvature, as it has a similar edge load distribution and buckling mode shapes.
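The two-cycle refinement described above can be illustrated with a toy one-dimensional problem: a simple stand-in function replaces the FE buckling analysis, and a plain polynomial fit replaces the Genetic Aggregation surrogate; only the cycle logic (full span from -80° to 80°, then ±20° around the cycle-1 optimum) follows the text.

```python
import numpy as np

def buckling_response(theta):
    """Toy stand-in for the FE buckling objective; its true peak is at 35 deg."""
    return np.cos(np.deg2rad(theta - 35.0))

def cycle(lo, hi, n_samples, degree):
    """Sample the objective, fit a polynomial surrogate, return its argmax."""
    xs = np.linspace(lo, hi, n_samples)
    coeffs = np.polyfit(xs, buckling_response(xs), degree)
    grid = np.linspace(lo, hi, 2001)
    return float(grid[np.argmax(np.polyval(coeffs, grid))])

# Cycle 1: coarse search of the full design space between -80 and 80 degrees.
theta_c1 = cycle(-80.0, 80.0, 9, 4)
# Cycle 2: refined search within +/-20 degrees of the cycle-1 optimum.
theta_c2 = cycle(theta_c1 - 20.0, theta_c1 + 20.0, 9, 2)
print(theta_c1, theta_c2)
```

The second cycle pays off because the surrogate only has to be accurate over a 40-degree window rather than the full 160-degree span, which is the same reason the reduced domain improved the Size 1 – Case 3 layup above.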
Additionally, the stations, which are fixed for the cubic Bézier approximation curve with constant curvature, can be shifted in the quadratic approximation definition according to the designer's needs without changing the layup definition. Finally, a novel pre-trained design optimization framework is proposed to optimize the buckling load of VS composite circular cylinders under pure bending with curvature and strength constraints. By using Bézier curves, designers have more effective control over the design domain to improve the buckling performance in accordance with requirements such as curvature and strength. The strength constraint is calculated using the Tsai-Wu failure criterion. The optimizations are conducted using the PMSO framework, which utilizes NSGA-II. The main benefit of this framework is that it gathers prior knowledge about the design space in the first step by conducting pre-training optimizations on laminated VS composite shells with a single-ply definition. This narrows down the design space significantly before a full layup design optimization with a large number of parameters is conducted in the second step. Moreover, the multiple-cycle approach at each step helps to reduce the complexity of the optimization while increasing the surrogate model accuracy. The optimization is completed for four different laminate stack-ups that are made up of all VS plies or partial VS plies in combination with unidirectional fibers (±45°, 0° and 90°). The maximum increase in buckling load is found to be 31% for Laminate 1 and 41% for Laminate 4 compared to the reference QI shells. This gives 14% and 16% higher buckling loads than the literature studies, and the Laminate 4 results are achieved with twice as many design variables using approximately the same number of sampling points. The gain in buckling load is due to the redistribution of stresses on the compression and tension sides as a consequence of the variable angle distribution within each ply.
Fiber angles close to the axial direction on the tension side increase the strength and stiffness of the structure, while angles close to the circumferential axis on the compression side reduce the stiffness of the buckling-critical region so that the compressive loads are distributed over a wider region.
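The Tsai-Wu criterion used above as the strength constraint can be sketched as follows for a plane-stress ply. The strength values are illustrative carbon/epoxy numbers, not material data from the thesis, and the common default is assumed for the interaction term F12.

```python
import numpy as np

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Tsai-Wu failure index for a plane-stress ply; failure is predicted
    when the index reaches 1. Xt/Xc and Yt/Yc are the tensile/compressive
    strengths (entered as positive values) and S is the shear strength."""
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S**2
    F12 = -0.5 * np.sqrt(F11 * F22)  # common default for the interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2.0 * F12 * s1 * s2)

# Illustrative carbon/epoxy strengths in MPa (not material data from the thesis).
props = dict(Xt=2000.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=80.0)
print(tsai_wu_index(800.0, 20.0, 30.0, **props))  # below 1: no predicted failure
```

In an optimization loop such as PMSO, a constraint of this form would be evaluated per ply at the critical locations and the maximum index kept below 1.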

Item: A new design approach to variable-angle fiber composites in terms of applicability, and buckling load optimization with differential evolution (Graduate School, 2022-06-22) Beyazgül, Umut ; Balkan, Demet ; Mecitoğlu, Zahit ; 511181207 ; Aeronautics and Astronautics Engineering
Fiber composites, which are used in many sectors and above all in aerospace vehicles, are preferred mainly because they provide higher specific strength and lower weight than the previously used alternatives, together with substantial cost savings. Taking into account the load requirements a structure is subjected to, the performance of conventional fiber composites is increased by designing a stacking sequence with a suitable orientation angle in each ply, exploiting the direction-dependent mechanical properties. The variable orientation angle addressed in this thesis refers to the variable stiffness of fiber composites, obtained by changing the fiber angle, and hence the placement path, within a single ply. In this way, the direction-dependent mechanical properties of the fiber composite material widen the scope of the optimization, allow the material to reach its full potential, and enable a more advantageous in-plane load distribution. The production of composites with this design is planned with automated fiber placement (AFP) and automated tape laying (ATL) machines, which have long been used in conventional composite production and provide high precision and fast layup. In this study, instead of the constant ply angle of existing lamination theories, a position-dependent, linearly varying orientation angle is defined, and buckling under uniaxial loading is examined. For the optimization of highly nonlinear objective functions that are difficult to handle with classical gradient-based methods in terms of differentiability, an evolutionary algorithm, which is relatively fast and accurate among the stochastic, heuristic-search methods, is used.
The design modeling and optimization code was written in the Python programming language, and Abaqus linear buckling analysis was integrated as the objective function. To avoid post-production defects, a symmetric fiber composite with a balanced ply-angle sequence forms the boundary of the design. In addition to the lower and upper limits of the angle parameters, a radius-of-curvature constraint arising from the production machines and the filament tows is also used, and the curvature radius constraint equation is derived. As a result of the optimization and numerical analyses, variable-angle fiber composites are shown to be advantageous in terms of critical buckling load for different numbers of plies and different aspect ratios, and the correlation between the load gain ratios with respect to aspect ratio and number of plies is analyzed.
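A minimal sketch of the position-dependent, linearly varying angle definition and the resulting radius-of-curvature check is given below. The angle pair and panel half-length are illustrative assumptions, and the geometric relation curvature = |dθ/dx|·cos(θ) for a planar path with slope angle θ(x) is used in place of the thesis's derived constraint equation, which is not reproduced in the abstract.

```python
import numpy as np

def theta_linear(x, T0, T1, d):
    """Linearly varying fiber angle in degrees: T0 at the panel centerline
    (x = 0) and T1 at x = +/- d, i.e. theta(x) = T0 + (T1 - T0) * |x| / d."""
    return T0 + (T1 - T0) * np.abs(x) / d

def min_turn_radius(T0, T1, d, n=2001):
    """Smallest turning radius of the fiber path over 0 <= x <= d, using the
    planar-path relation curvature = |d theta / dx| * cos(theta)."""
    x = np.linspace(0.0, d, n)
    th = np.deg2rad(theta_linear(x, T0, T1, d))
    dth_dx = np.deg2rad(abs(T1 - T0)) / d   # constant for a linear variation
    kappa = dth_dx * np.abs(np.cos(th))
    return 1.0 / kappa.max()

# Hypothetical panel: half-length 250 mm, fiber angle varying from 15 to 60 deg.
r_min = min_turn_radius(15.0, 60.0, 250.0)
print(r_min)  # in mm; to be checked against the machine's minimum steering radius
```

In an optimization run, a candidate (T0, T1) pair would be rejected whenever this minimum radius falls below the AFP/ATL machine's steering limit, which is how the curvature constraint restricts the design space alongside the angle bounds.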