LEE- Uçak ve Uzay Mühendisliği Lisansüstü Programı
-
A high-order finite-volume solver for supersonic flows (Lisansüstü Eğitim Enstitüsü, 2022) Spinelli, Gregoria Gerardo ; Çelik, Bayram ; 721738 ; Uçak ve Uzay Mühendisliği
Nowadays, Computational Fluid Dynamics (CFD) is a powerful tool in engineering used in various industries such as automotive, aerospace and nuclear power. More than ever, the growing computational power of modern computer systems allows for realistic modelling of the physics. Most of the open-source codes, however, offer a second-order approximation of the physical model in both space and time. The goal of this thesis is to extend this order of approximation to what is defined as high-order discretization in both space and time by developing a two-dimensional finite-volume solver. This is especially challenging when modeling supersonic flows, which are addressed in this study. To tackle this task, we employed the numerical methods described in the following. Curvilinear meshes are utilized since an accurate representation of the domain and its boundaries, i.e. the object under investigation, is required. High-order approximation in space is guaranteed by a Central Essentially Non-Oscillatory (CENO) scheme, which combines a piece-wise linear reconstruction and a k-exact reconstruction in regions with and without discontinuities, respectively. The use of multi-stage methods such as Runge-Kutta methods allows for a high-order approximation in time. The algorithm to evaluate convective fluxes is based on the family of Advection Upstream Splitting Method (AUSM) schemes, which use an upwind reconstruction. A central stencil is used to evaluate viscous fluxes instead. When using high-order schemes, discontinuities induce numerical problems, such as oscillations in the solution. To avoid the oscillations, the CENO scheme reverts to a piece-wise linear reconstruction in regions with discontinuities. However, this introduces a loss of accuracy. The CENO algorithm is capable of confining this loss of accuracy to the cells closest to the discontinuity. In order to reduce this accuracy loss, Adaptive Mesh Refinement (AMR) is used. This algorithm refines the mesh near the discontinuity, confining the loss of accuracy to a smaller portion of the domain. In this study, a combination of the CENO scheme and the AUSM schemes is used to model several problems in different compressibility regimes, with a focus on supersonic flows. The scope of this thesis is to analyze the capabilities and the limitations of the proposed combination. In comparison to traditional implementations, which can be found in the literature, our implementation does not impose a limit on the refinement ratio of neighboring cells while utilizing AMR. Due to the high computational expense of a high-order scheme in conjunction with AMR, our solver benefits from shared-memory parallelization. Another advantage over traditional implementations is that our solver requires one fewer layer of ghost cells for the transfer of information between adjacent blocks. The validation of the solver is performed in several steps. We assess the order of accuracy of the CENO scheme by interpolating a smooth function, in this case the spherical cosine function. Then we validate the algorithm that computes the inviscid fluxes by modeling a Sod shock tube. Finally, the Boundary Conditions (BCs) for the inviscid solver and its order of accuracy are validated by modeling a convected vortex in a supersonic uniform flow. The curvilinear mesh is validated by modeling the flow around a NACA0012 airfoil.
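Before continuing with the validation cases, the upwind flux construction mentioned above can be illustrated roughly as follows. This is a minimal sketch of the classical split Mach number and split pressure functions shared by AUSM-family schemes at a one-dimensional cell face, not the thesis's actual implementation; the function names and the chosen polynomial splittings are assumptions for illustration only.

    import math

    def mach_plus(M):
        # Split Mach number M+ (quadratic polynomial branch for |M| < 1)
        return 0.5 * (M + abs(M)) if abs(M) >= 1.0 else 0.25 * (M + 1.0) ** 2

    def mach_minus(M):
        return 0.5 * (M - abs(M)) if abs(M) >= 1.0 else -0.25 * (M - 1.0) ** 2

    def pressure_plus(M):
        # Split pressure weight P+ (subsonic branch of the classical AUSM splitting)
        return 0.5 * (1.0 + math.copysign(1.0, M)) if abs(M) >= 1.0 else 0.25 * (M + 1.0) ** 2 * (2.0 - M)

    def pressure_minus(M):
        return 0.5 * (1.0 - math.copysign(1.0, M)) if abs(M) >= 1.0 else 0.25 * (M - 1.0) ** 2 * (2.0 + M)

    def ausm_interface(rho_L, u_L, p_L, rho_R, u_R, p_R, a_half):
        # Interface Mach number from both sides, then upwinded mass flux and face pressure
        M_half = mach_plus(u_L / a_half) + mach_minus(u_R / a_half)
        mdot = a_half * M_half * (rho_L if M_half > 0.0 else rho_R)
        p_half = pressure_plus(u_L / a_half) * p_L + pressure_minus(u_R / a_half) * p_R
        return mdot, p_half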
The computation of the viscous fluxes is validated by modeling a viscous boundary layer developing on a flat plate. The BCs for viscous flows and the curvilinear implementation are validated by modeling the flow around a cylinder and a NACA0012 airfoil. The AUSM schemes are tested for shock robustness by modeling an inviscid hypersonic cylinder at a Mach number of 20 and a viscous hypersonic cylinder at a Mach number of 8.03. Then, we validate our AMR implementation by modeling a two-dimensional Riemann problem. All the validation results agree well with either numerical or experimental results available in literature. The performance of the code, in terms of computational time required by the different orders of approximation and the parallel efficiency, is assessed. For the former a supersonic vortex convection served as an example, while the latter used a two-dimensional Riemann problem. We obtained a linear speed-up until 12 cores. The highest speedup value obtained is 20 with 32 cores. Furthermore, the solver is used to model three different supersonic applications: the interaction between a vortex and a normal shock, the double Mach reflection and the diffraction of a shock on a wedge. The first application resembles a strong interaction between a vortex and a steady shock wave for two different vortex strengths. In both cases our results perfectly match the ones obtained by a Weighted Essentially Non-Oscillatory (WENO) scheme documented in literature. Both schemes are approximating the solution with the same order of accuracy in both, time and space. The second application, the double Mach reflection, is a challenging problem for high-order solvers because the shock and its reflections interact strongly. For this application, all AUSM-schemes under investigation fail to obtain a stable result. The main form of instability encountered is the Carbuncle phenomenon. Our implementation overcomes this problem by combining the AUSM+M scheme with the formulation of the speed of sound of the AUSM+up scheme. This combination is capable of modeling this problem without instabilities. Our results are in agreement with those obtained with a WENO scheme. Both, the reference solutions and our results, use the same order of accuracy in both, time and space. Finally, the third example is the diffraction of a shock past a delta wedge. In this configuration the shock is diffracted and forms three different main structures: two triple points, a vortex at the trailing edge of the wedge and a reflected shock traveling upwards. Our results agree well with both, numerical and experimental results available in literature. Here, a formation of a vortex-let is observed along the vortex slip-line. This vorticity generation under inviscid flow condition is studied and we conclude that the stretching of vorticity due to compressibility is the reason. The same formation is observed when the angle of attack of the wedge is increased in the range of 0-30. In general, the AUSM+up2 scheme performed best in terms of accuracy for all problems tested here. However, for configurations, in which the Carbuncle phenomenon may appear, the combination of the AUSM+M scheme and the computation of the speed of sound formula of the AUSM+up scheme is preferable for stability reasons. During our computations, we observe a small undershooting right behind shocks on curved boundaries. This is imputable to the curvilinear approximation of the boundaries, which is only second-order accurate. 
Our experience shows that the smoothness indicator formula in its original version, fails to label uniform flow regions as smooth. We solve the issue by introducing a threshold for the numerator of the formula. When the numerator is lower than the threshold, the cell is labeled as smooth. A value higher than 10^-7 for the threshold might force the solver to apply high-order reconstruction across shocks, and therefore will not apply the piece-wise linear reconstruction which prevents oscillations. We observe that the CENO scheme might cause unphysical states in both inviscid and viscous regime. By reconstructing the conservative variables instead of the primitive ones, we are able to prevent unphysical states for inviscid flows. For the viscous flows, temporarily reverting to first-order reconstruction in the cells where the temperature is computed as negative, prevents unphysical states. This technique is solely required during the first iterations of the solver, when the flow is started impulsively. In this study the CENO, the AUSM and the AMR methods are combined and applied successfully to supersonic problems. When modeling supersonic flow with high-order accuracy in space, one should prefer the combination of the AUSM schemes and the CENO scheme. While the CENO scheme is simpler than the WENO scheme used in comparison, we show that it yields results of comparable accuracy. Although it was beyond the scope of this study, the AUSM can be extended to real gas modeling which constitutes another advantage of this approach.
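A minimal sketch of the reconstruction-selection fix described above is given below; the function names, the numerator/denominator split of the smoothness indicator, and the placeholder indicator test are illustrative assumptions rather than the thesis code.

    from collections import namedtuple

    Cell = namedtuple("Cell", "si_numerator si_denominator")

    def is_smooth(si_numerator, si_denominator, original_indicator, numerator_floor=1e-7):
        # Thresholding fix described above: a near-zero numerator (e.g. uniform flow)
        # is labeled smooth directly, which avoids the ill-conditioned ratio of the
        # original formula; otherwise the original indicator test is applied.
        if si_numerator < numerator_floor:
            return True
        return original_indicator(si_numerator, si_denominator)

    def select_reconstruction(cell, original_indicator):
        # Smooth cells keep the high-order (k-exact) reconstruction; the rest revert
        # to the limited piece-wise linear reconstruction to suppress oscillations.
        return "k-exact" if is_smooth(cell.si_numerator, cell.si_denominator,
                                      original_indicator) else "limited piece-wise linear"

    # Placeholder indicator test standing in for the published CENO formula
    toy_indicator = lambda num, den: (num / den) > 1000.0
    print(select_reconstruction(Cell(1e-9, 1e-9), toy_indicator))   # uniform flow -> k-exact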
-
A modified ANFIS system for aerial vehicles control (Lisansüstü Eğitim Enstitüsü, 2022) Öztürk, Muhammet ; Özkol, İbrahim ; 713564 ; Uçak ve Uzay Mühendisliği
This thesis presents fuzzy logic systems (FLS) and their control applications in aerial vehicles. In this context, firstly type-1 fuzzy logic systems and secondly type-2 fuzzy logic systems are examined. Adaptive Neuro-Fuzzy Inference System (ANFIS) training models are examined, and new type-1 and type-2 models are developed and tested. The new approaches are used for control problems such as quadrotor control. A fuzzy logic system is a human-like structure that does not define any case precisely as 1 or 0. Fuzzy logic systems define each case with membership functions. In the literature, there are many fuzzy logic applications such as data processing, estimation, control, modeling, etc. Different Fuzzy Inference Systems (FIS) have been proposed, such as Sugeno, Mamdani, Tsukamoto, and Şen. The Sugeno and Mamdani FIS are the most widely used fuzzy logic systems. Mamdani antecedent and consequent parameters are composed of membership functions. Because of that, the Mamdani FIS needs a defuzzification step to produce a crisp output. Sugeno antecedent parameters are membership functions, but its consequent parameters are linear or constant, and so the Sugeno FIS does not need a defuzzification step. The Sugeno FIS requires less computational load and is simpler than the Mamdani FIS, and so it is more widely used. Training of the Mamdani parameters is more complicated and requires more computation than for the Sugeno FIS. The Mamdani ANFIS approaches in the literature are examined and a new Mamdani ANFIS model (MANFIS) is proposed. The training performance of the proposed MANFIS model is tested on a nonlinear function, and its control performance is tested on DC motor dynamics. In addition, the Şen FIS, which was used for the estimation of sunshine duration in 1998, is examined. The antecedent and consequent parameters of the Şen FIS are membership functions, as in the Mamdani FIS, and it needs a defuzzification step. However, because of the structure of the Şen defuzzification, the Şen FIS can be calculated with less computational load, and therefore a Şen ANFIS training model has been created. These three approaches are trained on a nonlinear function and used for online control. In this study, the neuro-fuzzy controller is used as an online controller. Neuro-fuzzy controllers consist of the simultaneous operation of two functions, namely fuzzy logic and ANFIS. The fuzzy logic function is the one that generates the control signal; it generates a control signal according to the controller inputs. The other function is the ANFIS function that trains the parameters of the fuzzy logic function. Neuro-fuzzy controllers are intelligent controllers, independent of the model, and constantly adapt their parameters. For this reason, these controllers' parameter values change constantly according to the changes in the system. There are studies on different neuro-fuzzy control systems in the literature. Each approach is tested on a DC motor model, which is a single-input single-output system, and the neuro-fuzzy controllers' advantages and performances are examined. In this way, the approaches in the literature and the approaches added within the scope of the thesis are compared to each other. Selected neuro-fuzzy controllers are used in quadrotor control. Quadrotors have a two-stage controller structure.
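To make the Sugeno/Mamdani comparison above concrete before moving on to the quadrotor control structure, here is a minimal first-order Sugeno (Takagi-Sugeno) inference sketch with Gaussian memberships and a weighted-average output that needs no defuzzification step; the membership parameters and rule consequents are placeholders, not the thesis models.

    import numpy as np

    def gaussmf(x, c, sigma):
        # Gaussian membership function
        return np.exp(-0.5 * ((x - c) / sigma) ** 2)

    def sugeno_fis(x1, x2, rules):
        # Each rule: (c1, s1, c2, s2, p, q, r).
        # Firing strength = product of antecedent memberships;
        # output = firing-strength-weighted average of the linear consequents.
        weights, outputs = [], []
        for (c1, s1, c2, s2, p, q, r) in rules:
            w = gaussmf(x1, c1, s1) * gaussmf(x2, c2, s2)
            weights.append(w)
            outputs.append(p * x1 + q * x2 + r)
        weights = np.asarray(weights)
        return float(np.dot(weights, outputs) / (weights.sum() + 1e-12))

    # Example with two placeholder rules
    rules = [(-1.0, 1.0, -1.0, 1.0, 0.5, 0.1, 0.0),
             ( 1.0, 1.0,  1.0, 1.0, 1.2, 0.3, 0.1)]
    print(sugeno_fis(0.2, -0.4, rules))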
In the first stage, position control is performed and the position control results are defined as angles. In the second stage, attitude control is performed over the calculated angle values. In this thesis, the neuro-fuzzy controller is shown to work perfectly well in single layer control structures, i.e., there was not any overshooting, and settling time was very short. But it is seen from quadrotor control results that the neuro-fuzzy controller can not give the desired performance in the two-layered control structure. Therefore, the feedback error learning control system, in which the fuzzy controller works together with conventional controllers, is examined. Fundamentally, there is an inverse dynamic model parallel to a classical controller in the feedback error learning structure. The inverse dynamic model aims to increase the performance by influencing the classical controller signal. In the literature, there are a lot of papers about the structure of feedback error learning control and there are different proposed approaches. In the structure used in this work, fuzzy logic parameters are trained using ANFIS with error input.The fuzzy logic control signal is obtained as a result of training. The fuzzy logic control signal is added to the conventional controller signal. This study has been tested on models such as DC motor and quadrotor. It is seen that the feedback error learning control with the ANFIS increases the control performances. Antecedent and consequent parameters of type-1 fuzzy logic systems consist of certain membership functions. A type-2 FLS is proposed to better define the uncertainties, because of that, type-2 fuzzy inference membership functions are proposed to include uncertainties. The type-2 FLS is operationally difficult because of uncertainties. In order to simplify type-2 FLS operations, interval type-2 FLS is proposed as a special case of generalized type-2 FLS in the literature. Interval type-2 membership functions are designed as a two-dimensional projection of general type-2 membership functions and represent the area between two type-1 membership functions. The area between these two type-1 membership functions is called Footprint of Uncertainty (FOU). This uncertainty also occurs in the weight values obtained from the antecedent membership functions. Consequent membership functions are also type-2 and it is not possible to perform the defuzzification step directly because of uncertainty. Therefore, type reduction methods have been developed to reduce the type-2 FLS to the type-1 FLS. Type reduction methods try to find the highest and lowest values of the fuzzy logic model. Therefore, a switch point should be determined between the weights obtained from the antecedent membership functions. Type reduction methods find these switch points by iterations and this process causes too much computation, so many different methods have been proposed to minimize this computational load. In 2018, an iterative-free method called Direct Approach (DA) was proposed. This method performs the type reduction process faster than other iterative methods. In the literature, studies such as neural networks and genetic algorithms on the training for parameters of the type-2 FLS still continue. These studies are also used in the interval type-2 fuzzy logic control systems. There are proposed interval type-2 ANFIS structures in literature, but they are not effective because of uncertainties of interval type-2 membership functions. 
FLS parameters used for ANFIS training should not contain uncertainties; however, a type-2 FLS inherently contains uncertainty. For this reason, the Karnik-Mendel algorithm, which is one of the type-reduction methods, is modified so that ANFIS can be applied to the interval type-2 FLS. The modified Karnik-Mendel algorithm gives the same results as the original Karnik-Mendel algorithm, and it also provides exact parameter values for use in ANFIS. As a result, ANFIS training of the interval type-2 FLS has been developed successfully and has been used for system control.
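For reference, a compact sketch of the standard (unmodified) iterative Karnik-Mendel procedure for the right end point of the type-reduced set is given below, assuming rule consequents sorted in ascending order and firing intervals [w_lo, w_hi]; the left end point is obtained analogously by swapping the roles of the bounds. This is the textbook algorithm, not the modified version proposed in the thesis.

    import numpy as np

    def km_right_endpoint(y, w_lo, w_hi, max_iter=100):
        # y: rule consequents sorted ascending; w_lo/w_hi: firing interval bounds
        y, w_lo, w_hi = map(np.asarray, (y, w_lo, w_hi))
        w = 0.5 * (w_lo + w_hi)                                   # initial weights
        y_r = float(np.dot(w, y) / w.sum())
        for _ in range(max_iter):
            k = int(np.searchsorted(y, y_r, side="right")) - 1    # switch point
            k = min(max(k, 0), len(y) - 2)
            w = np.where(np.arange(len(y)) <= k, w_lo, w_hi)      # lower left of switch, upper right
            y_new = float(np.dot(w, y) / w.sum())
            if abs(y_new - y_r) < 1e-12:                          # switch point has stabilized
                return y_new
            y_r = y_new
        return y_r

    print(km_right_endpoint([1.0, 2.0, 3.0], w_lo=[0.2, 0.2, 0.2], w_hi=[0.6, 0.6, 0.6]))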
-
ÖgeA study on optimization of a wing with fuel sloshing effects(Graduate School, 2022-01-24) Vergün, Tolga ; Doğan, Vedat Ziya ; 511181206 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay MühendisliğiIn general, sloshing is defined as a phenomenon that corresponds to the free surface elevation in multiphase flows. It is a movement of liquid inside another object. Sloshing has been studied for centuries. The earliest work [48] was carried out in the literature by Euler in 1761 [17]. Lamb [32] theoretically examined sloshing in 1879. Especially with the development of technology, it has become more important. It appears in many different fields such as aviation, automotive, naval, etc. In the aviation industry, it is considered in fuel tanks. Since outcomes of sloshing may cause instability or damage to the structure, it is one of the concerns about aircraft design. To prevent its adverse effect, one of the most popular solutions is adding baffles into the fuel tank. Still, this solution also comes with a disadvantage: an increase in weight. To minimize the effects of added weight, designers optimize the structure by changing its shape, thickness, material, etc. In this study, a NACA 4412 airfoil-shaped composite wing is used and optimized in terms of safety factor and weight. To do so, an initial composite layup is determined from current designs and advice from literature. When the design of the initial system is completed, the system is imported into a transient solver in the Ansys Workbench environment to perform numerical analysis on the time domain. To achieve more realistic cases, the wing with different fuel tank fill levels (25%, 50%, and 75%) is exposed to aerodynamic loads while the aircraft is rolling, yawing, and dutch rolling. The aircraft is assumed to fly with a constant speed of 60 m/s (~120 knots) to apply aerodynamic loads. Resultant force for 60 m/s airspeed is applied onto the wing surface by 1-Way Fluid-Structure Interaction (1-Way FSI) as a distributed pressure. Using this method, only fluid loads are transferred to the structural system, and the effect of wing deformation on the fluid flow field is neglected. Once gravity effects and aerodynamic loads are applied to the wing structure, displacement is defined as the wing is moving 20 deg/s for 3 seconds for all types of movements. On the other hand, fluid properties are described in the Ansys Fluent environment. Fluent defines the fuel level, fluid properties, computational fluid dynamics (CFD) solver, etc. Once both structural and fluid systems are ready, system coupling can perform 2-Way Fluid-Structure Interaction (2-Way FSI). Using this method, fluid loads and structural deformations are transferred simultaneously at each step. In this method, the structural system transfers displacement to the fluid system while the fluid system transfers pressure to the structural system. After nine analyses, the critical case is determined regarding the safety factor. Critical case, in which system has the lowest minimum safety factor, is found as 75% filled fuel tank while aircraft dutch rolling. After the determination of the critical case, the optimization process is started. During the optimization process, 1-Way FSI is used since the computational cost of the 2-Way FSI method is approximately 35 times that of 1-Way FSI. However, taking less time should not be enough to accept 1-Way FSI as a solution method; the deviation of two methods with each other is also investigated. 
After this investigation, it was found that the variation between the two methods is about 1% in terms of safety factors for our problem. In the light of this information, 1-Way FSI is preferred to apply both sloshing and aerodynamic loads onto the structure to reduce computational time. After method selection, thickness optimization is started. Ansys Workbench creates a design of experiments (DOE) to examine response surface points. Latin Hypercube Sampling Design (LHSD) is preferred as a DOE method since it generates non-collapsing and space-filling points to create a better response surface. After creating the initial response surface using Genetic Aggregation, the optimization process is started using the Multi-Objective Genetic Algorithm (MOGA). Then, optimum values are verified by analyzing the optimum results in Ansys Workbench. When the optimum results are verified, it is realized that there is a notable deviation in results between optimized and verified results. To minimize the variation, refinement points are added to the response surface. This process is kept going until variation comes under 1%. After finding the optimum results, it is noticed that its precision is too high to maintain manufacturability so that it is rounded into 1% of a millimeter. In the end, final thickness values are verified. As a result, optimum values are found. It is found that weight is decreased from 100.64 kg to 94.35 kg, which means a 6.3% gain in terms of weight, while the minimum safety factor of the system is only reduced from 1.56 to 1.54. At the end of the study, it is concluded that a 6.3% reduction in weight would reflect energy saving.
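A small sketch of how a Latin hypercube design of experiments over the ply-thickness variables could be generated outside Ansys, assuming SciPy's quasi-Monte Carlo module; the bounds, variable count, and sample size are placeholders, not the thesis values.

    import numpy as np
    from scipy.stats import qmc

    # Placeholder design variables: three ply-thickness groups in millimetres
    lower = np.array([0.2, 0.2, 0.2])
    upper = np.array([2.0, 2.0, 2.0])

    sampler = qmc.LatinHypercube(d=3, seed=42)      # non-collapsing, space-filling design
    unit_samples = sampler.random(n=30)             # 30 DOE points in [0, 1)^3
    doe_points = qmc.scale(unit_samples, lower, upper)

    # Each row is one candidate thickness set to be evaluated by the 1-Way FSI model
    # before fitting the response surface.
    print(doe_points[:5])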
-
A study on static and dynamic buckling analysis of thin walled composite cylindrical shells (Graduate School, 2022-01-24) Özgen, Cansu ; Doğan, Vedat Ziya ; 511171148 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay Mühendisliği
Thin-walled structures are widely used in many industries; examples include aircraft, spacecraft, and rockets. The reason for the use of thin-walled structures is that they have a high strength-to-weight ratio. In order to define a cylinder as thin-walled, the ratio of radius to thickness must be more than 20, and one of the problems encountered in the use of such structures is buckling. Buckling can be defined as a state of instability in a structure under compressive loads. This state of instability can be seen in the load-displacement graph as the curve following two different paths; the possible behaviors are snap-through or bifurcation. The compressive loading that causes buckling may be an axial load, a torsional load, a bending load, or external pressure. In addition to these loads, buckling may occur due to temperature change. Within the scope of this thesis, the buckling behavior of thin-walled cylinders under axial compression was examined. A cylinder under axial load exhibits some displacement. When the amount of applied load reaches a critical level, the structure moves from one state of equilibrium to another. After some point, the structure shows large displacements and loses stiffness. The amount of load that the structure can carry decreases considerably, but the structure continues to carry load. The behavior of the structure after this point is called post-buckling behavior. The critical load level for the structure can be determined by using the finite element method. Linear eigenvalue analysis can be performed to determine the static buckling load. However, it should be noted here that eigenvalue-eigenvector analysis can only be used to make an approximate estimate of the buckling load and to input the resulting buckling shape into nonlinear analyses as a form of imperfection. In addition, it can be preferred for parameter changes and comparisons, since it is cheaper than other types of analysis. Since the buckling load is highly affected by imperfections, nonlinear methods with geometric imperfection should be used to estimate a more precise buckling load. It is not possible to define geometric imperfection in linear eigenvalue analysis. Therefore, a different type of analysis should be selected in order to add imperfection. For example, an analysis model which includes imperfection can be established with the Riks method as a nonlinear static analysis type. Unlike the Newton-Raphson method, the Riks method is capable of tracing the equilibrium curve backwards. Thus, it is suitable for use in buckling analysis. In Riks analysis, it is recommended to add an imperfection, in contrast to linear eigenvalue analysis, because without an imperfection the problem is a bifurcation problem instead of a limit-load problem, and sharp turns in the curve can cause divergence in the analysis. Another nonlinear method for static phenomena is quasi-static analysis, which uses a dynamic solver. The important thing to note here is that the inertial effects should be small enough to be neglected in the analysis. For this purpose, the kinetic energy and the internal energy should be compared at the end of the analysis, and it should be ensured that the kinetic energy remains at a negligible level compared to the internal energy.
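For orientation, the classical small-deflection estimate of the critical axial stress of a perfect, isotropic thin-walled cylinder is sketched below; it is the textbook closed-form result, not the composite finite-element estimates used in the thesis, and measured structures buckle well below it because of imperfections. The material properties are illustrative only.

    import math

    def classical_axial_buckling_stress(E, t, R, nu=0.3):
        # sigma_cr = E * t / (R * sqrt(3 * (1 - nu^2)))  ~ 0.605 * E * t / R for nu = 0.3
        return E * t / (R * math.sqrt(3.0 * (1.0 - nu ** 2)))

    # Illustrative numbers only: aluminium-like cylinder with R/t = 100 (thin-walled)
    print(classical_axial_buckling_stress(E=70e9, t=1e-3, R=0.1) / 1e6, "MPa")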
Also, if the event is solved in the actual time length, this analysis will be quite expensive. Therefore, the time must be scaled. In order to scale the time correctly, frequency analysis can be performed first and the analysis time can be determined longer than the period corresponding to the first natural frequency. For three analysis methods mentioned within this study, validation studies were carried out with the examples in the literature. As a result of each type of analysis giving consistent results, the effect of parameters on static buckling load was examined, while linear eigenvalue analysis method was used because it was also sufficient for cheaper analysis method and comparison studies. While displacement-controlled analyses were carried out in the static buckling analyses mentioned, load-controlled analyses were performed in the analyses for the determination of dynamic buckling force. As a result of these analyses, they were evaluated according to different dynamic buckling criteria. There are some of the dynamic buckling criteria; Volmir criterion, Budiansky-Roth criterion, Hoff-Bruce criterion, etc. When Budiansky-Roth criterion is used, the first estimated buckling load is applied to the structure and displacement - time graph is drawn. If a major change in displacement is observed, it can be assumed that the structure is dynamically buckled. For Hoff-Bruce criterion, the speed - displacement graph should be drawn. If this graph is not focused in a single area and is drawn in a scattered way, it is considered that the structure has moved to the unstable area. As in static buckling analyses, dynamic buckling analyses were primarily validated with a sample study in the literature. After the analysis methods, the numerical studies were carried out on the effect of some parameters on the buckling load. First, the effect of the stacking sequence of composite layers on the buckling load was examined. In this context, a comprehensive study was carried out, both from which layer has the greatest effect of changing the angle and which angle has the highest buckling load. In addition, the some angle combinations are obtained in accordance with the angle stacking rules found in the literature. For those stacking sequences, buckling forces are calculated with both finite element analyses and analytically. In addition, comparisons were made with different materials. Here, the buckling load is calculated both for cylinders with different masses of the same thickness and for cylinders with different thicknesses with the same mass. Here, the highest force value for cylinders with the same mass is obtained for a uniform composite. In addition, although the highest buckling force was obtained for steel material in the analysis of cylinders of the same thickness, when we look at the ratio of buckling load to mass, the highest value was obtained for composite material. In addition, the ratio of length to diameter and the effect of thickness were also examined. Here, as the length to diameter ratio increases, the buckling load decreases. As the thickness increases, the buckling load increases with the square of the thickness. In addition to the effect of the length to diameter ratio and the effect of thickness, the loading time and the shape of the loading profile are also known in dynamic buckling analysis. In addition, the critical buckling force is affected by imperfections in the structure, which usually occur during the production of the structure. 
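A minimal numerical reading of the Budiansky-Roth criterion described above: the peak displacement of each transient run is tracked as the load amplitude is stepped up, and the dynamic buckling load is taken where a small load increment produces a disproportionate jump in response. The load levels, peak values, and jump factor below are placeholder assumptions, not the thesis procedure.

    import numpy as np

    def budiansky_roth_load(load_levels, peak_displacements, jump_factor=2.0):
        # load_levels: monotonically increasing applied load amplitudes
        # peak_displacements: max |displacement| of each transient analysis
        loads = np.asarray(load_levels, dtype=float)
        peaks = np.asarray(peak_displacements, dtype=float)
        for i in range(1, len(loads)):
            if peaks[i] > jump_factor * peaks[i - 1]:    # sudden growth in the response
                return loads[i]                          # estimated dynamic buckling load
        return None                                      # no buckling detected in this range

    # Placeholder data: response grows gently, then jumps between 9 and 10 kN
    print(budiansky_roth_load([6, 7, 8, 9, 10], [1.0, 1.1, 1.2, 1.3, 4.0]))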
How sensitive the structures are to imperfections may vary depending on different parameters. Imperfections can be divided into three different groups: geometric, material, and loading. Cylinders under axial load are particularly affected by geometric imperfections. The geometric imperfection can be defined as how far the structure is from a perfect cylindrical shape. It is possible to determine this amount of deviation by different measurement methods. Although it is not possible to measure the amount of imperfection for all structures, an idea of how much imperfection to expect can be gained from the studies found in the literature. Both the change in the buckling load of the measured cylinders and the effect of imperfection on the buckling load can be assessed by adding the measured amount of imperfection to the buckling load calculations. In cases where the amount of imperfection cannot be measured, an eigenvector imperfection obtained from linear buckling analysis can be included in the finite element model, and the critical buckling load can be calculated for the imperfect structure using nonlinear analysis methods. In this study, analyses were carried out on how imperfection sensitivity changes under both static and dynamic loading for different parameters. These parameters are the length-to-diameter ratio, the stacking sequence of the composite layers, and the added imperfection shape. The most important result obtained in the study on imperfection sensitivity is that the effect of the imperfection on the buckling load is quite high. Even a geometric imperfection equal to the wall thickness can cause the buckling load to drop by up to half.
-
ÖgeDevelopment of single-frame methods aided kalman-type filtering algorithms for attitude estimation of nano-satellites(Graduate School, 2021-08-20) Çilden Güler, Demet ; Hacızade, Cengiz ; Kaymaz, Zerefşan ; 511162104 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay MühendisliğiThere is a growing demand for the development of highly accurate attitude estimation algorithms even for small satellite e.g. nanosatellites with attitude sensors that are typically cheap, simple, and light because, in order to control the orientation of a satellite or its instrument, it is important to estimate the attitude accurately. Here, the estimation is especially important in nanosatellites, whose sensors are usually low-cost and have higher noise levels than high-end sensors. The algorithms should also be able to run on systems with very restricted computer power. One of the aims of the thesis is to develop attitude estimation filters that improve the estimation accuracy while not increasing the computational burden too much. For this purpose, Kalman filter extensions are examined for attitude estimation with a 3-axis magnetometer and sun sensor measurements. In the first part of this research, the performance of the developed extensions for the state of art attitude estimation filters is evaluated by taking into consideration both accuracy and computational complexity. Here, single-frame method-aided attitude estimation algorithms are introduced. As the single-frame method, singular value decomposition (SVD) is used that aided extended Kalman filter (EKF) and unscented Kalman filter (UKF) for nanosatellite's attitude estimation. The development of the system model of the filter, and the measurement models of the sun sensors and the magnetometers, which are used to generate vector observations is presented. Vector observations are used in SVD for satellite attitude determination purposes. In the presented method, filtering stage inputs are coming from SVD as the linear measurements of attitude and their error covariance relations. In this step, UD is also introduced for EKF that factorizes the attitude angles error covariance with forming the measurements in order to obtain the appropriate inputs for the filtering stage. The necessity of the sub-step, called UD factorization on the measurement covariance is discussed. The accuracy of the estimation results of the SVD-aided EKF with and without UD factorization is compared for the estimation performance. Then, a case including an eclipse period is considered and possible switching rules are discussed especially for the eclipse period, when the sun sensor measurements are not available. There are also other attitude estimation algorithms that have strengths in coping well with nonlinear problems or working well with heavy-tailed noise. Therefore, different types of filters are also tested to see what kind of filter provides the largest improvements in the estimation accuracy. Kalman-type filter extensions correspond to different ways of approximating the models. In that sense, a filter takes the non-Gaussianity into account and updates the measurement noise covariance whereas another one minimizes the nonlinearity. Various other algorithms can be used for adapting the Kalman filter by scaling or updating the covariance of the filter. The filtering extensions are developed so that each of them is designed to mitigate different types of error sources for the Kalman filter that is used as the baseline. 
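As a point of reference for the single-frame step mentioned above, the standard SVD solution of Wahba's problem is sketched below with NumPy; the weights and unit vectors are placeholders, and this is the generic algorithm rather than the exact SVD-aided filter interface developed in the thesis.

    import numpy as np

    def svd_attitude(body_vectors, ref_vectors, weights):
        # Wahba's problem: find the rotation A minimising sum_i w_i * |b_i - A r_i|^2
        B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vectors, ref_vectors))
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))   # enforce a proper rotation
        return U @ np.diag([1.0, 1.0, d]) @ Vt

    # Placeholder unit vectors: magnetometer and sun-sensor directions in the
    # body frame (b) and the reference frame (r), with equal weights.
    b = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
    r = [np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0])]
    print(svd_attitude(b, r, weights=[0.5, 0.5]))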
The distribution of the magnetometer noises for a better model is also investigated using sensor flight data. The filters are tested for the measurement noise with the best fitting distribution. The responses of the filters are performed under different operation modes such as nominal mode, recovery from incorrect initial state, short and long-term sensor faults. Another aspect of the thesis is to investigate two major environmental disturbances on the spacecraft close enough to a planet: the external magnetic field and the planet's albedo. As magnetometers and sun sensors are widely used attitude sensors, external magnetic field and albedo models have an important role in the accuracy of the attitude estimation. The magnetometers implemented on a spacecraft measure the internal geomagnetic field sources caused by the planet's dynamo and crust as well as the external sources such as solar wind and interplanetary magnetic field. However, the models that include only the internal field are frequently used, which might remain incapable when geomagnetic activities occur causing an error in the magnetic field model in comparison with the sensor measurements. Here, the external field variations caused by the solar wind, magnetic storms, and magnetospheric substorms are generally treated as bias on the measurements and removed from the measurements by estimating them in the augmented states. The measurement, in this case, diverges from the real case after the elimination. Another approach can be proposed to consider the external field in the model and not treat it as an error source. In this way, the model can represent the magnetic field closer to reality. If a magnetic field model used for the spacecraft attitude control does not consider the external fields, it can misevaluate that there is more noise on the sensor, while the variations are caused by a physical phenomenon (e.g. a magnetospheric substorm event), and not the sensor itself. Different geomagnetic field models are compared to study the errors resulting from the representation of magnetic fields that affect the satellite attitude determination system. For this purpose, we used magnetometer data from low Earth-orbiting spacecraft and the geomagnetic models, IGRF and T89 to study the differences between the magnetic field components, strength, and the angle between the predicted and observed vector magnetic fields. The comparisons are made during geomagnetically active and quiet days to see the effects of the geomagnetic storms and sub-storms on the predicted and observed magnetic fields and angles. The angles, in turn, are used to estimate the spacecraft attitude, and hence, the differences between model and observations as well as between two models become important to determine and reduce the errors associated with the models under different space environment conditions. It is shown that the models differ from the observations even during the geomagnetically quiet times but the associated errors during the geomagnetically active times increase more. It is found that the T89 model gives closer predictions to the observations, especially during active times and the errors are smaller compared to the IGRF model. The magnitude of the error in the angle under both environmental conditions is found to be less than 1 degree. The effects of magnetic disturbances resulting from geospace storms on the satellite attitudes estimated by EKF are also examined. 
The increasing levels of geomagnetic activity affect geomagnetic field vectors predicted by IGRF and T89 models. Various sensor combinations including magnetometer, gyroscope, and sun sensor are evaluated for magnetically quiet and active times. Errors are calculated for estimated attitude angles and differences are discussed. This portion of the study emphasizes the importance of environmental factors on the satellite attitude determination systems. Since the sun sensors are frequently used in both planet-orbiting satellites and interplanetary spacecraft missions in the solar system, a spacecraft close enough to the sun and a planet is also considered. The spacecraft receives electromagnetic radiation of direct solar flux, reflected radiation namely albedo, and emitted radiation of that planet. The albedo is the fraction of sunlight incident and reflected light from the planet. Spacecraft can be exposed to albedo when it sees the sunlit part of the planet. The albedo values vary depending on the seasonal, geographical, diurnal changes as well as the cloud coverage. The sun sensor not only measures the light from the sun but also the albedo of the planet. So, a planet's albedo interference can cause anomalous sun sensor readings. This can be eliminated by filtering the sun sensors to be insensitive to albedo. However, in most of the nanosatellites, coarse sun sensors are used and they are sensitive to albedo. Besides, some critical components and spacecraft systems e.g. optical sensors, thermal and power subsystems have to take the light reflectance into account. This makes the albedo estimations a significant factor in their analysis as well. Therefore, in this research, the purpose is to estimate the planet's albedo using a simple model with less parameter dependency than any albedo models and to estimate the attitude by comprising the corrected sun sensor measurements. A three-axis attitude estimation scheme is presented using a set of Earth's albedo interfered coarse sun sensors (CSSs), which are inexpensive, small in size, and light in power consumption. For modeling the interference, a two-stage albedo estimation algorithm based on an autoregressive (AR) model is proposed. The algorithm does not require any data such as albedo coefficients, spacecraft position, sky condition, or ground coverage, other than albedo measurements. The results are compared with different albedo models based on the reference conditions. The models are obtained using either a data-driven or estimated approach. The proposed estimated albedo is fed to the CSS measurements for correction. The corrected CSS measurements are processed under various estimation techniques with different sensor configurations. The relative performance of the attitude estimation schemes when using different albedo models is examined. In summary, the effects of two main space environment disturbances on the satellite's attitude estimation are studied with a comprehensive analysis with different types of spacecraft trajectories under various environmental conditions. The performance analyses are expected to be of interest to the aerospace community as they can be reproducible for the applications of spacecraft systems or aerial vehicles.
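To illustrate the autoregressive idea behind the albedo estimator mentioned above (the model order, the least-squares fitting, and the data here are placeholders rather than the thesis's two-stage algorithm), an AR(p) fit and a one-step-ahead prediction can be written as:

    import numpy as np

    def fit_ar(series, order):
        # Least-squares fit of x[k] ~ a1*x[k-1] + ... + ap*x[k-p]
        x = np.asarray(series, dtype=float)
        X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
        y = x[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def predict_next(series, coeffs):
        # One-step-ahead prediction from the most recent samples (newest first)
        recent = np.asarray(series[-len(coeffs):][::-1], dtype=float)
        return float(np.dot(coeffs, recent))

    # Placeholder albedo-like measurement history
    history = [0.28, 0.30, 0.31, 0.29, 0.27, 0.26, 0.27, 0.29]
    a = fit_ar(history, order=2)
    print(predict_next(history, a))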
-
Dynamic and aeroelastic analysis of advanced aircraft wings carrying external stores (Lisansüstü Eğitim Enstitüsü, 2021) Aksongur Kaçar, Alev ; Kaya, Metin Orhan ; 709160 ; Uçak ve Uzay Mühendisliği
This study investigates the dynamic and aeroelastic behavior of advanced aircraft wings carrying external stores under a follower force. The effects of the weight of the external stores, their position, their placement relative to each other, the orientation of the composite layers, and the thrust force were examined, and the influence of each on the natural frequency and the critical flutter speed of the wing was determined.
-
Experimental investigation of leading edge suction parameter on massively separated flow (Graduate School, 2021-05-10) Aydın, Egemen ; Yıldırım Çetiner, Nuriye Leman Okşan ; 511171150 ; Aerospace Engineering ; Uçak ve Uzay Mühendisliği
The study aims to investigate and understand the application of the Leading Edge Suction Parameter (LESP) to massively separated flow. The experiments were performed by gathering force data from the downstream flat plate, and the flow structures were visualized by Digital Particle Image Velocimetry. The experiments were conducted in the free-surface, closed-circuit, large-scale water channel located in the Trisonic Laboratory of Istanbul Technical University's Faculty of Aeronautics and Astronautics. The velocity of the channel is 0.1 m/s, which results in a Reynolds number of 10,000. During the experiments, the flat plate downstream of the gust generator (also a flat plate) is kept at a constant angle of attack, and the test cases are selected to show that the LESP parameter, derived from only one force component, works for different gust interactions with the flat plate. As already discussed in the literature, the critical LESP value depends only on the airfoil shape and the ambient Reynolds number. The critical LESP value is calculated in the literature as 0.05 for a flat plate at a Reynolds number of 10,000. We did not perform an experiment to find the critical LESP value, as our experiments were done with a flat plate at Re = 10,000. Combinations of different angles of attack and different gust impingements have shown that the LESP parameter works even in a highly unsteady gust environment. The flow structures around the airfoil leading edge behave as expected from LESP theory (leading-edge vortex separation and unification).
-
ÖgeImplementation of propulsion system integration losses to a supersonic military aircraft conceptual design( 2021-10-07) Karaselvi, Emre ; Nikbay, Melike ; 511171151 ; Aeronautics and Astronautics Engineering ; Uçak ve Uzay MühendisliğiMilitary aircraft technologies play an essential role in ensuring combat superiority from the past to the present. That is why the air forces of many countries constantly require the development and procurement of advanced aircraft technologies. A fifth-generation fighter aircraft is expected to have significant technologies such as stealth, low-probability of radar interception, agility with supercruise performance, advanced avionics, and computer systems for command, control, and communications. As the propulsion system is a significant component of an aircraft platform, we focus on propulsion system and airframe integration concepts, especially in addressing integration losses during the early conceptual design phase. The approach is aimed to be appropriate for multidisciplinary design optimization practices. Aircraft with jet engines were first employed during the Second World War, and the technology made a significant change in aviation history. Jet engine aircraft, which replaced propeller aircraft, had better maneuverability and flight performance. However, substituting a propeller engine with a jet engine required a new design approach. At first, engineers suggested that removing the propellers could simplify the integration of the propulsion system. However, with jet engines for fighter aircraft, new problems arose due to the full integration of the propulsion system and the aircraft's fuselage. These problems can be divided into two parts: designing air inlet, air intake integration, nozzle/afterbody design, and jet interaction with the tail. The primary function of the air intake is to supply the necessary air to the engine with the least amount of loss. However, the vast flight envelope of the fighter jets complicates the air intake design. Spillage drag, boundary layer formation, bypass air drag, and air intake internal performance are primary considerations for intake system integration. The design and integration of the nozzle is a challenging engineering problem with the complex structure of the afterbody and the presence of jet and free-flow mix over control surfaces. The primary considerations for the nozzle system are afterbody integration, boat-tail drag, jet flow interaction, engine spacing for twin-engine configuration, and nozzle base drag. Each new generation of aircraft design has become a more challenging engineering problem to meet increasing military performances and operational capabilities. This increase is due to higher Mach speeds without afterburner, increased acceleration capability, high maneuverability, and low visibility. Tradeoff analysis of numerous intake nozzle designs should be carried out to meet all these needs. It is essential to calculate the losses caused by different intakes and nozzles at the conceptual design of aircraft. Since the changes made after the design maturation delay the design calendar or changes needed in a matured design cause high costs, it is crucial to accurately present intake and nozzle losses while constructing the conceptual design of a fighter aircraft. This design exploration process needs to be automated using numerical tools to investigate all possible alternative design solutions simultaneously and efficiently. 
Therefore, spillage drag, bypass drag, boundary layer losses due to intake design, boat-tail drag, nozzle base drag, and engine spacing losses due to nozzle integration are examined within the scope of this thesis. This study is divided into four main titles. The first section, "Introduction", summarizes previous studies on this topic and presents the classification of aircraft engines. Then the problems encountered while integrating the selected aircraft engine into the fighter aircraft are described under the "Problem Statement". In addition, the difficulties encountered in engine integration are divided into two zones. Problem areas are examined as inlet system and afterbody system. The second main topic, "Background on Propulsion," provides basic information about the propulsion system. Hence, the Brayton cycle is used in aviation engines. The working principle of aircraft engines is described under the Brayton Cycle subtitle. For the design of engines, numbers are used to standardize engine zone naming to present a common understanding. That is why the engine station numbers and the regions are shown before developing the methodology. The critical parameters used in engine performance comparisons are thrust, specific thrust and specific fuel consumption, and they are mathematically described. The Aerodynamics subtitle outlines the essential mathematical formulas to understand the additional drag forces caused by propulsion system integration. During the thesis, ideal gas and isentropic flow assumptions are made for the calculations. Definition of drag encountered in aircraft and engine integration are given because accurate definitions prevent double accounting in the calculation. Calculation results with developed algorithms and assumptions are compared with the previous studies of Boeing company in the validation subtitle. For comparison, a model is created to represent the J79 engine with NPSS. The engine's performance on the aircraft is calculated, and given definitions and algorithms add drag forces to the model. The results are converged to Boeing's data with a 5% error margin. After validation, developed algorithms are tested with 5th generation fighter aircraft F-22 Raptor to see how the validated approach would yield results in the design of next-generation fighter aircraft. Engine design parameters are selected, and the model is developed according to the intake, nozzle, and afterbody design of the F-22 aircraft. A model equivalent to the F-119-PW-100 turbofan engine is modeled with NPSS by using the design parameters of the engine. Additional drag forces calculated with the help of algorithms are included in the engine performance results because the model is produced uninstalled engine performance data. Thus, the net propulsive force is compared with the F-22 Raptor drag force Brandtl for 40000 ft. The results show that the F-22 can fly at an altitude of 40000 ft, with 1.6M, meeting the aircraft requirements. In the thesis, a 2D intake assumption is modeled for losses due to inlet geometry. The effects of the intake capture area, throat area, wedge angle, and duct losses on motor performance are included. However, the modeling does not include a bump intake structure similar to the intake of the F-35 aircraft losses due to 3D effects. CFD can model losses related to the 3D intake structure, and test results and thesis studies can be developed. The circular nozzle, nozzle outlet area, nozzle throat area, and nozzle maximum area are used for modeling. 
The movement of the nozzle blades is included in the model depending on the boattail angle and the base area. The work of McDonald & Hughes is used as a reference to represent the two-dimensional nozzle. The method described in this thesis is one way of accounting for installation effects in supersonic aircraft. Additionally, the concept works for aircraft with conventional shock inlets or oblique shock inlets flying at speeds up to Mach 2.5. The implementation of the equations in NPSS enables aircraft manufacturers to calculate the influence of installation effects on engine performance. The study reveals a methodology for calculating the additional drag caused by engine-aircraft integration in the conceptual design phase of next-generation fighter aircraft. In this way, the losses caused by the propulsion system can be calculated accurately by the developed approach in projects where the aircraft and engine designs have not yet matured. If the presented drag definitions are not included during conceptual design, significant changes may be needed at later design stages as the aircraft design evolves. Making changes in an evolved design can bring enormous costs or extend the design calendar.
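For completeness, the uninstalled-thrust bookkeeping behind the thrust, specific thrust, and specific fuel consumption comparisons mentioned above can be sketched as follows, under the thesis's ideal-gas and isentropic-flow assumptions; this is the generic textbook relation with placeholder numbers, not the NPSS model itself.

    def uninstalled_thrust(mdot_air, f, V_exit, V_flight, p_exit, p_ambient, A_exit):
        # F = mdot * [(1 + f) * Ve - V0] + (pe - p0) * Ae   (momentum + pressure thrust)
        return mdot_air * ((1.0 + f) * V_exit - V_flight) + (p_exit - p_ambient) * A_exit

    def specific_thrust(F, mdot_air):
        return F / mdot_air              # N per (kg/s) of air

    def tsfc(mdot_fuel, F):
        return mdot_fuel / F             # kg/s of fuel per N of thrust

    # Placeholder operating point (not F119 data)
    F = uninstalled_thrust(mdot_air=70.0, f=0.02, V_exit=600.0, V_flight=250.0,
                           p_exit=101325.0, p_ambient=101325.0, A_exit=0.3)
    print(specific_thrust(F, 70.0), tsfc(mdot_fuel=0.02 * 70.0, F=F))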
-
ÖgeInvestigations on the effects of conical bluff body geometry on nonpremixed methane flames(Graduate Institute, 2021) Ata, Alper ; Özdemir, İlyas Bedii ; 675677 ; Department of Aeronautics and Astronautics EngineeringThis thesis is composed of three experimental studies, of which the first two are already published, and the third is under peer review. The first study investigates the effects of a stabilizer and the annular co-flow air speed on turbulent nonpremixed methane flames stabilized downstream of a conical bluff body. Four bluff body variants were designed by changing the outer diameter of a conically shaped object. The co-flow velocity was varied from zero to 7.4 m/s, while the fuel velocity was kept constant at 15 m/s. Radial distributions of temperature and velocity were measured in detail in the recirculation zone at vertical locations of 0.5D, 1D, and 1.5D. Measurements also included the CO2, CO, NOx, and O2 emissions at points downstream of the recirculation region. Flames were visualized under 20 different conditions, revealing various modes of combustion. The results evidenced that not only the co-flow velocity but also the bluff body diameter play important roles in the structure of the recirculation zone and, hence, the flame behavior. The second study analyzes the flow, thermal, and emission characteristics of turbulent nonpremixed CH4 flames for three burner heads of different cone heights. The fuel velocity was kept constant at 15 m/s, while the coflow air speed was varied between 0 – 7.4 m/s. Detailed radial profiles of the velocity and temperature were obtained in the bluff body wake at three vertical locations of 0.5D, 1D, and 1.5D. Emissions of CO2, CO, NOx, and O2 were also measured at the tail end of every flame. Flames were digitally photographed to support the point measurements with the visual observations. Fifteen different stability points were examined, which were the results of three bluff body variants and five coflow velocities. The results show that a blue-colored ring flame is formed, especially at high coflow velocities. The results also illustrate that, depending on the mixing at the bluff-body wake, the flames exhibit two modes of combustion regimes, namely fuel jet- and coflow-dominated flames. In the jet-dominated regime, the flames become longer compared to the flames of the coflow-dominated regime. In the latter regime, emissions were largely reduced due to the dilution by the excess air, which also surpasses their production. The final study examines the thermal characteristics of turbulent nonpremixed methane flames stabilized by four burner heads with the same exit diameter but different heights. The fuel flow rate was kept constant with an exit velocity of 15 m/s, while the co-flow air speed was increased from 0 to 7.6 m/s. The radial profiles of the temperature and flame visualizations were obtained to investigate the stability limits. The results evidenced that the air co-flow and the cone angle have essential roles in the stabilization of the flame: Increase in the cone angle and/or the co-flow speed deteriorated the stability of the flame, which eventually tended to blow-off. As the cone angle was reduced, the flame was attached to the bluff body. However, when the cone angle is very small, it has no effect on stability. The mixing and entrainment processes were described by the statistical moments of the temperature fluctuations. 
It appears that the rise in temperature coincides with the intensified mixing, and it becomes constant in the entrainment region.
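As a brief illustration of the statistical-moment description mentioned above (not the thesis's actual processing chain), the first four moments of a measured temperature time series can be computed as:

    import numpy as np
    from scipy.stats import skew, kurtosis

    def temperature_moments(T_samples):
        # Mean, RMS of the fluctuation, skewness and kurtosis of T'(t) = T - <T>
        T = np.asarray(T_samples, dtype=float)
        fluctuation = T - T.mean()
        return {
            "mean": T.mean(),
            "rms": np.sqrt(np.mean(fluctuation ** 2)),
            "skewness": skew(fluctuation),
            "kurtosis": kurtosis(fluctuation),   # excess kurtosis (0 for a Gaussian)
        }

    # Placeholder thermocouple record in kelvin
    print(temperature_moments([1210.0, 1235.0, 1198.0, 1242.0, 1225.0, 1217.0]))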
-
Numerical and experimental study of fluid structure interaction in a reciprocating piston compressor (Graduate School, 2022-01-14) Coşkun, Umut Can ; Acar, Hayri ; Güneş, Hasan ; 511132113 ; Aeronautics and Astronautics Engineering
Consisting of household refrigerators, cold storages, cold chain logistics, industrial freezers, air conditioners, cryogenics, and heat pumps, the refrigeration industry is a vital part of many sectors such as food, health care, air conditioning, sports, leisure, and the production of plastics and chemicals, along with electronic data processing centers and scientific research facilities, which cannot operate without refrigeration. There are roughly 5 billion refrigeration systems in operation, which consume 20% of the electricity used worldwide, are responsible for 7.8% of global GHG emissions, account for 500 billion USD in annual equipment sales, and employ 15 million people. Around 37% of the global warming impact caused by refrigeration comes from the direct emission of fluorinated refrigerants (CFCs, HCFCs, and HFCs), while 63% is due to the indirect emissions caused by the electricity generation required for refrigeration. Both the economic goals of making refrigeration units cheaper and more durable and the environmental concerns of making these units more efficient and less hazardous for the world require meticulous research and study on these refrigeration units. Approximately 40% of refrigeration units are domestic refrigeration systems alone, in which mostly hermetic, reciprocating-type compressors are used. The design and improvement of such compressors is a multidisciplinary subject and requires a deep understanding of the heat and momentum transfer between the refrigerant and the solid components of the compressor, which can only be gained through scientific investigation using experimental and numerical techniques. In this thesis study, considering the advantages of numerical studies, a multi-physics numerical model of the flow through the gas line of a household, hermetically sealed, reciprocating piston compressor and of the fluid-structure interaction around the valve reeds, including the contact between deformable parts, was developed. Owing to the complexity of the model, the problem was divided into several steps, and at each step the numerical results were validated with experiments. In the first chapter of this thesis, the motivation behind the study is discussed; a theoretical background on refrigeration, compressors, and fluid-structure interaction, together with a comprehensive literature survey, is summarized to express the position of the thesis within the academic literature and its novelty. In the second chapter, the experimental studies conducted throughout the thesis are presented. The experimental studies are divided into two sections. In the first section, the valve reed dynamics are investigated experimentally outside the compressor under multiple test conditions. A test rig is built for this purpose, and the displacement of the valve reed under a constant point load, its free oscillation, and the impact of the valve reed on the valve plate from a pre-deformed shape are measured in order to validate the numerical work. In the second section, the compressor specifications, such as the cooling capacity, compression work, and average refrigerant mass flow rate, along with the surface temperature and the instantaneous pressure variation at several locations inside the compressor, are measured in a calorimeter setup to provide boundary conditions and validation for the numerical analyses. The numerical work of the thesis is explained in the third chapter.
Modelling the whole compressor gas line between the compressor inlet and outlet, including the strongly coupled interaction between the refrigerant and deformable solid parts such as the valve reeds, is too complex to attempt in a single step. Therefore, the numerical problem was divided into seven smaller problems and investigated consecutively. At each step, the sub-problems were isolated, identified, and solved, and the results were validated. With each consecutive step, the similarity to the final model increases, and the complexity grows as a natural consequence. The numerical studies also briefly cover the advantages and disadvantages of using an open-source versus a commercial multi-physics solver; OpenFOAM and Ansys Workbench are used for this purpose, respectively. After the simplified steps of the numerical model are completed, the whole gas line of a compressor produced by Arçelik is modelled. The numerical results are compared against the experimentally obtained data, and good agreement is achieved. The developed method is further used for a parametric investigation of the compressor design to show the capabilities and benefits of the numerical model. Finally, the results of the whole study, the experience gained throughout the thesis work, and the planned future work are discussed in the final chapter.
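Although the full model described above couples OpenFOAM or Ansys Workbench flow solutions to deformable valve reeds with contact, the basic fluid-structure coupling idea can be illustrated, in drastically reduced form, with a single-degree-of-freedom valve reed sketch: an effective reed mass, stiffness, and damping, a pressure difference supplying the fluid load, and a crude contact condition at the valve seat. The Python sketch below is only an editorial illustration; the reed mass, stiffness, damping, port area, and the sinusoidal pressure signal are hypothetical placeholders, not parameters or methods from the thesis.

    # Minimal single-degree-of-freedom sketch of a valve reed driven by a
    # prescribed pressure difference. NOT the thesis's 3D FSI model: all
    # parameters below are assumed placeholders for illustration only.
    import math

    m = 1.0e-4      # effective reed mass [kg] (assumed)
    k = 500.0       # effective bending stiffness [N/m] (assumed)
    c = 0.02        # structural damping [N*s/m] (assumed)
    area = 2.0e-5   # effective port area [m^2] (assumed)

    def delta_p(t):
        # Hypothetical pressure difference across the valve [Pa].
        return 5.0e4 * max(0.0, math.sin(2.0 * math.pi * 50.0 * t))

    x, v = 0.0, 0.0           # reed lift [m] and velocity [m/s]
    dt, t_end = 1.0e-6, 0.02  # explicit time step and simulated duration [s]

    t = 0.0
    while t < t_end:
        force = area * delta_p(t) - k * x - c * v  # fluid load minus elastic and damping reactions
        v += dt * force / m                        # semi-implicit Euler: velocity first,
        x += dt * v                                # then displacement
        if x < 0.0:                                # crude contact with the valve plate (seat)
            x, v = 0.0, 0.0                        # fully plastic impact assumption
        t += dt

    print(f"final reed lift: {x * 1e3:.3f} mm")

A partitioned FSI solver replaces the prescribed delta_p(t) with a flow solution that itself depends on the reed lift, which is what makes the coupling in the thesis strong and the step-by-step validation strategy necessary.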
-
ÖgeOptimization-based control of cooperative and noncooperative multi-aircraft systems(2020) Başpınar, Barış ; Koyuncu, Emre ; 625456 ; Uçak ve Uzay MühendisliğiIn this thesis, we mainly focus on developing methods that ensure autonomous control of cooperative and noncooperative multi-aircraft systems. In particular, we focus on aerial combat, the air traffic control problem, and the control of multiple UAVs. We propose two different optimization-based approaches and implement them in civil and military applications. In the first method, we benefit from hybrid system theory to represent the input space of the decision process. Then, using a problem-specific evaluation strategy, we formulate an optimization problem in the form of an integer/linear program to generate the optimal strategy. As a second approach, we design a method that generates control inputs as continuous real-valued functions instead of predefined maneuvers. In this case, we benefit from differential flatness theory and flatness-based control. We construct optimization problems in the form of mixed-integer linear programs (MILP) and non-convex optimization problems. In both methods, we also benefit from game theory when there are competing decision makers. We give the details of the approaches for both civil and military applications. We present the details of the hybrid maneuver-based method for air-to-air combat. We use the performance parameters of the F-16 to model the aircraft for military applications. Using hybrid system theory, we describe the basic and advanced fighter maneuvers; these maneuvers constitute the input space of the aerial combat. We define a set of metrics to quantify air superiority. Then, the optimal strategy generation procedure is formulated as a linear program. Afterwards, we use a similar maneuver-based optimization approach to model the decision process of the air traffic control operator (ATCo). We mainly focus on providing a scalable, fully automated air traffic control (ATC) system and on re-determining the airspace capacity via the developed system. First, we present an aircraft model for civil aviation applications and describe guidance algorithms for trajectory tracking. This model and these algorithms are used to simulate and predict the motion of the aircraft. Then, the ATCo's interventions are modelled as a set of maneuvers. We propose a mapping process to improve the performance of separation assurance and formulate an integer linear program (ILP) that exploits this mapping to ensure safety in the airspace. Thereafter, we propose a method to re-determine the airspace capacity. We create a stochastic traffic environment to simulate traffic at different complexity levels and define the breaking point of an airspace with respect to different metrics. The approach is validated on real air traffic data for en-route airspace, and it is shown that the designed ATC system can manage traffic much denser than current levels. As a second approach, we develop a method that generates control inputs as continuous real-valued functions instead of predefined maneuvers; it is also optimization-based. First, we focus on the control of multi-aircraft systems. We use Signal Temporal Logic (STL) specifications to encode the missions of the multiple aircraft. We benefit from differential flatness theory to construct a mixed-integer linear program (MILP) that generates optimal trajectories satisfying the STL specifications and performance constraints. We utilize air traffic control tasks to illustrate our approach.
We present a realistic nonlinear aircraft model as a partially differentially flat system and apply the proposed method to managing approach control and solving the arrival sequencing problem. We also simulate a case study with a quadrotor fleet to show that the method can be used with different multi-agent systems. Afterwards, we use a similar flatness-based optimization approach to solve the aerial combat problem. In this case, we benefit from differential flatness, curve parametrization, game theory, and receding horizon control. We present the flat description of the aircraft dynamics for military applications. We parametrize the aircraft trajectories in terms of flat outputs. With the help of game theory, the aerial combat is modeled as an optimization problem over the parametrized trajectories. This method allows the problem, with all of its given and dynamical constraints, to be represented in a lower-dimensional space, which speeds up the strategy generation process. The optimization problem is solved with a moving-time-horizon scheme to generate optimal combat strategies. We demonstrate the method on aerial combat between two UAVs and show its success in two different scenarios.
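As a rough illustration of what differential flatness buys in this setting, consider a planar kinematic aircraft model with states recovered from the flat outputs (x(t), y(t)): the speed, heading, and turn rate follow algebraically from the flat-output trajectory, so constraints can be imposed directly on its parametrization. The Python sketch below uses hypothetical polynomial flat outputs and this simplified 2D model, not the thesis's partially flat nonlinear aircraft model or its MILP formulation.

    # Minimal sketch of flatness-based state recovery for a planar kinematic
    # aircraft model x' = V cos(psi), y' = V sin(psi). The polynomial flat
    # outputs below are hypothetical placeholders.
    import numpy as np

    t = np.linspace(0.0, 60.0, 601)        # time samples [s]
    x = 200.0 * t + 0.5 * t**2             # flat output x(t) [m] (assumed)
    y = 100.0 * t - 0.2 * t**2             # flat output y(t) [m] (assumed)

    # States and inputs recovered algebraically from the flat outputs.
    xdot, ydot = np.gradient(x, t), np.gradient(y, t)
    V = np.hypot(xdot, ydot)               # airspeed [m/s]
    psi = np.unwrap(np.arctan2(ydot, xdot))  # heading [rad]
    psidot = np.gradient(psi, t)           # turn-rate input [rad/s]

    # Limits such as V_min <= V <= V_max or |psidot| <= bound can now be
    # written as constraints on the flat-output parametrization.
    print(f"V range: {V.min():.1f} .. {V.max():.1f} m/s, "
          f"max |turn rate|: {np.abs(psidot).max():.4f} rad/s")

In the thesis, analogous algebraic relations for the partially flat aircraft model are what allow dynamical limits and mission specifications to enter the MILP and the receding-horizon combat problem as constraints on the parametrized trajectories.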
-
ÖgeMeasurement of the service quality of technology development zones: An application across Turkey(Fen Bilimleri Enstitüsü, 2020) Özyurt, Mehmet Akif ; Özkol, İbrahim ; 656880 ; Uçak ve Uzay Mühendisliği Ana Bilim DalıProducts based on the production of knowledge and on its output, technological production, have left their mark on our era, and the period we live in has been called the "Information Age" by many thinkers. In this age, the power of the countries at the center of high-technology production stems not from the size of their land or capital but from the size of their well-educated workforce and from channelling that workforce into high-technology production. Countries with highly educated populations also have high production quality and output. In the current century, the pace of scientific and technological development has increased enormously; most of these advances have occurred within the last 30 years, and the pace keeps accelerating. It is therefore not unreasonable to expect that, even in the near term, a world far more advanced scientifically and technologically than today's will emerge. High-technology production has become the most decisive element in the race for competitive advantage. Consequently, increasing competitiveness no longer depends merely on cutting costs or responding quickly to consumer preferences and demands, but on continuous improvement, innovation, and invention. Those who succeed in turning technological findings into marketable products or services, into new production or distribution methods, or into new service mechanisms, that is, in producing technological innovation, now dominate world markets. Zones that host companies and institutions producing high value-added products, where such R&D-based technological developments and innovations are generated and where well-educated personnel are employed, are called "technoparks" or, under the name given by the relevant law in Turkey, "Technology Development Zones" (TGB). Conceptually, technoparks are instruments that help establish and disseminate the flow of science and technology between R&D performers, universities, and industrial firms. In addition, through the synergy created by incubation mechanisms, technoparks facilitate the development of science- and technology-based firms. In these zones, firms are encouraged to innovate by means of high technology and support instruments, and high value-added products are created as a result. The International Association of Science Parks, in turn, defines technoparks as organizations managed by specialized professional teams whose main aim is to increase the welfare of society by promoting a culture of innovation and the competitiveness of their businesses and knowledge-based institutions. To achieve these goals, technoparks establish and manage the flow of knowledge and technology among universities, R&D performers, and firms; facilitate the formation and growth of innovation-oriented companies through incubation and spin-off mechanisms; and, by providing high-quality facilities, lay the groundwork for other value-adding companies and services to emerge. In line with these definitions, it can also be said that technoparks, or TGBs, are in fact science and technology clusters.
This is because, in a general sense, technoparks are also described as clusters of enterprises that come together around innovative ideas, produce or use advanced technology, market that technology, and draw on an R&D center or a university. The variety of these definitions stems from differences in the size of technoparks and in their fields of activity. As hubs where high-technology producers locate, technoparks are used as effective instruments for increasing employment opportunities, developing industry by building up the necessary knowledge base, supporting firms, together with universities, in expanding training opportunities, and increasing the number of SMEs as well as supporting them. From this perspective, one of the most fundamental aims of technoparks is to establish cooperation among universities, industry, and the state and, through the creation of knowledge- and technology-intensive sites, to raise regional, national, and international competitiveness and thereby contribute to national development. Technoparks are areas with new, high-technology infrastructure that change a country's employment structure for the better and are an important factor in reducing unemployment; examples can be seen in developed and industrialized countries with long-standing technopark experience. Under the influence of this change and development, the sectoral distribution of employment is also shifting. In the past, the share of the labor force in agriculture and industry was regarded as a measure of development; today, the share of employment in the technology sector serves as that measure. In Germany, for example, a developed country, the formerly high employment in agriculture and traditional industries has declined sharply, and employment has shifted toward sectors producing high-technology products. Technoparks also pursue the idea that all actors in the university-industry-government triangle should benefit, so that firms lacking the resources to invest in R&D are supported and knowledge produced at universities is commercialized and transferred to them. The technopark interface created for this purpose is expected to make significant contributions to the economic structure of the university, the industry, the region, and the country. Indeed, the knowledge flowing from technoparks to industry plays an effective role in modernizing industrial production and in making the production base knowledge- and technology-driven. In other words, through technoparks, the aim is for industry to access the knowledge produced at universities and for that knowledge to find practical application in industry. This study aims to reveal the gap between the service quality offered by the technoparks operating in Turkey and the service quality perceived by the actors who use these services, and to determine the satisfaction levels of customers (R&D performers) using the Servqual scale. The study also investigates whether there is a relationship between how long a technopark has been in operation and its customers' perceptions of its service quality. For firms that have moved between technoparks, it further aims to determine the influence of service quality on the decision to change technoparks. Finally, the technoparks operating in Turkey are ranked in terms of service quality using the Vikor method.
The research draws on the service measurement factors of the Servqual scale. The Servqual instrument, developed by Parasuraman et al. (1988) to determine service quality, has been used frequently to date in service businesses ranging from sports facilities to hotel services. This study is the first, either in Turkey or abroad, to use the scale to measure the service quality of technoparks treated as service businesses. For this reason, the scale was first adapted to technoparks, the reliability and validity of this adaptation were verified, and only then were the analyses carried out. The research uses the "Tangibles (Physical Features)", "Reliability", "Responsiveness", "Assurance (Competence)", and "Empathy" factors of the Servqual service quality scale. The tangibles factor covers the physical appearance of the equipment used in the buildings, the communication materials, and the employees. The reliability factor is used to determine whether technoparks deliver their services on time and correctly. The responsiveness factor measures a technopark's willingness to help its customers, to provide prompt service, and to complete work on time. The assurance (competence) factor is used to measure whether the service personnel working in the technoparks possess the necessary and sufficient knowledge. The empathy factor aims to determine the respect, courtesy, and sincerity shown by employees who are in direct contact with customers. By measuring the service quality levels of technoparks, this study will also play a pioneering role for future scientific research. Since no comparable study exists either in Turkey or abroad, the results are also expected to be of great importance for the managing companies of technoparks.
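For concreteness, the two scoring steps described above can be sketched with made-up numbers: a Servqual gap score is simply the mean perception rating minus the mean expectation rating per dimension, and Vikor ranks alternatives by combining group utility (S), individual regret (R), and a compromise index (Q). The Python sketch below uses hypothetical ratings and equal weights, not data or results from the study; the formulas are the standard textbook versions.

    # Servqual gap scores and a VIKOR ranking with hypothetical placeholder
    # data (1-5 Likert means); not the thesis's survey results.
    import numpy as np

    dims = ["Tangibles", "Reliability", "Responsiveness", "Assurance", "Empathy"]
    expectation = np.array([4.6, 4.7, 4.5, 4.4, 4.3])   # assumed means
    perception  = np.array([4.1, 4.0, 3.8, 4.2, 3.9])   # assumed means
    gap = perception - expectation     # negative gap = expectations not met
    for d, g in zip(dims, gap):
        print(f"{d:15s} gap = {g:+.2f}")

    # VIKOR ranking of three hypothetical technoparks using per-dimension
    # gaps as criteria (higher, i.e. less negative, is better).
    F = np.array([[-0.5, -0.7, -0.7, -0.2, -0.4],   # technopark A
                  [-0.3, -0.4, -0.6, -0.5, -0.2],   # technopark B
                  [-0.8, -0.2, -0.3, -0.4, -0.6]])  # technopark C
    w = np.full(F.shape[1], 1.0 / F.shape[1])        # equal weights (assumed)
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    span = np.where(f_best - f_worst == 0.0, 1.0, f_best - f_worst)
    D = w * (f_best - F) / span          # weighted normalized distances to the ideal
    S, R = D.sum(axis=1), D.max(axis=1)  # group utility and individual regret
    v = 0.5                              # usual compromise weight
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    print("VIKOR ranking (best first):", [chr(65 + int(i)) for i in np.argsort(Q)])

In the study itself, the gaps come from the adapted Servqual questionnaire and the ranking is applied across the technoparks surveyed in Turkey; the sketch only shows the mechanics of the two methods.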