LEE - Mathematics Engineering - Master's Degree
Recent Submissions
Suppression of symmetry-breaking bifurcations of optical solitons in parity-time symmetric potentials (Graduate School, 2022)

An optical soliton is an optical field that maintains its structure during propagation because diffraction is balanced by the self-phase modulation of the medium. The dynamics of optical solitons are investigated comprehensively due to their fundamental structure and potential applications. In particular, optical solitons play an important role in fiber-optic communication systems, which use pulses of infrared light to transmit information over long distances. The propagation of the electromagnetic wave in optical fibers is modeled by the cubic-quintic nonlinear Schrödinger (CQNLS) equation $$i\Psi_z+\Psi_{xx}+\alpha|\Psi|^2\Psi+\beta|\Psi|^4\Psi=0,$$ where $\Psi(x,z)$ is the normalized complex-valued slowly varying pulse envelope of the electric field, $z$ is the scaled propagation distance, $x$ is the transverse coordinate, $\Psi_{xx}$ corresponds to diffraction, and $\alpha$ and $\beta$ are the coefficients of the cubic and quintic nonlinearities, respectively. Higher-order dispersion needs to be considered for performance enhancement over trans-oceanic and trans-continental distances; in particular, fourth-order dispersion must be taken into account for short pulse widths, where the group velocity dispersion changes within the spectral bandwidth of the signal. In addition, it is known from many studies in the literature that an external potential added to the system can also be beneficial for performance improvement.

In this thesis, we consider nonlinear paraxial beam propagation in a cubic-quintic nonlinear medium with a complex parity-time (PT) symmetric potential and fourth-order dispersion. This propagation is modeled by the CQNLS equation $$i\Psi_z+\Psi_{xx}-\gamma\Psi_{xxxx}+V(x)\Psi+\alpha|\Psi|^2\Psi+\beta|\Psi|^4\Psi=0,$$ where $\gamma>0$ is the coupling constant of the fourth-order dispersion and $V(x)$ is a complex PT-symmetric potential. We consider PT-symmetric potentials of the form $$V(x)=g^2(x)+c_0\,g(x)+ig'(x),$$ where $g(x)$ is an arbitrary real and even function and $c_0$ is an arbitrary real constant, for which PT-symmetric solitons undergo symmetry breaking. We take a localized double-hump function $g(x)$ of the form $$g(x)=A\left[e^{-(x+x_0)^2}+e^{-(x-x_0)^2}\right],$$ where $A$ and $x_0$ are related to the modulation strength and the separation of the PT-symmetric potential, respectively.

Since the equation is nonintegrable, the soliton solutions of the CQNLS equation with fourth-order dispersion and a complex PT-symmetric potential are obtained numerically by means of the squared-operator method. The linear stability of the numerically obtained solitons is examined through their linear spectra, and their nonlinear stability is examined by nonlinear evolution with the split-step Fourier method. The existence of symmetry breaking of solitons and the suppression of symmetry-breaking bifurcations are investigated. To examine the effect of fourth-order dispersion on this symmetry breaking, the coefficient $\gamma$ is incremented gradually. We demonstrate that the symmetry-breaking bifurcation of the solitons in this problem is completely suppressed as the strength of the fourth-order dispersion increases. Moreover, increasing the strength of the fourth-order dispersion positively influences both the linear and nonlinear stability behavior of the solitons.
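For readers unfamiliar with the split-step Fourier method named above, the following is a minimal Python sketch of how the nonlinear evolution of the governing CQNLS equation can be carried out. The grid size, the values of $\alpha$, $\beta$, $\gamma$, $c_0$, $A$, $x_0$, and the Gaussian initial beam are illustrative assumptions of this sketch, not the thesis settings; an actual stability test would start from a soliton computed with the squared-operator method.

```python
import numpy as np

# Computational grid and illustrative model parameters (placeholders).
N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)          # spectral wavenumbers
alpha, beta, gamma, c0, A, x0 = 1.0, -0.1, 0.1, 0.5, 1.0, 1.5

# Complex PT-symmetric potential V(x) = g^2 + c0*g + i*g' with a double-hump g.
g  = A*(np.exp(-(x + x0)**2) + np.exp(-(x - x0)**2))
gp = A*(-2*(x + x0)*np.exp(-(x + x0)**2) - 2*(x - x0)*np.exp(-(x - x0)**2))
V  = g**2 + c0*g + 1j*gp

def ssf_step(psi, dz):
    """One second-order (Strang) split step for
    i*psi_z + psi_xx - gamma*psi_xxxx + V*psi + alpha*|psi|^2*psi + beta*|psi|^4*psi = 0."""
    half_linear = np.exp(-1j*(k**2 + gamma*k**4)*dz/2)  # diffraction + 4th-order dispersion
    psi = np.fft.ifft(half_linear*np.fft.fft(psi))
    psi = psi*np.exp(1j*(V + alpha*np.abs(psi)**2 + beta*np.abs(psi)**4)*dz)
    return np.fft.ifft(half_linear*np.fft.fft(psi))

psi = np.exp(-x**2)                 # illustrative Gaussian beam, not a computed soliton
for _ in range(2000):               # propagate to z = 2
    psi = ssf_step(psi, 1e-3)
```

A stable soliton would keep $|\Psi|$ essentially unchanged over the propagation distance, which is how the nonlinear stability behavior is assessed.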
Biconservative and biharmonic surfaces in Euclidean and Minkowski spaces (Graduate School, 2024-07-02)

In 1964, Eells and Sampson formulated the concept of biharmonic maps as an extension of harmonic maps while investigating the energy functional $E$ between Riemannian manifolds, a subject of both geometric and physical significance. Subsequently, numerous mathematicians have shown interest in the study of biharmonic maps.

By definition, the bienergy functional between semi-Riemannian manifolds $(M^m,g)$ and $(N^n,\tilde{g})$ is defined by $$E_2(\varphi)=\frac{1}{2}\int_M \|\tau(\varphi)\|^2 v_g$$ for a smooth map $\varphi:M\to N$, where $\tau(\varphi)$ denotes the tension field of $\varphi$. The map $\varphi:M\to N$ is said to be biharmonic if it is a critical point of $E_2$. This condition is equivalent to satisfying the Euler-Lagrange equation associated with the bienergy functional, $$\tau_2(\varphi)=0,$$ where $\tau_2$ is the bitension field defined by $$\tau_2(\varphi):=\Delta\tau(\varphi)-\mathrm{tr}\,\tilde{R}(d\varphi,\tau(\varphi))d\varphi.$$

In the 1980s, B. Y. Chen studied biharmonic submanifolds of Euclidean spaces as part of his program to understand submanifolds of finite type in semi-Euclidean spaces, and he proposed another characterization of biharmonic submanifolds of these spaces. Let $x:M^m\to\mathbb E^n_r$ be an isometric immersion of an $m$-dimensional semi-Riemannian submanifold $M^m$ into the semi-Euclidean space $\mathbb E^n_r$. By examining the normal and tangential parts of $\tau_2(x)$, one obtains the following result: if $x$ satisfies the fourth-order semi-linear PDE system given by the equations $$\Delta^\perp H+\mathrm{trace}\,h(A_H(\cdot),\cdot)=0$$ and $$m\,\mathrm{grad}\Vert H\Vert^2+4\,\mathrm{trace}\,A_{\nabla^\perp_\cdot H}(\cdot)=0,$$ then $M^m$ is biharmonic.

On the other hand, if a mapping $\varphi:M\to N$ satisfies the weaker condition $$\langle\tau_2(\varphi),d\varphi\rangle=0,$$ then it is said to be biconservative. In particular, if $x:M\to N$ is an isometric immersion, then the previous equation is equivalent to $$\tau_2(x)^\top=0,$$ where $\tau_2(x)^\top$ denotes the tangential part of $\tau_2(x)$. In this case, $M$ is said to be a biconservative submanifold of $N$.

In this thesis, we mainly focus on biharmonic and biconservative surfaces in four-dimensional Euclidean and Minkowski spaces. The first section provides a concise overview of the historical background and underlying principles concerning biharmonic and biconservative submanifolds, as well as a survey of the research conducted so far in this field. In the second section, we fix notation, recall basic facts about submanifolds of semi-Euclidean spaces and the definition of biconservative submanifolds, and introduce rotational surfaces. In the third section, we study biconservative PNMCV surfaces in $\mathbb E^4$; we obtain local parameterizations of these surfaces and demonstrate that they are not biharmonic. In the fourth section, we study biconservative rotational surfaces in $\mathbb E_1^4$; we consider three different classes of rotational surfaces and obtain the condition for each of them to be biconservative. In the concluding section, the derived conclusions are presented, along with recommendations for possible future research.
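As a point of orientation for the tangential condition above, a specialization that is standard in the biconservative literature (though not stated in this abstract): for a hypersurface $x:M^m\to\mathbb E^{m+1}$ with shape operator $A$ and mean curvature function $H$, the second equation of Chen's system reduces to $$A(\mathrm{grad}\,H)=-\frac{m}{2}\,H\,\mathrm{grad}\,H,$$ so a hypersurface is biconservative exactly when this single equation holds; in particular, every constant-mean-curvature hypersurface satisfies it trivially, which is why the interesting examples have non-constant $H$.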
Innovative computational techniques for accurate internal defect detection in trees: A stress wave tomography approach enhanced by machine learning (Graduate School, 2024-06-10)

The detection of internal defects in trees is critically important both for the health of forest ecosystems and for the industrial value of wood products. Identifying these defects without damaging the wood is a significant factor in the forestry industry and in the production of wood products. While traditional methods often require cutting or processing the wood, non-invasive techniques such as stress wave tomography offer the possibility of identifying internal defects without disrupting the wood's structure. This contributes both to the sustainable management of forest resources and to the improvement of wood product quality. Machine learning algorithms, a branch of artificial intelligence, allow computer systems to analyze data, recognize patterns, make decisions, and solve problems. These algorithms are critical tools for analyzing the large datasets obtained from non-invasive techniques like stress wave tomography and for accurately detecting and classifying internal defects.

In this thesis, an algorithm capable of generating stress wave tomograms based on ray segmentation and machine learning has been developed for detecting internal defects in trees. A two-stage algorithm is proposed, based on the data obtained from stress waves produced by sensors mounted on trees and on the segmented propagation rays generated from these data. In the first stage, a ray segmentation method maps the velocity of the stress waves onto segmented rays. In the second stage, the data obtained from these segmented rays are processed using the K-Nearest Neighbors (KNN) and Gaussian Process Classifier (GPC) algorithms to create a tomographic image of the defects within the tree. The algorithm has the potential to detect internal defects in wood without causing damage and provides more precise results than traditional methods. Implemented in the Python programming language, the algorithm equips researchers with the ability to understand and analyze the internal structure of trees, and it stands out as a practical tool for contributing to forest health assessment and conservation through stress wave tomography.

During the experiments, data from four real trees were collected via sensors, and an algorithm was developed to generate four sets of synthetic defective-tree data in the same format. The real tree data were provided by the Istanbul University-Cerrahpaşa Faculty of Forestry. Each tree dataset was used individually to feed the proposed defect detection algorithm, and the outputs were transformed into tomographic images. Success rates above 90% were achieved for all evaluation metrics, an improvement of 7% to 22% over comparable results in the literature. This thesis aims to contribute to the development of a sustainable wood industry by offering a new approach to detecting internal tree defects. Although the results compare favorably with those in the scientific literature, it is expected that even better results can be obtained by optimizing the parameters of the algorithm or by varying the machine learning algorithms integrated into the method.
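To make the two-stage pipeline concrete, the following Python sketch segments the sensor-to-sensor rays, labels each segment with its ray's average velocity, and lets a KNN classifier interpolate a defect map over a cross-sectional grid. The sensor layout, the synthetic time-of-flight values, the velocity threshold, and the grid resolution are all illustrative assumptions of this sketch, not the thesis configuration; the GPC variant could be obtained by swapping in sklearn.gaussian_process.GaussianProcessClassifier.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# --- Stage 1: ray segmentation ---
# sensors: coordinates of S sensors around the trunk cross-section (unit circle here);
# tof[i, j]: synthetic time of flight of the stress wave from sensor i to sensor j.
S = 8
theta = 2*np.pi*np.arange(S)/S
sensors = np.c_[np.cos(theta), np.sin(theta)]
rng = np.random.default_rng(0)
tof = rng.uniform(0.8, 1.2, (S, S))

def segment_rays(sensors, tof, n_seg=20):
    """Split every sensor-to-sensor ray into short segments, each carrying
    the ray's average propagation velocity."""
    pts, vels = [], []
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            p, q = sensors[i], sensors[j]
            v = np.linalg.norm(q - p)/tof[i, j]    # mean velocity along this ray
            for t in np.linspace(0, 1, n_seg):
                pts.append(p + t*(q - p))
                vels.append(v)
    return np.array(pts), np.array(vels)

pts, vels = segment_rays(sensors, tof)

# --- Stage 2: classification into sound/defective regions ---
# Segments slower than an (illustrative) velocity threshold count as defect evidence.
labels = (vels < np.quantile(vels, 0.2)).astype(int)
clf = KNeighborsClassifier(n_neighbors=5).fit(pts, labels)

# Interpolate the labels over a grid to obtain the tomographic image.
gx, gy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
tomogram = clf.predict(np.c_[gx.ravel(), gy.ravel()]).reshape(64, 64)  # 1 = suspected defect
```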
Hopf bifurcation in a generalized Goodwin model with delay (Graduate School, 2024-06-26)

Delay differential equations have an important place in the theory of dynamical systems. While in a non-delayed dynamical system the rate of change of the state variables depends instantaneously on the state variables, in delayed dynamical systems this functional dependence can involve a time delay. In real-life problems this may occur, for example, when the signals transmitted to the processor of a physical system that collects and evaluates signals from different points in space arrive with a time difference due to the path difference. Methods and simulation tools are available in the literature for analysing the stability of a dynamical system formulated without delay, either locally at equilibrium points or globally. The "stable" and "unstable" conditions encountered in stability analysis can each be the target condition, depending on the physical model under investigation. For example, in a dynamical system that approximately models the vibrations of a structural element shaken by an earthquake, it is desired that the vibrations decay to the zero equilibrium point over time, i.e. that the zero equilibrium point be stable. In a mechanical system that is meant to generate energy from its vibrations, the target condition is that the vibrations are not damped. Stability analysis is performed to determine the parameter conditions under which the equilibrium points are stable or unstable. However, if the dynamical system modelling the relevant physical system actually has delayed dynamics, the system may in fact be unstable for a parameter set that the non-delayed analysis predicts to yield a stable equilibrium point. Therefore, the analysis needs to be carried out within the theory of delayed dynamical systems.

Goodwin's model is one of the well-known dynamical systems in macroeconomics; it formulates the mechanism between the employment ratio and the wage share in a closed economy. The model is formulated under the assumptions of steady technical progress and steady growth of the labour force. Only two factors of production are considered: labour and capital. The working class consumes all of its wages, whereas all profits are invested by the capital holders. A constant capital-output ratio is assumed, and the relation between the inflation rate and the unemployment rate is determined by a linearized Phillips curve. There is an argument in the literature that the functional dependence expressed by the Phillips curve, which relates the inflation rate to the unemployment rate, involves a time delay, yet only a few publications consider this dependence with a delay and dynamically analyse modified versions of the Goodwin model. The Goodwin model, which is essentially a mathematical-economics analogue of the predator-prey system of population dynamics, explains to some extent, despite its simplicity, the periodic behaviour of the state variables observed over certain time intervals.
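To make the delayed formulation concrete, here is a minimal Python sketch of the classical Goodwin system (wage share u, employment rate v) with the Phillips-curve term evaluated at a delayed employment rate v(t - tau). The parameter values, the constant history function, and the forward-Euler scheme with a delay buffer are illustrative assumptions of this sketch, not the thesis formulation.

```python
import numpy as np

# Classical Goodwin dynamics with a delayed linearized Phillips curve -g + rho*v(t - tau).
# u: wage share, v: employment rate. All parameter values are placeholders.
alpha, beta, sigma = 0.02, 0.01, 3.0  # productivity growth, labour-force growth, capital-output ratio
g, rho = 0.04, 0.07                   # Phillips curve coefficients
tau, dt, T = 2.0, 0.001, 200.0        # delay, step size, horizon

n_delay = int(tau/dt)
steps = int(T/dt)
u = np.empty(steps); v = np.empty(steps)
u[:n_delay+1] = 0.8                   # constant history on [-tau, 0]
v[:n_delay+1] = 0.9

for n in range(n_delay, steps - 1):
    v_lag = v[n - n_delay]            # delayed employment rate enters the wage dynamics
    u[n+1] = u[n] + dt*u[n]*(-g + rho*v_lag - alpha)
    v[n+1] = v[n] + dt*v[n]*((1 - u[n])/sigma - alpha - beta)

# Without delay (tau = 0) the orbits of this conservative system are closed curves
# around the equilibrium; increasing tau can destabilize it, which is where the
# Hopf bifurcation analysis of the characteristic equation comes in.
```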
Penalized stable regression (Graduate School, 2024-06-24)

In machine learning, data splitting is critical for developing accurate and consistent models. This process divides the data into separate sets for training, validation, and testing: the training set is used to train the models, the validation set assists in selecting the best parameters, and the test set allows for the assessment of the model's performance in real-world scenarios. Various data splitting techniques exist, each suited to specific characteristics of the data set and modeling objectives, such as the one-time split and k-fold cross-validation. In the one-time split method, the data set is randomly divided into two subsets at a predetermined ratio. In k-fold cross-validation, the data set is randomly divided into k equal parts; the model is trained on k-1 parts, and the remaining part is used for validation or testing. This process is repeated so that each part is used for validation exactly once.

Over-fitting is an issue in machine learning where a model learns the details and noise in the training data to an extent that adversely affects its performance on previously unseen data. Regularized regression methods play a crucial role in addressing over-fitting, especially for models that perform excellently on training data but fail on new, previously unseen data. Techniques such as Ridge regression, the Least Absolute Shrinkage and Selection Operator (LASSO), Smoothly Clipped Absolute Deviation (SCAD), and the Minimax Concave Penalty (MCP) hold significant places in model training. By penalizing the coefficients of features, these methods help reduce over-fitting and encourage the development of simpler models, which are more likely to generalize to new data sets. The penalty implemented by Ridge regression is proportional to the sum of the squares of the coefficients, which shrinks their effect while retaining all features in the model but does not eliminate any feature completely. LASSO aims both to shrink the regression coefficients and to remove insignificant features from the model; it employs the sum of the absolute values of the coefficients as the penalty term, zeroing out the coefficients of insignificant features and thereby automatically performing feature selection. SCAD applies a penalty similar to that of LASSO to small coefficients but avoids penalizing large coefficients, allowing the model to retain large coefficients that are significantly different from zero. MCP is a method developed for variable selection in high-dimensional data; it offers a non-convex penalty mechanism and promotes sparse solutions while penalizing large coefficient values with less bias, thus affecting large coefficients differently than Ridge.

In this thesis, we propose an optimization-based algorithmic data splitting method for effectively selecting the training and validation sets. The proposed method systematically assigns data points to the training or validation set based on their contribution to the performance of the model: if the contribution of a data point is high, it is placed in the training set; if low, in the validation set. The proposed approach is tested on various regression models using the Ridge, LASSO, SCAD, and MCP penalties.
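Since the abstract does not spell out the assignment rule, the following Python sketch is only a rough approximation of the idea under stated assumptions: each point's "contribution" is proxied here by its out-of-fold residual under a preliminary Ridge model, high-contribution points go to training, and the rest to validation. The residual proxy, the 80/20 ratios, and the synthetic data are assumptions of this sketch, not the optimization formulation used in the thesis.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict, train_test_split

# Synthetic data standing in for the data sets used in the experiments.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + 0.1*rng.normal(size=500)

# Hold out a test set first (80/20, as in the scenarios described below).
X_tv, X_test, y_tv, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Proxy for each point's contribution: its out-of-fold residual under a
# preliminary Ridge model (an assumption of this sketch, not the thesis criterion).
resid = np.abs(y_tv - cross_val_predict(Ridge(alpha=1.0), X_tv, y_tv, cv=5))

# High-contribution points go to the training set, the rest to validation.
order = np.argsort(-resid)
n_train = int(0.8*len(y_tv))
train_idx, val_idx = order[:n_train], order[n_train:]

model = Ridge(alpha=1.0).fit(X_tv[train_idx], y_tv[train_idx])
val_mse  = np.mean((model.predict(X_tv[val_idx]) - y_tv[val_idx])**2)
test_mse = np.mean((model.predict(X_test) - y_test)**2)
print(f"validation MSE: {val_mse:.4f}, test MSE: {test_mse:.4f}")
```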
The approach is compared with traditional data splitting techniques, namely the one-time split method and k-fold cross-validation, applied to two different data sets using various evaluation metrics. Each data splitting scenario is repeated one thousand times to ensure the consistency of the results and to obtain statistically reliable outcomes. The evaluation metrics include the runtime, the average value and the standard deviation of the regularization parameter lambda, the prediction errors on the validation, training, and test sets, the average coefficients, and the standard deviation of the coefficients.

In the one-time split scenario, the data set is randomly divided so that 80% of the observations form the training and validation set and 20% form the test set; the training and validation sets are then split randomly at a predetermined ratio, models are constructed using these sets, and performance is measured. In the k-fold cross-validation scenario, the data set is randomly divided in the same 80/20 manner, and the training and validation set is then divided into k equal parts: k-1 parts are used for training while the remaining part is used for validation, the process being repeated k times with a different part serving as the validation set each time, after which the performance of the models is measured. In the scenario evaluating the optimization-based data splitting approach, the data set is again divided in the 80/20 manner; the training and validation sets are then split at a predetermined ratio according to the contribution of each data point to the performance of the model, models are built using these sets, and their performance is measured.

The findings obtained from the tests conducted over these scenarios are as follows. In terms of runtime, the proposed method, although requiring more time than a single random split, provides effective results in similar or less time than k-fold cross-validation; this becomes more pronounced when working with large data sets or complex models. Our method optimizes the data splitting process, balancing the cost in time while maximizing the accuracy and performance of the model. In terms of the average value of the regularization parameter lambda, the variability of lambda across scenarios indicates that regularization methods such as Ridge and SCAD significantly affect how the models fit the data; in the case of LASSO, low lambda values yield outcomes similar to those of unregularized regression models, suggesting a minimal regularization effect. Regarding the standard deviation of lambda, the proposed method reduces it, ensuring a more consistent fit of the model to the data; this reduction indicates an enhancement of the model's generalization ability. In terms of the prediction errors (MSE) evaluated per scenario, the proposed method maintains consistent MSE values across the validation, training, and test sets; notably, both k-fold cross-validation and the proposed optimization approach enhance the generalization capacity of the model and yield the lowest MSE values.
The results demonstrate that the proposed optimization-based data splitting method can produce models with prediction errors comparable to, and in some cases lower than, those developed using k-fold cross-validation. In terms of computational cost, the optimization-based method is also more advantageous than k-fold cross-validation. Furthermore, the resulting models exhibit significantly lower standard deviations in predictions, model coefficients, and hyperparameters, indicating a marked increase in model stability and suggesting that the proposed method can contribute to the development of more reliable and consistent machine learning models. These findings offer promising perspectives on the applicability and effectiveness of the method.