LEE - Physics Engineering - Master's Degree

Recent Submissions

Now showing 1 - 5 of 27
  • Item
    Development and application of imaging ellipsometry for optical characterization and defect analysis of advanced thin films
    (ITU Graduate School, 2025) Yıldız, Furkan ; Zayim, Esra ; Aşar, Muharrem ; 509221110 ; Physics Engineering
    This study is based on research conducted under the title "Development and Application of Imaging Ellipsometry for Optical Characterization and Defect Analysis of Advanced Thin Films". The research covers substrate materials such as GaN and SiC, which are critical for power electronics within the scope of the PowerElec project, as well as different materials in the ATMOC project. The main purpose of the study is to perform both defect analysis and determination of the optical constants of these materials, supported by measurements in the visible and infrared (IR) bands. Classical ellipsometry methods have various limitations, especially for defect analysis: low spatial resolution, a narrow angular measurement range, and complex sample preparation have left significant gaps in the literature in this field. Against this background, rather than treating Imaging Ellipsometry as an entirely new principle, this thesis applies its essential ideas to remedy specific shortcomings of traditional ellipsometry. Although traditional ellipsometry has long served as a highly effective and precise technique for measuring optical constants and thin-film thicknesses, it supplies only a macroscopic average of the reflected light, which restricts its usefulness in surface investigations where lateral inhomogeneities or microscale structural features matter. To overcome this limitation, Imaging Ellipsometry has proven to be a highly effective auxiliary technique, allowing spatially resolved surface and thin-film characterization. Here, unlike conventional dual-arm imaging ellipsometry systems, a compact and easy-to-operate imaging ellipsometry instrument was designed and built. The instrument combines optical microscopy principles with ellipsometric measurement, seeking to improve usability while maintaining high spatial resolution and clear imaging. Accordingly, it holds considerable potential for microscale surface defect analysis, as well as surface and bulk material characterization. The system consists of optical components such as a single-wavelength tunable laser source (633 nm, 612 nm, 604 nm, 594 nm and 543 nm) and other light sources, motorized compensator and analyzer units, a high-NA (0.75) Nikon Plan Fluorite objective, a beam splitter in the light path, and a Retiga-R6 CCD camera. Data from the optical system are processed with a high-performance analysis infrastructure. In this context, CUDA-based software was developed to increase parallel processing capacity and obtain faster analysis results. Both the system control and the data analysis software were written in C++, a choice that provided a performance-oriented and efficient infrastructure. Empirical analysis demonstrates that the Imaging Ellipsometer can be applied effectively to defect analysis and to determining the optical constants of materials such as GaN and SiC. Moreover, the system offers higher accuracy and resolution than the conventional ellipsometry methods described in the published literature. The study thus makes significant contributions to materials science and power electronics through the creation of an innovative Imaging Ellipsometer. This research should serve as a useful resource for researchers, educators, and professionals seeking wider use of the technology in material-quality evaluation.
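
    The per-pixel reduction behind such a system can be sketched compactly. The following Python/NumPy fragment is a minimal illustration, not the thesis's C++/CUDA software: it assumes a rotating-analyzer configuration with a fixed polarizer angle P (an assumption; the thesis instrument also includes a compensator) and recovers the ellipsometric angles Psi and Delta for every camera pixel from a stack of frames taken at K analyzer angles. All array names, angles, and the stand-in data are hypothetical.

        import numpy as np

        def psi_delta_maps(frames, analyzer_angles_deg, polarizer_deg=45.0):
            """Per-pixel rotating-analyzer ellipsometry reduction (illustrative)."""
            A = np.deg2rad(np.asarray(analyzer_angles_deg))[:, None, None]
            I = np.asarray(frames, dtype=np.float64)          # shape (K, H, W)
            # Fourier coefficients of I(A) = I0 * (1 + a cos 2A + b sin 2A),
            # estimated by discrete averages over one analyzer period.
            I0 = I.mean(axis=0)
            a = 2.0 * (I * np.cos(2 * A)).mean(axis=0) / I0
            b = 2.0 * (I * np.sin(2 * A)).mean(axis=0) / I0
            # Standard rotating-analyzer inversion for polarizer angle P:
            # tan(Psi) = sqrt((1+a)/(1-a)) |tan P|,  cos(Delta) = b / sqrt(1-a^2).
            P = np.deg2rad(polarizer_deg)
            psi = np.arctan(np.sqrt((1 + a) / (1 - a)) * np.abs(np.tan(P)))
            delta = np.arccos(np.clip(b / np.sqrt(1 - a**2), -1.0, 1.0))
            return np.rad2deg(psi), np.rad2deg(delta)

        # Hypothetical usage: 18 frames over one half-turn of the analyzer.
        angles = np.linspace(0.0, 180.0, 18, endpoint=False)
        frames = np.random.default_rng(0).uniform(0.4, 1.0, (18, 64, 64))  # stand-in
        psi_map, delta_map = psi_delta_maps(frames, angles)

    Because every pixel is independent, this reduction is embarrassingly parallel, which is the kind of workload that maps naturally onto the CUDA kernels the abstract mentions.
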
  • Item
    Design of a microprocessor based multi channel analyser for use in nuclear spectroscopy
    (ITU Graduate School, 2025) Çoşkun, Cebrail ; Özben, Cenap Şahabettin ; 509231125 ; Physics Engineering
    Nuclear spectroscopy is the branch of science concerned with the voltage (energy) distributions of the signals produced when radiation from radioisotopes or particle beams interacts with detector materials. Multi-Channel Analyzers (MCAs), indispensable components of nuclear spectroscopy, play a critical role in this context. Compared with their historical predecessors, single-channel analyzers, MCAs offer the advantage of recording the energy distribution of radiation across many channels simultaneously, making them essential in nuclear spectroscopy. These systems are employed in a wide range of applications, from environmental radiation monitoring to experimental nuclear physics, and today are particularly common in the identification of radionuclide signatures. Continuous advances in microcontroller technology have paved the way for more compact, cost-effective, and versatile MCA designs. Through the integration of advanced hardware components with flexible software solutions, modern MCAs can deliver high performance without bulky equipment or extensive peripheral electronics. This master's thesis focuses on the development of a microcontroller-based MCA for nuclear spectroscopy, combining hardware efficiency with software capabilities to provide precise, real-time spectral analysis. The prototype MCA is based on an STM32F405 microcontroller featuring a high-speed, 12-bit analog-to-digital converter (ADC). Spectral data collected within a defined voltage range and counting interval are stored in a 4096-channel array and transmitted to a personal computer over a USB interface using the RS-232 protocol. Software for the STM32 microcontroller was developed in C to efficiently control the data acquisition and communication processes. On the computer side, all software was developed in Python; a PyQt5-based graphical user interface (GUI), together with the pyqtgraph library, enables real-time data processing, analysis, and visualization. The system offers dynamic resolution adjustment through software-based processing, allowing analysis at lower resolutions (e.g., 8-bit or 10-bit) according to user needs. Data can be displayed by channel number or by energy value. Channel-to-energy conversion (in other words, calibration) is performed by linear regression on user-defined reference points. Key features include Region of Interest (ROI) analysis for detailed examination of specific energy ranges, logarithmic and linear scaling options on the count axis, and tools for saving spectral data and reloading previously stored datasets for further analysis. These features enhance the system's flexibility and usability. The MCA's real-time data acquisition is supported by robust error-management mechanisms developed to ensure data integrity during continuous operation. Additionally, LED indicators for system status and a dual-window visualization feature (especially useful for comparing ROI data) enrich the user experience. Test measurements demonstrate that the developed MCA provides accurate, efficient, and user-friendly nuclear spectroscopy, making it a suitable tool for education, research, and practical applications in nuclear physics. Its adaptability and performance underscore its potential as a reliable analysis instrument in both academic and professional settings.
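
    Two of the software features described above, linear channel-to-energy calibration and ROI analysis, are simple to express in the same Python/NumPy environment the GUI uses. This is a minimal sketch under stated assumptions, not the thesis code; the reference channels and energies are hypothetical illustrative values.

        import numpy as np

        N_CHANNELS = 4096  # matches the 12-bit ADC channel array

        # Hypothetical user-defined calibration points: (channel, energy in keV).
        ref_channels = np.array([662.0, 2505.0])
        ref_energies = np.array([661.7, 2505.7])

        # Linear regression channel -> energy, as the abstract describes.
        slope, intercept = np.polyfit(ref_channels, ref_energies, 1)

        def channel_to_energy(ch):
            return slope * ch + intercept

        def roi_counts(spectrum, lo, hi):
            """Gross and net counts in the channel window [lo, hi]."""
            gross = spectrum[lo:hi + 1].sum()
            # Simple linear background estimated from the window edges.
            background = 0.5 * (spectrum[lo] + spectrum[hi]) * (hi - lo + 1)
            return gross, gross - background

        spectrum = np.random.default_rng(1).poisson(5.0, N_CHANNELS)  # stand-in data
        print(channel_to_energy(1000.0), roi_counts(spectrum, 600, 720))
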
  • Item
    Dynamo generation of neutron star magnetic fields
    (ITU Graduate School, 2025-06-25) Bakır, İrem ; Ekşi, Kazım Yavuz ; 509221114 ; Physics Engineering
    Neutron stars are very dense objects with radii of 10-12 km and masses of a few solar masses. Like most celestial bodies they are magnetized, with dipole (poloidal) field strengths of $B_{\rm p} \sim 10^{12}$ G and toroidal field strengths of $B_\phi \sim 10^{14}$ G. Magnetars, on the other hand, are the most magnetized objects known in the Universe, with field strengths of $B_{\rm p} \sim 10^{14}$ G and $B_\phi \gtrsim 10^{15}$ G. The origin of neutron star magnetic fields thus became a subject of debate with the identification of magnetars. Two ideas are considered as possible origins. One is the fossil-field hypothesis, which states that neutron stars inherit their magnetic fields from their progenitors, since magnetic flux is conserved during core collapse. In this scenario there is no new field generation; the seed field grows as the radius shrinks, with $B \propto R^{-2}$. The other idea is a dynamo process, in which the magnetic fields of neutron stars are generated inside the proto-neutron star by fluid motions. Studies show, however, that the number of progenitors with sufficiently strong magnetic fields is much lower than the number of known magnetars (30). Consider a collapsing core with a radius of 3000 km and a magnetic field strength of approximately $5\times10^5$ G in the surrounding medium. After the collapse, flux conservation alone gives a proto-neutron star of radius 40 km a magnetic field of only $3\times 10^9$ G. When this proto-neutron star shrinks to a neutron star with a radius of 12 km, the neutron star inherits a field of $\sim10^{10}$ G by flux conservation. This is approximately two orders of magnitude smaller than the magnetic fields of standard neutron stars, although it is of the order of the dipole fields of central compact objects. Therefore, although flux conservation is widely accepted as the source of the magnetic fields in some populations, it cannot explain the fields of neutron stars in general, and especially not field strengths at magnetar levels. It thus became clear that another mechanism must generate fields at those levels. A dynamo process operating inside the proto-neutron star is now the most promising scenario for the generation of neutron star magnetic fields. The two main types of dynamo mechanism are the $\alpha^2$ and $\alpha-\Omega$ dynamos. Just after core collapse, hydrodynamic instabilities operate inside the star and create convective motions. In an $\alpha^2$ dynamo, the toroidal and poloidal fields generate each other through convective motions alone, which is called the $\alpha$-effect. Different parts of the star, however, rotate with different angular velocities; this well-known differential rotation plays a key role in generating strong magnetic fields in an $\alpha-\Omega$ dynamo. In this type of dynamo, while convective motions generate the poloidal field by lifting and twisting the toroidal field lines (the $\alpha$-effect), differential rotation generates the toroidal field by shearing the poloidal field lines (the $\Omega$-effect). Lacking the strong contribution of differential rotation, $\alpha^2$ dynamos generate relatively weak fields compared to $\alpha-\Omega$ dynamos. The $\alpha-\Omega$ dynamo is therefore the most widely accepted mechanism for the generation of magnetar fields, and studies demonstrate that field strengths of even $\gtrsim 10^{15}$ G can be achieved this way. In this study, field generation at neutron star levels is accordingly investigated with an $\alpha-\Omega$ dynamo. A 1-dimensional $\alpha-\Omega$ dynamo model, first proposed for white dwarf fields, is adapted to proto-neutron stars by adding the shrinkage of the radius, the corresponding loss of mass, and flux conservation. Moreover, two viscous processes are included in the model. One is the viscosity due to the magneto-rotational instability, a dynamical instability that arises in electrically conducting, differentially rotating fluids in the presence of a weak magnetic field; the turbulence it generates creates this type of viscosity. The other is the convective viscosity created by convective motions. Dynamo processes have been studied with several 2- and 3-dimensional models, but such models cannot be run with realistic parameters; with this 1-dimensional model, the dynamo process is examined with realistic parameters. The model equations are solved with the Runge-Kutta method, and the field components are seen to grow in time and saturate at the end of the dynamo process (approximately 50 s), as expected. Both saturation values are at magnetar levels. This study therefore demonstrates that magnetar fields can be generated by an $\alpha-\Omega$ dynamo operating inside a proto-neutron star. In addition, runs for proto-neutron stars with relatively long rotational periods demonstrate that magnetic fields at the levels of standard pulsars, high-field pulsars, and low-field magnetars can be reached for slow rotations. It is thus evident that the fast rotation of the proto-neutron star plays a key role in the dynamo generation of magnetar fields, consistent with studies indicating that slower rotation generates weaker fields. Moreover, the results show that while the poloidal fields of central compact objects ($B_{\rm p}\sim 10^{10}$ G) are inherited from the progenitor star by flux conservation, their toroidal fields are amplified by the $\Omega$-effect. This is an interesting result, indicating that central compact objects can experience a dynamo process in which the $\alpha$-effect is ineffective. Additionally, with this 1-dimensional model, the impact of individual parameters on the results is also investigated, which is not possible with 2- or 3-dimensional models.
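
    The flux-conservation estimate quoted above follows directly from $B \propto R^{-2}$ and the numbers given in the abstract:

        $$B_{\rm PNS} = B_{\rm core}\left(\frac{R_{\rm core}}{R_{\rm PNS}}\right)^{2}
            = 5\times10^{5}\,{\rm G}\left(\frac{3000\ {\rm km}}{40\ {\rm km}}\right)^{2}
            \approx 3\times10^{9}\,{\rm G},$$
        $$B_{\rm NS} = B_{\rm PNS}\left(\frac{R_{\rm PNS}}{R_{\rm NS}}\right)^{2}
            \approx 3\times10^{9}\,{\rm G}\left(\frac{40\ {\rm km}}{12\ {\rm km}}\right)^{2}
            \approx 3\times10^{10}\,{\rm G} \sim 10^{10}\,{\rm G}.$$

    The growth-and-saturation behavior of the coupled field components can also be illustrated with a toy mean-field loop, in which the $\Omega$-effect builds the toroidal field from the poloidal one, the $\alpha$-effect closes the cycle, and $\alpha$-quenching saturates the growth. The Python sketch below is a deliberately simplified illustration with made-up parameter values; it is not the 1-dimensional model of the thesis.

        import numpy as np

        # Toy alpha-Omega loop (illustrative only; all values are made up):
        #   dBp/dt   = alpha(Bphi) * Bphi / L - Bp / tau     (alpha-effect + decay)
        #   dBphi/dt = dOmega * Bp            - Bphi / tau   (Omega-effect + decay)
        # with alpha-quenching: alpha(Bphi) = alpha0 / (1 + (Bphi/Beq)^2).
        alpha0, L, dOmega, tau, Beq = 3.3e5, 1.0e6, 30.0, 1.0, 1.0e15

        def rhs(y):
            Bp, Bphi = y
            alpha = alpha0 / (1.0 + (Bphi / Beq) ** 2)
            return np.array([alpha * Bphi / L - Bp / tau,
                             dOmega * Bp - Bphi / tau])

        def rk4_step(y, h):
            # Classical fourth-order Runge-Kutta step.
            k1 = rhs(y); k2 = rhs(y + 0.5 * h * k1)
            k3 = rhs(y + 0.5 * h * k2); k4 = rhs(y + h * k3)
            return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

        y = np.array([1.0e10, 1.0e10])   # seed fields in G
        h, t_end = 1.0e-3, 50.0          # integrate to ~50 s, as in the abstract
        for _ in range(int(t_end / h)):
            y = rk4_step(y, h)
        print(f"Bp ~ {y[0]:.1e} G, Bphi ~ {y[1]:.1e} G at t = {t_end} s")

    With these made-up numbers the toy loop grows exponentially from the seed values and saturates near magnetar-level field strengths within the 50 s window, mirroring the qualitative growth-then-saturation behavior the abstract reports.
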
  • Item
    Investigation of thermal conduction in microcontacts created by indentation
    (Graduate School, 2022) Uluca, Ahmed ; Özer, Hakan Özgür ; 509191101 ; Physics Engineering Programme
    Thermal contact conduction has been investigated on many scales, for both practical and scientific motivations, in the literature. With the miniaturization of electronic devices, the demand for engineered interfaces that accurately manage contact mechanics and heat transfer is increasing. In this study, microcontacts created by indentation have been investigated through experimental, simulation, and analytical work. The spreading-resistance treatment of the disc constriction case has been extended to the highly plastic indentation microcontacts studied here. The microcontacts were created, and the conductance through them investigated, by indenting metallic surfaces with specially prepared diamond micro-particle indenters. Thermal measurements were made by mounting thin thermocouples on the diamond tips. The experimental setup is home-built from commercial piezo, motor, and DAQ components along with other miscellaneous devices; a PC user interface and an intermediary microcontroller unit were programmed to manage the experiments. To measure the resistance, we employed an oscillatory experimental procedure together with a lumped analysis of transient heat transfer. Applying oscillations at different indentation depths enabled us to extract the RC behavior, and hence the time constant, of microcontacts created by severe plastic deformation. Additionally, we could obtain an effective measure of the thermal diffusivity of the contact through the diamond tip by fitting the depth dependence of the time constant with the proposed modified constriction models. Moreover, to analyze and predict how the time constant changes with depth and load, several simulations and calculations were carried out. The effect of the increase in contact area with indenter penetration appears to be suppressed by the temperature gradient that develops along the tip-sample contact. With the help of the simulations, we also deduced that plasticity effects such as pile-up can be effective in improving the indentation contact for heat transfer. Consequently, for the first time, we applied the periodic contact procedure to the thermal contact of a single indentation micro-asperity. The periodic experimental procedure and the application of fin efficiency to spreading cases for a single microcontact are unique aspects of this work. Results with the diamond tip on three different metallic samples showed that the gradient along the indentation contact can be analyzed with fin solutions from the literature. The experimental results were well fitted by a unified function combining conic-fin and spreading-resistance expressions, and the fit parameters yield estimates of the conductivity and interface conductance. However, the present results are not sufficient to determine the contact and material parameters exactly, owing to the need for exact parameters in the transient analysis and to uncertainties in the properties of the tip and samples. With more precise thermal measurements and indenter systems, this experimental procedure may enable further advances and greater ease in investigating thermal contacts across many different materials and scales. In addition, for solid-state thermal interface material solutions, we conclude that investigating geometry optimization for pressure and heat transfer, as indicated in this thesis, would provide insight into the bottlenecks of contact heat transfer. Specifically, the formation of a gradient and its effect on the overall contact heat transfer should be taken into account for indentation contacts when improving the contact through plasticity.
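
    The lumped transient analysis mentioned above reduces each thermal response to a single time constant tau = R*C, with R the total thermal resistance through the contact and C the relevant heat capacity. As a sketch of how tau can be extracted from a thermocouple transient, here is a minimal curve fit; the data are synthetic stand-ins and the function names are ours, not the thesis software's.

        import numpy as np
        from scipy.optimize import curve_fit

        def lumped_response(t, T_inf, dT, tau):
            """First-order lumped-capacitance response: T(t) = T_inf + dT*exp(-t/tau)."""
            return T_inf + dT * np.exp(-t / tau)

        # Synthetic stand-in for a measured transient (true tau = 0.8 s, plus noise).
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 5.0, 400)
        T = lumped_response(t, 25.0, 3.0, 0.8) + rng.normal(0.0, 0.02, t.size)

        popt, pcov = curve_fit(lumped_response, t, T, p0=(24.0, 2.0, 1.0))
        T_inf, dT, tau = popt
        print(f"fitted tau = {tau:.3f} s")  # tau = R*C in the lumped model

    Repeating such a fit at a series of indentation depths yields the depth dependence of the time constant, which is then compared against the modified constriction (conic-fin plus spreading-resistance) models.
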
  • Item
    Newtonian perturbation theory in cosmology: From inflation to large-scale structure
    (Graduate School, 2025-01-28) Kinsiz, Rumeysa ; Arapoğlu, A. Savaş ; 509211113 ; Physics Engineering
    Cosmology is the scientific study of the physical characteristics of the universe, its beginning, development, and organization, based on observational results and theoretical foundations. The Lambda-CDM model is currently one of the most widely accepted models in cosmology. It describes the behavior of the cosmos in terms of dark matter and dark energy, where the cosmological constant (dark energy) is an energy density used to describe the accelerating expansion of the universe. In this model, cold dark matter and dark energy contribute most of the total mass-energy density of the universe: dark matter shapes the dynamics of galaxies and large-scale structures, while dark energy drives the accelerated expansion. Outstanding problems, however, led to the formulation of inflation theory. Inflation is a convincing paradigm that resolves fundamental puzzles such as the flatness problem and the horizon problem, which ask why the universe appears nearly flat and why distant regions show similar properties. The inflation hypothesis argues that the universe underwent rapid expansion in its formative period, which smoothed out initial anomalies and established the initial conditions for the universe we observe today. Numerous mathematical models have been introduced to develop inflation theory, including scalar field inflation, Starobinsky inflation, and Higgs inflation, which describe the dynamics of the early expansion and the transformation of primordial perturbations into extended cosmic structures. Observational evidence from the early cosmos is also needed to test these theoretical hypotheses; the cosmic microwave background (CMB) and large-scale structure (LSS) are two of the most critical probes. The CMB records the conditions shortly after the Big Bang and gives us a view of what the early universe was like, while LSS refers to the overall arrangement of galaxies and matter throughout cosmic history. Understanding these structures requires considering both their observation and the processes by which they form. The growth of cosmic structures is driven mainly by gravitational collapse, which amplifies small density perturbations in the early universe. This process can be understood using Newtonian perturbation theory, a useful approach to describing how early anisotropies evolve into the large-scale structures we see today. The concepts of the Jeans length, growth function, transfer function, and power spectrum are useful tools for studying the evolution of structures and the distribution of matter, and for generating theoretical predictions to compare with observational data. The examination of nonlinear evolution, however, shows that the formation of structures has a more complex background, and several theoretical tools have been used to analyze it. The spherical collapse model elucidates how overdense regions evolve into stable entities such as galaxies and galaxy clusters, whereas the concept of virialization describes the equilibrium state of these structures, especially dark matter halos. Moreover, the Press-Schechter theory offers a statistical framework for describing the formation of cosmic structures, providing an analytical approach to the mass distribution of collapsed objects. The mass function predicts the abundance of structures across various masses, whereas biasing describes the correlation between observable galaxies and the underlying density field. Comprehending the genesis and evolution of the universe therefore requires a comprehensive methodology that integrates theoretical, observational, and statistical analyses. Newtonian perturbation theory is a crucial instrument for examining large-scale structures, and its validity is corroborated by observations and simulations.
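
    For reference, the standard Newtonian-perturbation-theory quantities named in the abstract can be written compactly; these are the textbook forms, not results specific to this thesis. The Jeans length separates pressure-supported from gravitationally collapsing scales, the linear growth equation governs the density contrast $\delta = (\rho - \bar{\rho})/\bar{\rho}$, and the Press-Schechter mass function gives the abundance of collapsed halos:

        $$\lambda_{\rm J} = c_s \sqrt{\frac{\pi}{G\bar{\rho}}}, \qquad
          \ddot{\delta} + 2H\dot{\delta} - 4\pi G \bar{\rho}\,\delta = 0,$$
        $$\frac{dn}{dM} = \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^{2}}\,
          \frac{\delta_c}{\sigma(M)} \left|\frac{d\ln\sigma}{d\ln M}\right|
          \exp\!\left(-\frac{\delta_c^{2}}{2\sigma^{2}(M)}\right),$$

    where $H$ is the Hubble rate, $\sigma(M)$ is the variance of the smoothed density field, and $\delta_c \simeq 1.686$ is the critical overdensity for spherical collapse.
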