LEE - Telecommunications Engineering - Master's Degree

Recent Submissions

Now showing 1 - 5 of 43
  • Item
    Power amplifier assisted ISAC
    (Graduate School, 2024-11-28) Akca, Hüseyin ; Çırpan, Hakan Ali ; 504191321 ; Telecommunication Engineering
    Integrated sensing and communication (ISAC) has emerged as a critical technology for next-generation communication systems. Particularly, in the context of Institute of Electrical and Electronics Engineers (IEEE) standards and fifth-generation (5G) and beyond standardization efforts, ISAC has established itself as a significant area of focus in both academia and industry. Given its importance to both the communication and radar communities, it is inevitable that research in the ISAC domain will continue to progress. Despite the similarities in the hardware of radar and communication systems, the two functions generally have distinct objectives. This distinction often results in trade-offs, as similar hardware components are used for different purposes with different waveform requirements. Addressing these trade-offs to enhance the joint performance of both functions introduces various challenges. This thesis focuses on the orthogonal frequency division multiplexing (OFDM) waveform. OFDM has already demonstrated its functionality in almost all communication standards and is expected to be indispensable in next-generation communication systems. Consequently, improving the performance of ISAC systems integrated with such networks necessitates optimizing their operation on the OFDM waveform. The first study examines the impact of radio frequency (RF) front-end components on ISAC performance. The effects of RF front-end components, such as power amplifiers (PAs), on communication performance are well documented in the literature when using OFDM waveforms. Numerous studies have addressed the performance degradation caused by nonlinearities introduced by PAs. However, the impact of PA behaviour on sensing performance has received relatively little attention. This gap in the literature likely stems from the fact that traditional radar systems typically employ constant-envelope waveforms. Since OFDM lacks a constant-envelope structure, this study evaluates the influence of PA nonlinearities on sensing performance. To mitigate potential performance degradation, the study proposes extracting a PA model and incorporating it into the baseband reference signal. The second study considers an ISAC system employing OFDM waveforms, where the transmitter and receiver are distinct entities. The goal is to develop a more accurate channel estimation algorithm, impairment-aided channel state information (IA-CSI), that considers the distortions introduced by the PA. This approach aims to improve both sensing and communication performance. It is anticipated that ISAC-related research in both academia and industry will continue to accelerate, contributing to advancements in next-generation communication systems.
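    As a rough illustration of the first study's premise, the sketch below passes a high-PAPR OFDM symbol through a memoryless Rapp solid-state PA model and compares matched-filter range profiles computed with the ideal (undistorted) reference against a PA-aware reference that includes the distortion. The Rapp model, all parameter values, and the single point-target scenario are illustrative assumptions, not the thesis's actual PA characterization or ISAC receiver.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # OFDM symbol with high peak-to-average power ratio (illustrative parameters)
    n_sc = 256                                 # number of subcarriers
    qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
    x = np.fft.ifft(qpsk) * np.sqrt(n_sc)      # unit-power time-domain OFDM symbol

    def rapp_pa(s, sat=0.5, p=2.0):
        """Memoryless Rapp (SSPA) AM/AM model: compresses amplitudes near saturation."""
        a = np.abs(s)
        return s / (1 + (a / sat) ** (2 * p)) ** (1 / (2 * p))

    tx = rapp_pa(x)                            # PA-distorted transmit signal

    # Single point target: circular delay of 40 samples plus additive noise
    delay = 40
    rx = np.roll(tx, delay) + 0.01 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))

    def range_profile(rx, ref):
        """Matched-filter range profile via circular cross-correlation."""
        return np.abs(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref))))

    for name, ref in (("ideal reference   ", x), ("PA-aware reference", tx)):
        prof = range_profile(rx, ref)
        print(f"{name}: peak at sample {prof.argmax()}, peak/median sidelobe = {prof.max() / np.median(prof):.1f}")
    ```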
  • Item
    Delay violation probability analysis for URLLC multi-user systems for finite blocklength regime
    (Graduate School, 2024-07-30) Öztürk Turfan, Hilal ; Çırpan, Hakan Ali ; 504201324 ; Telecommunications Engineering
    The need for better communication is increasing day by day, and consumer demands are shaping the development of mobile wireless networks. 5G mobile technologies promise to transfer huge amounts of data much faster. In wireless network systems with multiple users, minimizing delay violation probability is a critical challenge for ensuring a high-quality user experience. This study introduces a framework designed for the assessment of delay violation probabilities in multi-user downlink ultra-reliable low-latency communication (URLLC) systems. Emphasizing the significance of employing a finite blocklength (FBL) transmission model to account for transmission errors, our proposed framework assumes infinite-length user queues at the base station, and delay violation probabilities are reported with respect to predefined target delays. To shed light on crucial trade-offs and performance limits, we evaluate and compare the delay violation performances of three well-known user scheduling algorithms, namely round-robin (RR), maximum channel quality indicator (MaxCQI), and longest queue (LQ), under various network conditions. Simulation results show that selecting an appropriate scheduling algorithm is critical to meet challenging target delay violation probabilities.
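    As a rough companion to this framework, the sketch below combines the normal-approximation finite blocklength rate with a toy round-robin downlink queue simulation over Rayleigh fading and reports an empirical delay violation probability. The arrival rate, packet size, blocklength, SNR statistics, and target delay are illustrative assumptions, and decoding errors are folded into the rate penalty only, so this is not the thesis's exact system model or scheduler comparison.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    def fbl_rate(snr, n, eps):
        """Normal-approximation achievable rate (bits/channel use) at blocklength n and error prob eps."""
        C = np.log2(1 + snr)
        V = 1 - 1 / (1 + snr) ** 2
        return max(C - np.sqrt(V / n) * norm.ppf(1 - eps) * np.log2(np.e), 0.0)

    # Toy downlink: K users, one user scheduled per slot (round-robin), infinite queues
    K, slots, n_block, eps = 4, 50_000, 200, 1e-5
    arrival_p, pkt_bits, target_delay = 0.2, 256, 8          # illustrative values
    queues = [[] for _ in range(K)]                          # arrival slot of each queued packet
    delays = []

    for t in range(slots):
        for k in range(K):                                   # Bernoulli packet arrivals
            if rng.random() < arrival_p:
                queues[k].append(t)
        k = t % K                                            # round-robin scheduling decision
        snr = rng.exponential(10.0)                          # Rayleigh fading, mean SNR of 10 (linear)
        bits = fbl_rate(snr, n_block, eps) * n_block         # bits deliverable in this slot
        while queues[k] and bits >= pkt_bits:                # serve head-of-line packets
            delays.append(t - queues[k].pop(0))
            bits -= pkt_bits

    delays = np.array(delays)
    print("empirical P(delay > target):", np.mean(delays > target_delay))
    ```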
  • Item
    Vector-driven: A new projection and backprojection algorithm based on vector mapping
    (Graduate School, 2024-08-19) Türker, İsmail Melik ; Yıldırım, İsa ; 504211319 ; Telecommunications Engineering
    X-ray imaging is a technique used to visualize the internal structures of objects, primarily employed in medical diagnostics. X-rays, which are electromagnetic waves with wavelengths ranging from 10 nanometers to 10 picometers, penetrate objects and are absorbed by varying internal densities. This differential absorption allows detectors to capture and measure the X-rays' intensities, creating images that reveal internal structures. A major advantage of X-ray imaging is the ability to observe internal structures without invasive procedures. However, radiation exposure poses significant risks, especially in medical applications, requiring a careful balance between radiation dose and image quality. This study focuses on the medical applications of X-ray imaging, particularly the need for multiple projections to achieve 3D reconstructions from 2D projections. The trade-off between image quality and radiation exposure requires careful parameter optimization. Image reconstruction transforms 2D projections into 3D images, a process involving forward projection (3D to 2D transformation) and backprojection (2D to 3D transformation). The body is modeled as an array of attenuation coefficients, which absorb X-rays to varying degrees. The reconstruction process involves calculating line integrals on the image to fill detector cells, storing projections in a sinogram for further processing. Two main domains are used: the image domain (representing the reconstructed image) and the sinogram domain (used for filtering and analyzing projection quality). Projection and backprojection are critical operations in image reconstruction, with three main approaches: pixel-driven (PD), ray-driven (RD), and distance-driven (DD). Each approach models the geometrical structure of X-ray imaging differently. PD calculates X-ray beams passing through pixel centers to detector cells, RD attaches X-rays to detector cell centers and evaluates contributions along the X-ray trace, and DD maps pixel and detector cell boundaries onto a common axis to calculate overlapping regions. DD is the state of the art for projection and backprojection. However, the DD algorithm faces problems like index mismatching and non-rectangular overlapping when the detector is tilted. These issues significantly limit DD's applicability, necessitating new solutions. This study proposes a new algorithm called vector-driven (VD) to address the tilted detector problem, non-rectangular overlapping, and index matching issues in DD. The algorithm involves three stages: mapping, redefining boundaries, and calculating overlapping regions. The first stage projects voxels directly onto the detector plane, eliminating distortions caused by mapping onto a common axis. The second stage interpolates non-rectangular voxel distributions onto a regular grid, simplifying overlap calculations. The third stage uses standard overlap calculation methods after redefining boundaries. This approach enhances robustness against geometrical limitations, making the reconstruction system more flexible and suitable for custom designs. The proposed algorithm also facilitates parallelization, improving computational efficiency. To examine the proposed model, we developed a tomographic imaging toolbox using Python 3.8.0 to simulate X-ray imaging. The system is organized into modules following object-oriented programming principles, categorized by fundamental stages of tomographic imaging.
    The setup includes components like the beam source, detector, data object, projector, backprojector, and reconstructor. Each module corresponds to real or imaginary entities linked by attributes and methods. Several experiments compared the efficiency of the proposed method against DD and branchless distance-driven (BDD). The proposed vector-driven algorithm proved robust against rotations, capturing projections without causing artifacts. However, DD and BDD exhibited significant distortions, particularly with detector rotations around the y and z axes. The VD algorithm's robustness makes it more compatible with flexible digital breast tomosynthesis (DBT) applications. Timing is a less critical criterion than accuracy, but it is important for handling high-resolution images and the extensive computations required by AI applications. The proposed algorithm, while not the fastest, performed acceptably and can be further optimized. The proposed vector-driven algorithm outperformed DD and BDD in handling rotations and geometrical distortions, making it a robust and feasible solution for projection and backprojection operations. Future improvements could include parallelization, different interpolators, and new filtering topologies to enhance performance and flexibility further.
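    The stage all of these methods share is the final overlap computation: once pixel (or voxel) boundaries and detector cell boundaries have been mapped onto a common axis (DD) or onto the detector plane and a regular grid (VD), each pixel's value is spread over detector cells in proportion to the overlapping interval lengths. The sketch below isolates that 1-D overlap kernel; the mapping and boundary-redefinition stages, the imaging geometry, and the toolbox's module structure are all omitted, so it is a simplified illustration rather than the thesis's implementation.

    ```python
    import numpy as np

    def overlap_project(pixel_edges, pixel_vals, det_edges):
        """
        Distribute pixel values into detector cells in proportion to the overlap of their
        (already mapped) 1-D boundary intervals -- the final stage of a distance-driven
        style forward projection. Both edge arrays must be sorted in increasing order.
        """
        det = np.zeros(len(det_edges) - 1)
        i = j = 0
        while i < len(pixel_edges) - 1 and j < len(det_edges) - 1:
            lo = max(pixel_edges[i], det_edges[j])
            hi = min(pixel_edges[i + 1], det_edges[j + 1])
            if hi > lo:                                     # non-empty overlap
                det[j] += pixel_vals[i] * (hi - lo) / (pixel_edges[i + 1] - pixel_edges[i])
            if pixel_edges[i + 1] < det_edges[j + 1]:       # advance whichever interval ends first
                i += 1
            else:
                j += 1
        return det

    # Three unit-width pixels mapped onto a detector with four cells of width 0.75
    print(overlap_project(np.array([0.0, 1.0, 2.0, 3.0]),
                          np.array([1.0, 2.0, 3.0]),
                          np.array([0.0, 0.75, 1.5, 2.25, 3.0])))
    # -> [0.75 1.25 1.75 2.25]; the total (6.0) equals the sum of the pixel values
    ```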
  • Item
    Time difference of arrival based passive sensing and positioning system integrated into moving platforms
    (Graduate School, 2024-07-12) Çelebi, Burak Ahmet ; Akıncı, Mehmet Nuri ; 504211305 ; Telecommunication Engineering
    The ability to locate the source of information has been a fundamental need for humanity since its earliest days. This necessity even explains why living beings have evolved to have two ears: to determine the position of a sound source, which is a form of information. As sound propagates, it reaches our ears at different distances, creating an interaural time difference (ITD). The brain uses this time difference, along with the intensity of the sound, to estimate the location of the sound source. This thesis explores the principle of localization using ITD, which has existed in nature for millions of years, from the perspective of a communications engineer. Specifically, it examines the application of this principle to locate a source emitting radio frequencies. There are various methods for locating a signal source. These methods primarily rely on the signal's strength, arrival time, frequency, phase, or a combination of these factors to determine the position. The strength of the incoming signal depends directly on the terrain, the signal's frequency, and its output power. Since these variables are often unpredictable, using signal strength for positioning usually yields low-accuracy results. Measuring the signal's phase requires multiple antennas and RF stages, which can only estimate the target's angle, not its precise location. However, by determining the arrival times of signals received by multiple receivers and analyzing the time differences, the position of the target can be estimated. To solve for the unknown x, y, and z coordinates in a 3D space using only time difference of arrival (TDOA) information, at least three equations are necessary. These equations provide the minimum amount of information required for position determination. However, obtaining three linearly independent TDOA equations necessitates a minimum of four receivers. Among these receivers, one is designated as the reference, and the time differences between this reference and the other three receivers are used to create linearly independent equations. These equations are then utilized to determine the target's position. However, because the equations are typically nonlinear, achieving a quick and highly accurate solution is not always straightforward. Additionally, factors such as hardware imperfections and noise can prevent a clear solution to the equation system. Various methods can be employed to address these challenges and improve the accuracy of the results. This study compares algebraic methods such as Least Squares (LS) and heuristic methods like Particle Swarm Optimization (PSO) for signal source localization. LS methods solve the system of equations directly to estimate the target position, while PSO methods optimize a target function to find the best location. Heuristic methods, including PSO, can yield effective results even with nonlinear equations or in noisy environments. In this research, we utilized a variant of the PSO algorithm known as the Firefly Algorithm. The Firefly Algorithm begins by distributing fireflies randomly across a cost function map. The fireflies move towards the solution with the lowest cost, switching to the new best fireflies as lower-cost solutions are found. This approach is advantageous for several reasons: it can exploit an arbitrary number of TDOA measurements rather than just three equations, it minimizes the likelihood of getting stuck in local minima on the cost map, and it achieves high-accuracy localization.
    Although the Firefly Algorithm requires more computational power compared to algebraic solutions, modern computers can handle this demand effectively. While signal source localization in a 3D environment using time difference of arrival (TDOA) information has often been tested with a 4-receiver system model, successful localization can also be achieved with different system models. Unlike traditional methods, where TDOA data is collected simultaneously from fixed receivers, we propose a system where two receivers are moved to collect TDOA data at different time instances, followed by localization using the collected data. Practical issues encountered with such a system model were investigated through simulation and measurement setups. One challenge was accurately estimating the time differences of arrival of signals received by the receivers. Because the received signals vary slowly in time, the time-difference estimates are sensitive to noise. Another potential problem arises when the sampling frequency of the system is lower than the signal's bandwidth, causing the cross-correlation of received signals to not yield peak values at the delay samples, making time differences difficult to discern. To address this, we decided to operate the system with the highest possible sampling rate when the bandwidth of the target signal is unknown. To ensure reliable signal sampling, both receivers are synchronized to the same frequency and time using a GPS-disciplined oscillator. Furthermore, the 1 pulse per second (PPS) signal from GPS is used for time synchronization. Apart from these technical considerations, the trajectory of the receiver stations plays a crucial role in system performance. As the distance between the target and the receivers increases, so does the distance the receivers need to cover for accurate localization. Additionally, ensuring a high-reliability and high-capacity communication network between receiver stations and the base unit is crucial during system implementation. Without this network, communication disruptions between the receivers and the base station would prevent TDOA data collection and, consequently, localization algorithms from functioning. Lastly, challenges were observed when there are multiple sources emitting signals at the same frequency or when environmental factors cause signal reflections and changes in direction, affecting TDOA measurements by the receivers. Before finalizing the system setup, creating a realistic simulation environment is crucial. In the fourth section, we introduced a simulation environment designed in MATLAB to anticipate potential scenarios before the measurement setup. The simulation environment was designed to be consistent with real measurements, including terrain features using the WGS84 geolocation method. Since the system was anticipated to be tested in the TÜBİTAK-BİLGEM Gebze campus area, tests in the simulation environment were conducted accordingly. After creating the simulation environment, the first test was to examine the hyperbolas formed when different receiver paths were created. It was observed that when the target was far from the receiver paths, the hyperbolas intersected each other over a wide area. Conversely, when a receiver rotated around the target, the hyperbolas intersected each other from all directions within a small area. A small intersection area of hyperbolas is crucial for the successful operation of localization algorithms like the firefly algorithm.
    Secondly, the formation of hyperbolas was observed when the bandwidth of the target signal and the system's sampling frequency were reduced. At low sampling frequencies, the resolution of the hyperbolas was significantly reduced, spreading over a very small space. When the bandwidth dropped below a few hundred MHz, the hyperbolas generally did not pass near the target. As the sampling frequency and bandwidth increased, the hyperbolas gradually approached the target and began to intersect over it. In the third simulation, a cost function was generated for the firefly algorithm, and costs in the solution space for different receiver paths were examined. As expected, as the target moved away from the receivers, the slope of the cost function around the target decreased, allowing wider areas to be estimated as solutions. Based on these simulations, two suitable options for receiver paths were identified for the TÜBİTAK-BİLGEM Gebze campus. Finally, the average error in position determination was investigated for different sampling frequencies of the sampled target signal for the two identified paths using the firefly algorithm. As expected, errors were significantly higher at low sampling frequencies, decreasing as the sampling frequency increased. The fifth chapter illustrates the measurement setup devised from the insights gleaned from simulations, deductions, and experiences presented thus far in the thesis. It starts by delineating the hardware and software of the ground station, followed by those of the receiver units, and then narrates the measurements encompassing various scenarios. The ground station hardware comprises a simple setup, consisting of a powerful computer and a modem supporting Wi-Fi 6 for communication with the receiver unit. The user interface software enables control of the receiver units from the ground station, allowing adjustments to frequency, bandwidth, and gain configurations. Additionally, signals received by each unit can be individually represented in time and frequency space for adjusting gains to account for signal visibility variations. Moreover, the interface facilitates data collection, time difference calculation, and execution of the firefly algorithm, with results visualized on a map. The hardware design of the receiver unit has been the most multidisciplinary aspect, given its need for lightweight deployment on UAVs while meeting power requirements throughout the flight. The design considerations for the receiver unit include power needs for communication, computation, and GPS, with antennas strategically positioned on the UAV. Extensive efforts resulted in reducing the system weight to below 3 kilograms when integrated with the protective casing. The software running on the receiver unit operates at a lower level compared to that on the ground station, directly transferring GPS-derived position, time, and frequency information to the SDR and computer hardware, then transmitting it to the ground station via Wi-Fi 6 using the SSH interface. At the TÜBİTAK-BİLGEM Gebze campus test site, various measurements were conducted to evaluate the performance of the system hardware and software. Two receivers were mounted on drone and ground vehicle setups for different test scenarios. Different signal types and receiver paths were tested. Initially, to assess system performance under optimal conditions, a 20 MHz bandwidth high-autocorrelation M-sequence signal was transmitted from a vector signal generator, and the system attempted to locate it with drones.
    Subsequently, a 20 MHz bandwidth LTE downlink signal was examined. In the third measurement, the focus shifted to existing real LTE signal sources after discontinuing the use of the signal generator. The fourth measurement pushed the system's boundaries by utilizing a ground vehicle and a stationary receiver to locate a narrowband and intermittently available LTE uplink signal. The system performed better than expected, locating the LTE uplink signal source with a 12-meter margin of error. In the final measurement, a pulse-modulated radar was positioned to test the system's applicability in military settings. In conclusion, this thesis demonstrated the integration of two RF receivers utilizing the TDOA principle onto drones. Simulation environments were initially created to examine system performance, followed by the implementation of the system and the localization of various signal sources. These efforts illustrated that the localization accuracy varies based on the type of radio signals emitted and the trajectories followed by the drones. Moreover, the feasibility of performing localization by placing TDOA-based receivers on moving units was established.
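    To make the processing chain concrete, the sketch below builds range-difference (TDOA × c) residuals for one reference receiver and several positions visited by a moving receiver, then minimizes the resulting cost surface with a minimal firefly-style search. The 2-D geometry, noise level, and all firefly parameters are hypothetical illustrations; the thesis's system additionally estimates the TDOAs themselves from cross-correlation of the sampled signals, which is omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    c = 3e8                                            # propagation speed (m/s)

    # Hypothetical 2-D geometry: a reference receiver plus positions visited by a moving receiver
    rx_ref = np.array([0.0, 0.0])
    rx_pos = np.array([[300.0, 0.0], [250.0, 200.0], [0.0, 350.0], [-200.0, 250.0]])
    target = np.array([120.0, 480.0])                  # ground truth, used only to synthesize TDOAs

    def range_diff(p, rx):
        """Range difference (m) between receiver(s) rx and the reference, for emitter position p."""
        return np.linalg.norm(p - rx, axis=-1) - np.linalg.norm(p - rx_ref)

    tdoa = range_diff(target, rx_pos) / c + rng.normal(0, 2e-9, len(rx_pos))   # noisy TDOAs (s)

    def cost(p):
        """Sum of squared hyperbolic residuals (m^2) between measured and hypothesized range differences."""
        return np.sum((range_diff(p, rx_pos) - tdoa * c) ** 2)

    # Minimal firefly-style search: fireflies drift toward brighter (lower-cost) fireflies
    n_ff, iters, beta0, gamma, alpha = 25, 200, 1.0, 1e-6, 20.0
    ff = rng.uniform(-1000, 1000, (n_ff, 2))
    for _ in range(iters):
        costs = np.array([cost(p) for p in ff])
        for i in range(n_ff):
            for j in range(n_ff):
                if costs[j] < costs[i]:                # attraction toward the brighter firefly j
                    r2 = np.sum((ff[i] - ff[j]) ** 2)
                    ff[i] += beta0 * np.exp(-gamma * r2) * (ff[j] - ff[i]) + alpha * rng.uniform(-0.5, 0.5, 2)
        alpha *= 0.97                                  # shrink the random step over the iterations

    best = ff[np.argmin([cost(p) for p in ff])]
    print("estimated position:", best, " error (m):", np.linalg.norm(best - target))
    ```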
  • Item
    Power allocation for cooperative NOMA systems based on adaptive neuro-fuzzy inference system
    (Graduate School, 2023) Üçbaş, Melike Nur ; Çırpan, Hakan Ali ; 504191352 ; Telecommunications Engineering Programme
    Innovative technologies improving capacity, coverage, energy efficiency, and service quality are required to meet the exponentially increasing traffic demands in wireless communication systems. Non-Orthogonal Multiple Access (NOMA), which allows multiple users to transmit their data simultaneously at the same frequency and time interval, is a promising radio access technology to cope with the challenging requirements of 5G and beyond systems. However, energy efficiency in NOMA-based cellular networks becomes a major issue as the number of users increases. In a cooperative NOMA architecture, relays are effective in increasing system performance and reducing outage probability. Power allocation in a cooperative NOMA system is a challenging task with a significant impact on the users' perceived quality of service. In this thesis, a fuzzy logic (FL) based relay selection and power allocation approach is proposed for a multi-relay NOMA system with imperfect successive interference cancellation. The power is allocated between the NOMA user pair within a resource block in such a way that rate fairness is maximized and the system outage is minimized. In order to demonstrate the effectiveness of the proposed system model, we utilize a network scenario including a base station, a variable number of relays, and two users. Relay selection and power allocation are performed using two different fuzzy inference systems (FIS). These FISs are created by training on parameters such as channel coefficients, signal-to-noise ratio (SNR), and interference using the adaptive neuro-fuzzy inference system (ANFIS) method. The first FIS is designed to find relays that can achieve the minimum rate required for communication between the base station and the relays. Its input parameters include the channel coefficient, SNR, self-interference, and residual interference between the base station and the relays. The output of the FIS is the minimum achievable rate for the users. The second FIS is applied only to the relays that satisfy the minimum data rate requirements. The objective of the second system is to distribute the power fairly between the users. The input parameters of the second FIS are the channel coefficients, SNR, and residual interference between the users and the relays. The power allocation coefficient for the strong user is obtained as the output of the second FIS. The numerical results obtained by FL are close to the optimum outage probability and rate fairness results in all experiments in which the number of relays and the SNR are varied. The computationally efficient FL approach may be successfully applied at run time for power allocation in a cooperative NOMA system, yielding promising outcomes.
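    As background for the power allocation problem being learned here, the sketch below evaluates a two-user downlink NOMA pair over Rayleigh fading for several values of the strong user's power coefficient, with a simple residual term modelling imperfect SIC, and reports Jain's fairness and the outage probability. The channel variances, noise power, residual-SIC fraction, and rate threshold are illustrative assumptions, and the direct (non-cooperative) link is used instead of the thesis's relay-assisted, ANFIS-driven model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    P, N0, eps_sic = 1.0, 0.01, 0.05          # total power, noise power, residual-SIC fraction (illustrative)
    R_min = 0.5                               # minimum rate (bits/s/Hz) defining outage

    def pair_rates(alpha, h_strong, h_weak):
        """Two-user downlink NOMA rates; alpha is the strong (near) user's power fraction."""
        g_s, g_w = np.abs(h_strong) ** 2, np.abs(h_weak) ** 2
        # weak user decodes its own signal, treating the strong user's signal as interference
        r_w = np.log2(1 + (1 - alpha) * P * g_w / (alpha * P * g_w + N0))
        # strong user applies SIC; eps_sic models the imperfectly cancelled residue
        r_s = np.log2(1 + alpha * P * g_s / (eps_sic * (1 - alpha) * P * g_s + N0))
        return r_s, r_w

    def jain_fairness(r_s, r_w):
        """Jain's fairness index for the two achieved rates (1 = perfectly fair)."""
        return (r_s + r_w) ** 2 / (2 * (r_s ** 2 + r_w ** 2))

    # Monte-Carlo sweep of the strong user's power coefficient over Rayleigh fading
    h_s = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) * np.sqrt(1.0 / 2)   # near user
    h_w = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) * np.sqrt(0.1 / 2)   # far user
    for alpha in (0.1, 0.2, 0.3, 0.4):
        r_s, r_w = pair_rates(alpha, h_s, h_w)
        outage = np.mean((r_s < R_min) | (r_w < R_min))
        print(f"alpha={alpha:.1f}  fairness={np.mean(jain_fairness(r_s, r_w)):.3f}  outage={outage:.3f}")
    ```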