LEE - Telecommunications Engineering Graduate Program
-
A new frequency selective surface design effective over multiple bands and a wide range of incidence angles at 4.5G frequencies (Graduate School, 2022-07-18) Balta, Şakir ; Kartal, Mesut ; 504132311 ; Telecommunications Engineering
With the world's growing population and advancing technology, the use of cellular wireless systems keeps increasing. The intensive use of the limited available frequency bands causes growing interference between signals, which can affect the operation of many sensitive electronic devices. Moreover, since no system exists to block these frequencies, people are exposed to them everywhere in daily life, at home and at the office, and their health and quality of life may suffer as a result. As a remedy for such problems, the work presented in this thesis develops frequency selective surface (FSS) coating products that are easy to manufacture, low in cost and suitable for a wide range of uses, which is important for preventing the possible harms of technology to human health and for improving quality of life. Today, the mobile communication system known worldwide as IMT-Advanced, and in Turkey simply as 4.5G, uses the 800, 900, 1800, 2100 and 2600 MHz frequency bands. The main aim of the thesis is to block these frequency bands. Blocking them reduces the effects of radio waves on human health, allows the surfaces to be used as blockers where mobile communication is undesired, and also suppresses interference between signals arriving at different frequencies. A further aim is to avoid blocking the unlicensed bands that lie within the same range but see heavy daily use, such as 2.4 GHz wireless networks.
Considering that wireless networks will be used far more intensively in the near future with the Internet of Things, the importance of this work, which blocks only the targeted frequencies while leaving wireless networks untouched, will keep growing. Within the scope of the thesis, the goal was therefore to design a structural surface material acting as a band-stop filter that blocks radio waves in the 4.5G frequency bands without blocking any other band, and at the same time to minimize the signal interference effects arising in these bands. In addition, the designed FSSs were intended to preserve their effectiveness over as wide a range of incidence angles as possible. The material was required to have a transmission coefficient (S21) of at most -10 dB in the stop band and close to 0 dB in the pass band, and to maintain the intended frequency characteristics for different incidence angles and polarizations of the electromagnetic wave. Since the frequency characteristic of an FSS depends on the periodic element geometries forming the surface, a wide variety of element geometries has been examined in the literature. Likewise, the dielectric layers onto which the periodic elements are printed also affect the frequency characteristic of the surface, and these effects have been studied in detail in the literature. The thesis also reviews FSS analysis methods. An analytical solution of the wave equation exists only for a few simple FSS geometries; for all other FSS geometries, the wave equation can be solved only by numerical methods. With the rapid progress of computer technology, numerical analysis methods have found wide application in this field.
The literature shows that numerical methods such as the Finite Difference Time Domain Method, the Finite Element Method and the Method of Moments are used in the analysis of FSS geometries, and that the Equivalent Circuit Model is also employed in FSS analysis. Among these methods, the FSSs selected at the design stage were analyzed with the Finite Element Method, and the transmission and reflection coefficients were computed over the frequency range of interest. The Ansoft HFSS program can analyze such structures with the Finite Element Method. The FSSs were optimized in HFSS using its parametric analysis capability, guided by the equivalent circuit approach, and the program was used extensively throughout the thesis. Three different designs with multiple frequency characteristics were developed, keeping the number of resonances as small as possible to reduce interference between the frequency bands. In addition, effort was spent on obtaining band-stop filters with very sharp edges and the narrowest possible stop bands, so as not to block operating frequencies outside the targeted bands. Throughout, the designed FSSs were intended to preserve their effectiveness at different incidence angles and polarizations; for this purpose, symmetric geometries much smaller than the wavelength were used. One of the biggest problems encountered in multi-band FSS designs was the mutual interference between the different geometries designed for each frequency band; many geometries were therefore investigated and different approaches were developed to solve this problem.
To keep the product low in cost, the designs were realized on a single-layer FR4 substrate 1 mm thick, with a dielectric constant of 4.54 and a loss tangent of 0.02; despite the poor response of FR4 at radio frequencies, the intended goals were achieved. The analyses and optimizations were performed in Ansoft HFSS, and surface current density plots were produced to demonstrate the effectiveness of the geometries in each frequency band. Measurements of the designs fabricated on FR4 substrates were taken and compared with the simulation results to validate the designs. An extensive literature survey found no comparable multi-resonance study effective over the 4.5G frequencies; being the first study of its kind in this area, this work contributes to the literature.
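The band-stop criterion stated above (S21 at or below -10 dB in the 4.5G stop bands, near 0 dB elsewhere) can be sketched as a simple check. The band edges and S21 values below are synthetic illustrations, not measured data from the thesis.

```python
def in_any_band(f_mhz, bands):
    """Return True if frequency f_mhz lies inside any (lo, hi) band."""
    return any(lo <= f_mhz <= hi for lo, hi in bands)

def check_bandstop(samples, stop_bands, stop_max_db=-10.0, pass_min_db=-3.0):
    """samples: list of (freq_MHz, S21_dB) points.
    Verify S21 <= stop_max_db inside the stop bands and S21 >= pass_min_db elsewhere."""
    for f, s21 in samples:
        if in_any_band(f, stop_bands):
            if s21 > stop_max_db:
                return False
        elif s21 < pass_min_db:
            return False
    return True

# Illustrative 4.5G stop bands (MHz) and a synthetic frequency sweep:
stop_bands = [(780, 960), (1700, 1880), (2080, 2170), (2500, 2690)]
samples = [(850, -22.0), (1800, -18.5), (2100, -15.0), (2600, -25.0),
           (2400, -0.8),   # 2.4 GHz Wi-Fi band must remain open
           (1500, -1.2)]
print(check_bandstop(samples, stop_bands))  # True for this synthetic sweep
```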
-
Circularly polarized, metasurface-loaded microstrip MIMO antenna design for 5G applications (Graduate School, 2023) Koçer, Mustafa ; Günel, Murat Tayfun ; 782952 ; Telecommunication Engineering
This thesis targets a 4x4 multiple-input multiple-output (MIMO) antenna design in the 3.3-3.8 GHz range, known as the n78 band of the sub-6 GHz 5G spectrum. First, the microstrip antenna to be used in the MIMO design was developed. A square patch was chosen so that the antenna operates with circular polarization, and truncated-circle and truncated-triangle corner cuts were applied to obtain left-hand circular polarization. Of the two, the truncated-circle design gave better results. To improve the performance of the left-hand circularly polarized antenna with truncated-circle corners, a 4x4 metasurface was placed directly on top of it with no air gap, so that the microstrip antenna resonates at the cutoff frequencies of the surface waves created by the metasurface. The desired reflection coefficient and axial ratio of the microstrip antenna lie at the lower frequency (3.4 GHz), while the metasurface surface waves lie at the higher frequencies (3.9-4 GHz); as a result, the antenna operates over a wide band. Thanks to the metasurface, this antenna, designed on an FR-4 substrate, achieves higher efficiency and gain and circular polarization over a wide band. In addition, a cross slot cut into the center of the microstrip patch gives the metasurface-loaded antenna a lower reflection coefficient. The thicknesses of the substrate material used in the design and their effects on performance were also examined.
Furthermore, truncated-circle cuts at the corners of the 4x4 metasurface widened the antenna's circular polarization bandwidth and provided high gain over a wider band. Using a TLC-32 substrate, a four-port MIMO antenna (two ports right-hand and two ports left-hand circularly polarized) based on the metasurface-loaded microstrip antenna was designed at 3.3-3.8 GHz for sub-6 GHz 5G applications. Ports 1 and 3 of the MIMO antenna are left-hand circularly polarized, ports 2 and 4 are right-hand circularly polarized, and each port is rotated 90° with respect to its neighbors. To increase the isolation of the MIMO antenna, a substrate-integrated waveguide structure was placed at its center, and parasitic elements were added to the microstrip antenna layer and the metasurface layer. The resulting MIMO antenna operates with circular polarization, high isolation and high gain at all ports.
-
A compact two-stage GaN power amplifier design for sub-6 GHz 5G base stations (Graduate School, 2023) Türk, Burak Berk ; Savcı, Hüseyin Şerif ; Şimşek, Serkan ; 809122 ; Telecommunication Engineering Programme
Both commercial and military systems use wireless communication networks. The range of applications is wide, including radar, mobile communications, Wi-Fi, SATCOM and many more, each with different requirements and solutions. The development of mobile communications began with 1G in the 1970s, and each new generation has found its place in the radio communications market. In 2019, 5G New Radio began to be deployed worldwide, with higher data rates, wider frequency bands and lower latency. Moreover, more frequency bands are available for 5G New Radio, grouped as sub-6 GHz and mmWave. As the name suggests, the sub-6 GHz frequency bands lie below 6 GHz and include the bands of the previous generations, while the mmWave bands lie above 24 GHz. With the goal of low latency, engineers are developing new solutions for the next generation of base stations. One solution is to deploy smaller base stations more densely than traditional macro base stations; these small-cell base stations are called micro-, pico- and femtocells. As base stations have shrunk, the transmitters and receivers of the cells have required new technological developments. Because the transmitters contain power amplifiers, they dissipate significant amounts of DC power and require appropriate thermal protection. With the increasing demand for small cells, the size of the transmitters must also be considered, along with heat management. One of the most important components of a transmitter is the power amplifier: it is the last element before the antenna and amplifies the RF signal using DC power. This work studies the power amplifier.
The size of the power amplifier plays an important role in 5G New Radio small-cell base stations. Because the power amplifiers are small, power density and thermal conductivity management are examined. GaN transistors have gained popularity over GaAs and Si semiconductor technologies because their thermal conductivity is better and their power density is higher; they can also amplify higher power levels and have broader bandwidths. For these reasons, a compact GaN HEMT power amplifier module was designed to meet the requirements of 5G small-cell base stations. For thermal reasons, the efficiency of the power amplifier is crucial. Traditional power amplifiers are divided into classes determined by their bias points: Class A, Class AB, Class B and Class C. Class A is theoretically the least efficient and Class C the most efficient. Linearity is also an important factor in telecommunications because of complex modulation schemes: Class A is the most linear and Class C the least linear. As a compromise, our power amplifier module operates in Class AB, which balances efficiency and linearity. In this work, a compact two-stage power amplifier module is designed with high gain, high linearity and high efficiency. Two bare-die GaN HEMT transistors are used with 0201 packaged lumped components for the matching circuits on a laminate PCB. The PA module measures 10 x 6 mm. Given these dimensions, the alternative design option is MMIC technology, but the cost of a GaN-based wafer is significantly higher than our solution. A large-signal model of the transistor is used and simulated with EM co-simulation. The simulations yield an output power of 5 W at 0.1 dB gain compression at the 3.5 GHz center frequency. The stability of the PA module is secured with series resistors.
The designed power amplifier module was manufactured and assembled with the die transistors and components using die bonder and wire bonder machines. Small-signal and large-signal measurement setups were prepared and the device was tested. Due to the mesh settings, the matching circuits of the designed power amplifier were shifted in frequency. A gain of 18.5 dB was measured with 30% PAE at an output power of 2 W. The simulations were repeated with more accurate EM settings and the results matched the measurements.
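The relation between the reported gain, output power and power-added efficiency can be sketched as follows. The DC power is back-calculated from the quoted 30% PAE for illustration only; it is not a value reported in the abstract.

```python
def db_to_ratio(g_db):
    """Convert a gain in dB to a linear power ratio."""
    return 10 ** (g_db / 10)

def pae(p_out_w, gain_db, p_dc_w):
    """Power-added efficiency: (Pout - Pin) / Pdc, with Pin = Pout / gain."""
    p_in_w = p_out_w / db_to_ratio(gain_db)
    return (p_out_w - p_in_w) / p_dc_w

# Reported figures: 18.5 dB gain, 2 W output, 30% PAE.
p_out = 2.0
p_in = p_out / db_to_ratio(18.5)         # ~28 mW drive power
p_dc = (p_out - p_in) / 0.30             # ~6.6 W implied DC consumption (illustrative)
print(round(pae(p_out, 18.5, p_dc), 2))  # 0.3
```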
-
A Doherty power amplifier for 5G applications (Graduate School, 2023) Konanç, Hasan ; Savcı, Hüseyin Şerif ; Akıncı, Mehmet Nuri ; 900834 ; Telecommunication Engineering Programme
The ever-increasing need for high data rates and low latency has steered modern communication systems toward massive MIMO architectures, in which each antenna element requires its own amplifier unit. Smaller cell sizes and serving more users have forced a significant increase in antenna elements, and therefore in power amplifier units. Each amplifier is a substantial contributor to the power consumption budget, so efficiency is a primary concern in the power amplifiers of cellular infrastructure systems. This study addresses power amplifier efficiency from an architectural perspective by demonstrating a Doherty PA (DPA) design in the widely used 5G sub-6 GHz band n78, which is prioritized for the upcoming deployment. The design is optimized for operation in the 3.6-3.8 GHz frequency band. An output power of 43 dBm (20 W) was obtained using two 10 W GaN HEMT transistors, with a drain efficiency of 73%. The Doherty region starts at 38 dBm output power, allowing efficient operation with 6 dB power back-off. A drain efficiency of up to 53% was obtained in the Doherty region, and a gain of 10 dB was obtained over the entire band. The requirement of unconditional stability at all frequencies under small- and large-signal conditions demands a thorough analysis at different phases of the design. For multi-stage stability analysis, the Ohtomo approach was also used, for its convenience of being based on S-parameters and not requiring access to the transistor's internal components; the Nyquist method is used in the Ohtomo approach.
The Nyquist method was applied for frequencies between 10 MHz and 10 GHz. Since no loop gain encircles the 1 + j0 point, the proposed DPA is stable. The prototype of the proposed DPA was fabricated, and real-time small- and large-signal tests were performed. During the tests, it was found that the prototype DPA did not operate only in the 3.6-3.8 GHz range targeted during design; after a tuning process, Doherty drain-efficiency characteristics were obtained over the 3.2-4 GHz range (the n77 5G band). Accordingly, an output power of 41.6 dBm (approximately 15 W) was obtained using two 10 W GaN HEMT transistors, with a drain efficiency of 67%. The Doherty region starts at 32-38.2 dBm output power, allowing efficient operation with 0.5-7.8 dB power back-off. A drain efficiency of up to 67% was obtained in the Doherty region, and a gain of 3.5-7.9 dB was obtained over the entire band.
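The dBm figures quoted above convert to watts as follows; the DC power implied by the 73% peak drain efficiency is shown only for illustration and is not a reported value.

```python
def dbm_to_w(p_dbm):
    """Convert power in dBm to watts: P[W] = 10**(P[dBm]/10) / 1000."""
    return 10 ** (p_dbm / 10) / 1000

def drain_efficiency(p_out_w, p_dc_w):
    """Drain efficiency: RF output power divided by DC drain power."""
    return p_out_w / p_dc_w

print(round(dbm_to_w(43), 1))         # 20.0 W, the quoted peak output
print(round(dbm_to_w(41.6), 1))       # 14.5 W, quoted as approximately 15 W
print(round(dbm_to_w(43) / 0.73, 1))  # ~27.3 W DC implied at peak (illustrative)
```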
-
A friendly physical layer warden system (Graduate School, 2022) Kumral, Miraç ; Kurt Karabulut, Güneş Zeynep ; 714363 ; Telecommunications Engineering Programme
Wireless communication technologies have become the focus of attention in recent years. Studies in this field are of great importance, as they offer direct solutions to the problems of daily life. Mobile, satellite, terrestrial and maritime communications are the pioneering areas of the wireless communication sector. Wireless communication has become widespread and popular because it is used constantly in daily life: almost everyone today has basic knowledge of technologies such as Bluetooth, Wi-Fi, 3G, 4G and 5G, and even people with no technical involvement follow their development. Transmitting information wirelessly brings great comfort to human life, but with this convenience come potential dangers. Wireless transmission works by converting information into a signal, broadcasting it into the open environment with antennas, and receiving the broadcast at the intended parties. Because of the nature of wireless communication, this open broadcast can pose a danger, since the wireless medium is accessible to all users in the environment. Such dangers include maliciously interfering with the communication between devices, or revealing confidential information by decoding someone else's signal. These potentially dangerous interventions are called wireless attacks. Communication architectures are generally designed as multi-layered structures, the most common being the OSI (Open Systems Interconnection) model, which consists of the physical, data link, network, transport, session, presentation and application layers.
New wireless transmission techniques are discussed and designed every day in the literature, and thanks to these new methods wireless communication is becoming more secure. Encryption methods developed in cryptography are used to provide information security, most often in the layers above the physical layer. Contrary to expectations, however, the physical layer has not received enough attention in studies against wireless threats. In the literature review, studies on the detection of wireless threats were investigated; physical-layer approaches were examined and found to be aimed mostly at receiver systems. This thesis brings a different perspective to detecting wireless attackers: a friendly physical-layer warden system is proposed. The warden can detect whether the transmission between two users is under attack and, if an attack is taking place, identify its type. Because the proposed warden has a multi-antenna architecture, it can also determine the direction of the attacker. The first stage of the study discusses why the communication environment should be secure.
-
A new antenna design methodology based on performance analysis of MIMO and defining novel antenna parameters (Graduate School, 2024-05-08) Yussuf, Abubeker Abdulkerim ; Paker, Selçuk ; 504122305 ; Telecommunications Engineering
The rapid growth of wireless technology has created a significant demand for Multiple-Input Multiple-Output (MIMO) antennas in wireless devices. MIMO antennas play a crucial role in meeting the requirements of current and future wireless standards, as they can maximize data rates by using multiple channels within the same bandwidth. However, designing MIMO antennas for compact devices presents considerable challenges: the limited space between antennas leads to increased coupling and high correlation, which degrade performance. To address these challenges, this thesis proposes a new antenna design methodology based on MIMO performance metrics and defining antenna parameters, since existing metrics for conventional antenna systems are insufficient for fully assessing MIMO antenna performance. The methodology provides a systematic approach to optimizing antenna configurations, mitigating mutual coupling and achieving the desired performance characteristics, paving the way for enhanced system capacity. The defining parameters include antenna spacing, slot dimensions, strip placements and parasitic element sizes, which are important for meeting the requirements of modern wireless standards within the LTE and sub-6 GHz 5G bands. The research presents five distinct MIMO antenna designs, each optimized for specific requirements and validated through simulations and experimental measurements.
Firstly, the dual-band Vivaldi-shaped MIMO antenna covers the 5G NR bands n78 and n79, with gains above 7.63 dBi and 8.5 dBi respectively, while keeping mutual coupling below -30 dB. Secondly, the concentric octagonal-shaped MIMO antenna is designed for 5G UE applications in the n38 band, achieving a gain above 5 dBi and mutual coupling below -25 dB. Thirdly, the compact quad-element MIMO antenna is designed for LTE/Wi-Fi applications, exhibiting isolation above 17 dB and a channel capacity loss below 0.6 b/s/Hz. Fourthly, the wideband MIMO antenna is a single-element, quad-port design operating in the 2.1/2.3/2.6 GHz and 2.4 GHz bands; it offers a 2-3.0 GHz operating bandwidth, reflection coefficients below -10 dB, isolation below -25 dB using a synthesized pi-network transmission-line-based decoupling network, and a diversity gain of approximately 10 dB. Finally, a quad-element MIMO antenna based on a modified Apollony fractal, designed for 5G wireless communications, achieves S11 at or below -10 dB within the impedance bandwidth, with mutual coupling below -20 dB. The thesis explores various decoupling strategies to mitigate mutual coupling and enhance antenna performance, including antenna placement and orientation, parasitic elements, neutralization, and the synthesized pi-network transmission-line-based decoupling topology. Each design is thoroughly evaluated through simulations and experimental measurements, with performance metrics including S-parameters, envelope correlation coefficient (ECC), channel capacity, total active reflection coefficient (TARC) and diversity gain. The research demonstrates the feasibility and effectiveness of the proposed methodology for designing compact MIMO antennas with improved performance metrics, making them well suited for 5G and beyond wireless communication systems.
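One of the metrics listed above, the envelope correlation coefficient, can be estimated from two-port S-parameters under the common lossless-antenna approximation. The S-parameter values below are illustrative, not measurements from the thesis.

```python
def ecc_from_s(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port antenna from S-parameters
    (lossless-antenna approximation)."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

# Illustrative complex S-parameters (linear scale):
s11 = s22 = 0.1 + 0.05j    # well matched, about -19 dB
s21 = s12 = 0.03 - 0.02j   # high isolation, about -29 dB
print(ecc_from_s(s11, s21, s12, s22) < 0.01)  # True: well-isolated ports correlate weakly
```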
-
A new approach to satellite communication: Harnessing the power of reconfigurable intelligent surfaces (Graduate School, 2024-01-22) Tekbıyık, Kürşat ; Kurt Karabulut, Güneş ; 504192305 ; Telecommunications Engineering
It is widely accepted that the user-centric, ubiquitous connectivity desired by both end users and operators for 6th generation (6G) and beyond communication technologies can be achieved through the orchestration of terrestrial and non-terrestrial networks (NTNs) in next-generation communication systems. This vision is also described by the 3rd Generation Partnership Project (3GPP) in Technical Report (TR) 38.821 for the operation of New Radio (NR) in NTNs. By the 3GPP definition, an NTN consists of unmanned aerial vehicles, high-altitude platform station (HAPS) systems and dense satellite deployments. Low-Earth orbit (LEO) satellites and HAPS systems are considered the key enablers for NTNs thanks to their longer operating times and wider coverage areas, and ultra-dense satellite constellations are the most important pillar of non-terrestrial networks. Although satellite networks are a prominent solution, many challenging open issues remain, most prominently size, weight and power (SWaP) constraints, high path loss and energy efficiency. Multi-antenna technologies are used to mitigate high path loss through their beamforming capability, but the hardware and signal processing units of multi-antenna systems are complex and costly, and these costs are much higher in satellite networks. Recently, it was shown that a passive antenna solution based on reconfigurable smart surfaces can reduce these costs and help improve communication performance.
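The gain promised by such a passive surface comes from phase alignment: if each of the N elements applies a phase that cancels the phase of its cascaded channel, the contributions add coherently and the received power grows with N squared. A minimal sketch, using hypothetical unit-magnitude channel coefficients rather than any channel model from the thesis:

```python
import cmath
import random

def ris_received_power(h, g, phases):
    """Received power |sum_n h_n * exp(j*phi_n) * g_n|^2 through an N-element passive RIS."""
    s = sum(hn * cmath.exp(1j * p) * gn for hn, gn, p in zip(h, g, phases))
    return abs(s) ** 2

random.seed(0)
N = 64
# Hypothetical unit-magnitude channels with random phases (Tx-to-RIS and RIS-to-Rx):
h = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(N)]
g = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(N)]
# Optimal phase for element n cancels the cascaded phase: -(arg h_n + arg g_n).
opt = [-(cmath.phase(hn) + cmath.phase(gn)) for hn, gn in zip(h, g)]
print(round(ris_received_power(h, g, opt)))  # 4096 = N**2 for unit-magnitude channels
```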
In this regard, as the main focus of this thesis, we propose the use of reconfigurable intelligent surfaces (RISs) to improve coordination between these networks, given that RISs perfectly match the SWaP restrictions of operating in satellite networks. A comprehensive framework of RIS-assisted non-terrestrial and interplanetary communications is presented that pinpoints challenges, use cases and open issues. Furthermore, the performance of RIS-assisted NTNs under environmental effects, such as solar scintillation and satellite drag, is discussed in light of simulation results. First, we propose a novel architecture involving the use of RIS units to mitigate the path loss associated with long transmission distances. These RIS units can be placed on satellite reflectarrays and, when used in broadcasting and beamforming, can provide significant gains in signal transmission. This study shows that RIS-assisted satellites can provide a substantial improvement in downlink and achievable uplink rates for terrestrial networks. Although an RIS has the potential to increase efficiency and perform complex signal processing over the transmission environment instead of at the transceivers, it needs information on the cascaded channel in order to adjust the phase of the incident signal; channel estimation is therefore an essential part of RIS-assisted communications. A study presented in the thesis treats the pilot signal as a graph and incorporates this information into graph attention networks (GATs) to track the phase relation through pilot signaling. The proposed GAT-based channel estimation method investigates the performance of direct-to-satellite (DtS) networks for different RIS configurations to solve the challenging channel estimation problem. It is shown that the proposed GAT achieves higher performance with increased robustness under changing conditions and has lower computational complexity than conventional deep learning (DL) methods.
Moreover, based on the proposed method, the bit error rate (BER) performance is investigated for RIS designs with discrete and non-uniform phase shifts under channel estimation. One finding of this study is that the channel models of the operating environment and the performance of the channel estimation method must be considered during RIS design to exploit the performance improvement as far as possible. We show that an RIS can improve energy efficiency in ground-to-satellite communications. To complete the puzzle of overall satellite communications, we also investigate RIS-assisted inter-satellite communication performance in terms of BER and achievable rate, since broadband inter-satellite communication is a key element of satellite communication systems that orchestrate massive satellite swarms in cooperation. Thanks to technological advancements in microelectronics and microsystems, the terahertz (THz) band has emerged as a strong candidate for inter-satellite links (ISLs) due to its promise of wideband communication. In particular, multi-antenna systems can improve system performance along with the wide bandwidth supported by the THz band, but their SWaP constraints must be considered. As a state-of-the-art multi-antenna technology, an RIS can relax SWaP constraints thanks to its passive component-based structure. However, because a uniform reflection characteristic across the wide band is difficult to maintain, beam misalignment can be observed. In the thesis, we first assess the use of the THz band for ISLs and quantify the impact of misalignment fading on error performance. Then, to compensate for the high path loss associated with high carrier frequencies and to further improve the signal-to-noise ratio (SNR), we propose using RISs mounted on neighboring satellites to assist signal propagation.
Based on a mathematical analysis of the problem, we present the error rate expressions for RIS-assisted ISLs with misalignment fading. Numerical results also show that an RIS can improve the error rate performance and achievable capacity of THz ISLs as long as proper antenna alignment is satisfied. As misalignment error is one of the challenges on the path toward practical RIS-assisted NTNs, acquiring a reliable direction of arrival (DoA) estimate becomes critical to achieving the promised improvements in RIS-assisted communication systems. For that reason, the thesis addresses the DoA estimation problem in RIS-assisted communication systems. To this end, we use a single-channel intelligent surface whose physical-layer compression is achieved with a coded-aperture technique, probing the spectrum of far-field sources incident on the aperture using a set of spatiotemporally incoherent modes. This information is then encoded and compressed into the channel of the coded aperture. The coded aperture is based on a metasurface antenna design and works as a receiver, exhibiting a single channel and replacing the conventional multi-channel raster-scan-based solutions for DoA estimation. The GAT network enables the compressive DoA estimation framework to learn the DoA information directly from the measurements acquired with the coded aperture, eliminating the need for an additional reconstruction step and significantly simplifying the processing layer. We show that the presented GAT-integrated single-pixel radar framework can retrieve high-fidelity DoA information even at relatively low signal-to-noise ratio (SNR) levels. Alongside the above work, this thesis analyses the performance of the main communication pillars of an end-to-end RIS-assisted satellite communication system and focuses on developing solutions to open problems that are essential for practical application.
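The high path loss at THz carrier frequencies mentioned above follows directly from the free-space path loss formula. The 10 km link distance below is an illustrative choice, not a value taken from the thesis.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Illustrative inter-satellite link distance of 10 km, at 300 GHz vs 30 GHz:
print(round(fspl_db(10_000, 300e9), 1))  # 162.0 dB
print(round(fspl_db(10_000, 30e9), 1))   # 142.0 dB: 20 dB less per decade of frequency
```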
-
A novel antenna configuration for microwave hyperthermia (Graduate School, 2022-11-28) Altıntaş Yıldız, Gülşah ; Akduman, İbrahim ; Abdulsabeh Yılmaz, Tuğba ; 504182309 ; Telecommunications Engineering
Breast cancer affects approximately 2.5 million women each year, and the consequences can be fatal. When treated correctly, however, survival rates are very high. Surgical operations such as lumpectomy or mastectomy are invasive techniques that remove part or all of the breast. For early-diagnosed cancers and post-surgical patients, the most widely used therapies are radiotherapy, chemotherapy and other anti-cancer agents. The economic and psychological repercussions may be minimized by increasing the efficiency of these treatments. It has been shown that artificial hyperthermia, elevating the temperature at the cancerous regions, increases the effectiveness of these modalities. Microwave breast hyperthermia (MH) aims to raise the temperature at the tumor location above its normal level. During the procedure, unwanted heated regions called hotspots can occur; the main aim of MH is to prevent hotspots while reaching the necessary temperature at the tumor. The heat energy absorbed per kilogram of breast tissue, the specific absorption rate (SAR), must be adjusted for controlled MH. The choice of MH applicator design is important for superior energy focusing on the target. Although hyperthermia treatment planning (HTP) changes for every patient, the MH applicator must be effective for different breast models and tumor types. In the first part of the thesis, linear antenna arrays are implemented as MH applicators. We present focusing maps as an application guide for MH focusing by adjusting the antenna phase values; these maps also put forward the basic principle of focusing energy in the breast.
By sub-grouping the antennas, we obtained two main phase parameters that control the horizontal and vertical focus change. By adjusting these two phase values, we could focus the energy onto the target locations, and we showed that with this simple structure there is no need for optimization methods. However, the linear applicator was not successful for some target points, especially when the target is far away from both of the arrays. In the second part of the thesis, we improved the linear MH applicator. We concluded that the low performance of the linear applicator is mainly due to the non-symmetrical geometry of the applicator and the resulting poor coverage. We proposed to radially re-adjust the position of the linear applicator for a better focusing ability while keeping the breast phantom fixed. This generates multiple different applicator schemes without actually changing the applicator design. The particle swarm optimization (PSO) method is used for antenna excitation parameter selection. For the two examined targets, the 135° rotated linear applicator gave 35-84% higher TBRS and 21-28% higher TBRT values than the fixed linear applicator, where TBRS stands for the target-to-breast SAR ratio and TBRT for the target-to-breast temperature ratio. Besides the rotated linear applicator, the circular array was also rotated, and the results improved for one target. One of the main results of this study is that, for one target, the rotated linear applicator gave better results than the circular array, which is the state of the art. For the deep-seated target, the 135° rotated linear applicator has 80% higher TBRS and 59% higher TBRT than the circular applicator with the same number of antennas. For the other target, the results of the linear and circular applicators were comparable. However, the results obtained with the PSO were not robust.
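The two focusing metrics can be written down compactly. The sketch below uses the common definition (mean value inside the target over mean value in the remaining tissue), which may differ in detail from the thesis's exact formulation; the field map and mask are toy data:

```python
import numpy as np

def target_to_breast_ratio(field_map, target_mask):
    """Ratio of the mean value inside the target to the mean value in the rest
    of the breast. `field_map` can hold SAR values (giving a TBRS-like metric)
    or temperature rise (giving a TBRT-like metric)."""
    inside = field_map[target_mask].mean()
    outside = field_map[~target_mask].mean()
    return inside / outside

# Toy 2-D "breast" with a well-focused hotspot exactly on the target region.
sar = np.ones((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 30:34] = True
sar[mask] = 8.0
tbrs = target_to_breast_ratio(sar, mask)
```

A higher ratio means more of the deposited energy lands on the tumor rather than on healthy tissue.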
With different (random, in our study) initial values, the results differed considerably from each other, so we performed 10 repetitions and took the best-performing results. In the third part of the thesis, we present a deep-learning-based antenna excitation parameter selection method. This method utilizes the learning ability of convolutional neural networks (CNN) rather than searching the solution space from random initial values as PSO does. The data set for CNN training was collected by superposing the electric fields obtained from the individual antenna elements. We implemented a realistic breast phantom with and without a tumor inclusion and used the linear and circular applicators to validate the method. CNNs were trained offline with data sets created first for the phases and then for the amplitudes of the antennas. A mask of 1s and 0s is used to define the target region to be focused. This mask is given as the input to the CNN models, and the corresponding phase and amplitude values are calculated within seconds by the CNN models. The proposed approach outperforms the look-up table results: the phase-only optimization and the phase-power-combined optimization show a 27% and 4% lower hotspot-to-target energy ratio, respectively, than the look-up table results for the linear MH applicator.
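The superposition step used to build the training data follows directly from the linearity of the field in the antenna excitations. The sketch below uses random stand-in fields (in the thesis they come from full-wave simulation), and the array size and grid are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Precomputed complex E-field of each antenna element on a coarse grid.
n_ant, grid = 8, (32, 32)
e_fields = rng.standard_normal((n_ant, *grid)) + 1j * rng.standard_normal((n_ant, *grid))

def superpose(amplitudes, phases_rad):
    """Total field for one excitation: sum_k a_k * exp(j*phi_k) * E_k,
    i.e. a weighted sum of the per-element fields (linearity)."""
    weights = amplitudes * np.exp(1j * phases_rad)
    return np.tensordot(weights, e_fields, axes=1)

# One training sample: a random excitation -> |E|^2 map; the CNN then learns the
# inverse map from a desired focus mask/field back to (phase, amplitude).
amps = rng.uniform(0.5, 1.0, n_ant)
phis = rng.uniform(0, 2 * np.pi, n_ant)
intensity = np.abs(superpose(amps, phis)) ** 2
```

Because of linearity, each antenna's field is simulated once and arbitrarily many excitations can be generated cheaply for training.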
-
ÖgeA roadmap for breast cancer microwave hyperthermia treatment planning and experimental systems(Graduate School, 2024-07-04) Şafak, Meltem Duygu ; Altıntaş Yıldız, Gülşah ; 504191326 ; Telecommunications EngineeringBreast cancer affects approximately 2.5 million women each year and can be fatal if not treated correctly. However, with proper treatment, survival rates are very high. Common treatments include invasive surgical procedures such as lumpectomy or mastectomy, and non-surgical methods like radiotherapy, chemotherapy, and other anti-cancer agents. Enhancing the efficiency of these treatments can mitigate the economic and psychological impacts on patients. Studies have shown that artificial hyperthermia, which involves elevating the temperature in cancerous regions, can enhance the effectiveness of these modalities. Microwave breast hyperthermia (MH) aims to raise the temperature at the tumor site above normal levels. During this procedure, unwanted hotspots can occur, and the main goal of MH is to avoid these while achieving the necessary temperature at the tumor. The specific absorption rate (SAR), which measures the absorbed heat energy per kilogram of breast tissue, needs to be carefully controlled. The design of the MH applicator is crucial for focusing energy on the target effectively. Despite variations in hyperthermia treatment planning (HTP) for each patient, the MH applicator must be effective across different breast models and tumor types. This thesis investigates the optimization and predictive modeling of temperature-dependent dielectric properties in microwave hyperthermia treatments, focusing primarily on breast cancer. This research aims to enhance the efficacy and precision of hyperthermia therapy through a combination of computational simulations, empirical data analysis, and deep learning techniques.
This study is a comprehensive exploration of microwave hyperthermia treatment planning for breast cancer, focusing particularly on the critical consideration of temperature-dependent dielectric properties (TD-DP) within this context. In addition, an experimental study was conducted to realize the computational analysis. The thesis delves into multifaceted aspects of microwave hyperthermia treatment, spanning from the optimization of antenna parameters to the prediction of the electromagnetic distribution through methodologies such as the U-Net architecture. One of the central inquiries is the optimization of antenna parameters with respect to temperature-dependent dielectric properties. This study delves into the intricacies of how variations in these properties can influence treatment outcomes and efficacy. By analyzing these relationships, this thesis aims to establish optimized antenna configurations that maximize treatment precision and effectiveness. Deep learning, particularly convolutional neural networks (CNN), emerges as a powerful tool within this framework. By leveraging CNNs, this thesis investigates methods that can serve as a preliminary step in hyperthermia antenna excitation parameter selection. This integration of artificial intelligence techniques holds promise for streamlining and automating aspects of treatment planning, thereby potentially reducing human error and enhancing overall efficiency. In particular, the U-Net model's potential for automating the generation of the electric field distribution of a given dielectric distribution, such as breast tissue, is studied. By harnessing the capabilities of artificial intelligence, particularly in image analysis and processing, the thesis aims to develop more robust and efficient methodologies for treatment planning. The integration of the U-Net model represents a significant advancement in this regard, promising to streamline processes and enhance treatment precision.
To verify the performed computational simulations, an experimental microwave hyperthermia system was built. A circular array of 12 dipole antennas was installed in this system to experiment on a tissue-mimicking phantom, and a significant amount of information on the microwave hyperthermia treatment system was gathered through these experiments. Ultimately, the overarching objective of the thesis is to advance microwave hyperthermia treatment planning for breast cancer by improving both precision and efficacy. By synthesizing insights from diverse disciplines such as electromagnetics and deep learning, this thesis seeks to push the boundaries of current practices and pave the way for more effective treatment strategies. Through its meticulous analysis and innovative approaches, the thesis contributes valuable knowledge and methodologies to the ongoing quest for improved cancer therapies. To achieve this, the COMSOL Multiphysics software is utilized to simulate the electromagnetic and thermal behavior of breast tissue during hyperthermia treatment. These simulations consider both constant and temperature-dependent dielectric properties. Empirical data are collected using phantoms that mimic the dielectric properties of breast tissue. Temperature distributions are recorded and compared with simulated results to validate the models. The U-Net architecture, an encoder-decoder model, is used to predict electromagnetic field distributions, significantly reducing the computational workload and enhancing the accuracy of treatment planning. This research underscores the importance of optimizing antenna configurations to achieve targeted heating while minimizing damage to surrounding healthy tissues. Variations in tissue properties with temperature are crucial for effective hyperthermia treatment, and modeling these changes can lead to better treatment protocols.
Despite the promising results, the transition of high-precision hyperthermia into clinical practice faces challenges such as technical complexities, high computational costs, and the need for further validation and optimization. Future research should focus on overcoming the remaining technical and computational barriers, refining the proposed methods, and conducting extensive validation studies to facilitate the clinical adoption of high-precision hyperthermia treatments. This thesis represents a significant step towards improving the precision and effectiveness of hyperthermia therapy, offering a comprehensive framework for future advancements in this field.
-
ÖgeAltı port tekniği ile vektör reflektometre tasarımı [Vector reflectometer design with the six-port technique] (Graduate School, 2022-12-20) Dinçtürk, Mehmet ; Çayören, Mehmet ; 504201328 ; Telecommunications EngineeringRadio frequency (RF) systems are used extensively in many sectors such as the defense industry, communication companies, and security companies. The prevalence and importance of these systems have grown over the years, and the cost of the instruments used to test them can reach considerable figures. RF measurement methodology can be divided into several subcategories. The most widely used measurement method is vector analysis, and the instruments that perform it are called vector network analyzers (VNA). A VNA measures the scattering (s) parameters of circuits; these instruments have a rather complex architecture. Instead of a VNA, a reflectometer architecture can be used: although it cannot measure all s-parameters, it measures the most important one, the reflection coefficient (the s11 parameter). The operating principle of a reflectometer can be summarized as follows. A small fraction of the RF power taken from the input port is routed, via power dividers and directional couplers, to power detectors connected to the system, and the power entering the system is measured with their help. The larger part of the signal is delivered to the device under test (DUT); the port to which this device is connected is also called the DUT port. The power reflected from the DUT is again routed through power dividers and directional couplers to the power detectors, and the power returning from the DUT into the system is measured. The reflection coefficient is computed as the ratio of this returning power to the power entering the system. Many different reflectometer architectures implementing this measurement principle exist in the literature, and they differ from each other in various respects. Some reflectometer architectures have very low measurement error but are hard to realize, while others are easy to realize but have high measurement error.
These architectures are usually realized with printed circuit board technology, but reflectometer designs implemented as waveguides or MMICs also exist in the literature. Another important issue in reflectometer design is the number of output ports. While two output ports (besides the input and DUT ports) are sufficient to find the magnitude of the reflection coefficient, at least three output ports are required if the phase is also to be obtained. Reflectometer designs with different numbers of ports exist in the literature; increasing the number of output ports improves measurement accuracy, but the more ports there are, the more complex the system becomes and the harder it is to realize. Another important parameter related to the ports is port matching. If the ports are not matched, reflections occur in the power reaching the detectors and the measurement error increases; this variable further complicates the design. An issue as important as the architecture in reflectometer design is the calibration of the designed system. Different calibration methods exist in the literature, and quite different results can be obtained depending on the method used. These calibration methods can be divided into two categories: linear and nonlinear. For a six-port reflectometer architecture, the number of standards required for nonlinear calibration can be reduced to three, but the mathematical model becomes rather complex; if linear calibration is preferred, at least five standards must be used. In this thesis, a reflectometer is designed using the six-port architecture. In designing this six-port system, rather than matching the power detectors, the architecture was designed to exploit the reflections of the power detectors that have emerged in recent years.
Unlike classical architectures, instead of delivering most of the input RF power to the DUT, this architecture delivers all of it, via power dividers and couplers, to the RF power detectors. The power detectors are not matched, so most of the incoming signal is reflected and travels to the measurement port. The signal reflected from the measurement port returns to the power detectors, where it is measured. The main factor that allows such a design is the high dynamic range of modern power detectors, thanks to which accurate power readings can be made down to -50 dB. After the power readings, the calibration stage follows. Two calibration methods were tried: one was demonstrated in simulation, the other was realized. In the first method, a suitable three of the four ports were selected and the scattering parameters of these ports with respect to the input port were measured. Taking one of the ports as reference, circles were constructed from these measured scattering parameters and the calibration standards used. The radii of the circles were found from the power measurements at the ports, using the same reference port as in the circle construction. The appropriate intersections of these circles were then taken, yielding the calibration system used to measure the reflection coefficient of a connected load. Reflection coefficient measurements were made for different loads and are tabulated at the end of the first calibration part. In the second calibration method, calibration was attempted using all four ports. According to the five-standard calibration method, calibration relies solely on power readings. First, the parameters of the matrix required for calibration were found; then, using these matrix parameters and the power measurements, the reflection coefficient was computed. The results are tabulated at the end of the relevant chapter.
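The circle-intersection step admits a compact linear-algebra formulation: subtracting one circle's equation from the others removes the quadratic term and leaves a linear system in the real and imaginary parts of the reflection coefficient. In the sketch below the centers and radii are synthetic, whereas in the thesis they come from measured s-parameters and detector power readings:

```python
import numpy as np

def intersect_circles(centers, radii):
    """Least-squares intersection of >= 3 circles |G - c_i| = r_i in the complex
    plane. Subtracting the first circle's equation from the others eliminates
    |G|^2 and leaves a linear system in Re(G), Im(G)."""
    c = np.asarray(centers, dtype=complex)
    r = np.asarray(radii, dtype=float)
    A = 2 * np.column_stack([(c[1:] - c[0]).real, (c[1:] - c[0]).imag])
    b = (np.abs(c[1:]) ** 2 - np.abs(c[0]) ** 2) - (r[1:] ** 2 - r[0] ** 2)
    x, y = np.linalg.lstsq(A, b, rcond=None)[0]
    return x + 1j * y

# Synthetic check: assume a known load, derive exact radii, recover the load.
gamma_true = 0.3 - 0.2j
centers = [1 + 0j, -0.5 + 0.8j, 0.2 - 1j]
radii = [abs(gamma_true - c) for c in centers]
gamma = intersect_circles(centers, radii)
```

With noisy power readings the least-squares solution averages the inconsistencies among the circles instead of failing outright.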
As this study shows, reflectometer systems can readily be used in various applications to measure the reflection coefficient instead of costly VNAs. Their advantages over VNAs are that they are lightweight and more portable; their disadvantages are that they cannot perform measurements over as wide a band as VNAs and can only measure the reflection coefficient of a system. Furthermore, it has been shown that, with the architecture used, the reflection coefficient can be computed with a very low error in reflectometer systems even without power detector matching. For this, however, choosing the right calibration method and the right power detectors is of vital importance.
-
ÖgeAnalytical models and cross-layer delay optimization for resource allocation of noma downlink systems(2020) Gemici, Ömer Faruk ; Çırpan, Hakan Ali ; Hökelek, İbrahim ; 648904 ; Telecommunications Engineering5G was introduced by the 3rd Generation Partnership Project (3GPP) to satisfy the stringent delay and reliability requirements of 5G services such as industrial automation, augmented and virtual reality, and intelligent transportation. Non-orthogonal multiple access (NOMA) is one of the promising technologies for low-latency 5G services, where system capacity can be increased by allowing simultaneous transmission of multiple users on the same radio resource. Resource allocation in NOMA systems, comprising user scheduling and power allocation, determines the mapping of users to radio resource blocks and the transmission power levels of users at each resource block, respectively. In this thesis, we first propose a genetic algorithm (GA) based multi-user radio resource allocation scheme for NOMA downlink systems. In our set-up, the GA determines the user groups that simultaneously transmit their signals on the same time and frequency resource, while the optimal transmission power level is assigned to each user to maximize the geometric mean of user throughputs. The simulation results show that the GA-based approach is a powerful heuristic that quickly converges to a target solution balancing the trade-off between total system throughput and fairness among users. Most resource allocation studies for NOMA systems, including our GA-based approach, assume a full-buffer traffic model in which the incoming traffic of each user is infinite, whereas the traffic in real-life scenarios is generally non-full buffer.
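The GA-based pairing idea can be sketched roughly as follows. This is not the thesis's exact scheme: the channel model, the fixed power split, and the mutation-only GA are simplifying assumptions, used only to show how pairings can be evolved toward a higher geometric mean of throughputs:

```python
import numpy as np

rng = np.random.default_rng(2)

n_users = 8                                # users paired two per resource block
gains = rng.exponential(1.0, n_users)      # illustrative channel gains

def fitness(perm):
    """Geometric mean of user rates for a pairing (consecutive entries share a
    block). The strong user decodes after SIC; the weak user sees the strong
    user's signal as interference. Power split and noise are fixed assumptions."""
    rates = np.empty(n_users)
    for i in range(0, n_users, 2):
        a, b = perm[i], perm[i + 1]
        hi, lo = (a, b) if gains[a] >= gains[b] else (b, a)
        p_lo, p_hi = 0.8, 0.2
        rates[lo] = np.log2(1 + p_lo * gains[lo] / (p_hi * gains[lo] + 0.1))
        rates[hi] = np.log2(1 + p_hi * gains[hi] / 0.1)
    return np.exp(np.mean(np.log(rates)))

def ga(pop_size=30, gens=40):
    """Elitist GA over permutations; mutation swaps two users between pairs."""
    pop = [rng.permutation(n_users) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        for p in elite:
            c = p.copy()
            i, j = rng.choice(n_users, 2, replace=False)
            c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

The chromosome here is a user permutation, so every candidate is automatically a valid pairing; a crossover operator would need a repair step to preserve that property.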
As the second contribution, we propose the User Demand Based Proportional Fairness (UDB-PF) and Proportional User Satisfaction Fairness (PUSF) algorithms for resource allocation in NOMA downlink systems when the traffic demands of the users are rate-limited and time-varying. UDB-PF extends PF-based scheduling by allocating optimum power levels towards satisfying the traffic demand constraints of the user pair in each resource block. The objective of PUSF is to maximize network-wide user satisfaction by allocating sufficient frequency and power resources according to the traffic demands of the users. In both cases, user groups are selected first to simultaneously transmit their signals on the same frequency resource, while the optimal transmission power level is assigned to each user to optimize the underlying objective function. In addition, the GA is employed for user group selection to reduce the computational complexity. When the user traffic rate requirements change rapidly over time, UDB-PF yields a better sum-rate (throughput) while PUSF provides better network-wide user satisfaction compared to PF-based user scheduling. We also observed that GA-based user group selection significantly reduces the computational load while achieving results comparable to those of an exhaustive search. The low-latency objectives of URLLC services such as industrial control and automation, augmented and virtual reality, the tactile Internet, and intelligent transportation require a delay analysis that is not possible with rate-limited traffic demands; a packet-based traffic model with random inter-arrival times and packet sizes has to be utilized. New analytical models using such a packet-based traffic model are of paramount importance for developing high-performance resource allocation strategies that satisfy the challenging latency requirements of 5G services.
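For reference, the classic proportional-fair rule that UDB-PF builds on can be sketched as follows. The rate statistics and averaging constant are illustrative, and the demand-aware variants described above would additionally cap each user's allocation at its traffic demand:

```python
import numpy as np

rng = np.random.default_rng(3)

def pf_schedule(inst_rates, avg_tput):
    """Classic proportional-fair metric: serve the user maximizing r_k(t) / T_k(t)."""
    return int(np.argmax(inst_rates / avg_tput))

# Simulate PF over time-varying rates with asymmetric average channels.
n_users, slots, beta = 4, 2000, 0.01
avg = np.full(n_users, 1e-3)          # exponentially averaged throughputs T_k
served = np.zeros(n_users)
for _ in range(slots):
    rates = rng.exponential(np.array([4.0, 2.0, 1.0, 0.5]))
    k = pf_schedule(rates, avg)
    served[k] += rates[k]
    avg *= (1 - beta)                 # decay everyone, then credit the served user
    avg[k] += beta * rates[k]

share = served / served.sum()
```

The metric r_k/T_k trades instantaneous rate against past service, so even users with weak average channels keep getting slots instead of starving.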
As the third contribution, we propose an analytical model to characterize the average queuing delay of NOMA downlink systems by utilizing a discrete-time M/G/1 queuing model under a Rayleigh fading channel. The packet arrival process is assumed to be Poisson while the departure process depends on the network settings and the resource allocation. The average queuing delay results of the analytical model are validated through Monte Carlo simulation experiments. One of the main results is that the ergodic capacity region of NOMA is a superset of that of OMA, indicating that NOMA can support a higher service rate and lower latency using the same resources, such as transmission power and bandwidth. Furthermore, the proposed analytical model is applied to the performance evaluation of the 5G NR concept when NOMA is utilized. The model accurately predicts that the average queuing delay decreases when a wider bandwidth and shorter time-slot duration are employed in 5G NR. The outage probability becomes an important metric that should be minimized to address the reliability aspect of URLLC services. We utilize the common outage condition, in which a user is in outage when it fails either to decode its own signal or to perform SIC for the signals of other users at the receiver, i.e., when the SINR is lower than a predefined outage threshold. As the fourth contribution, the optimum power allocation for a single resource block that minimizes the system outage probability under a Rayleigh fading channel, where a common signal-to-interference-plus-noise ratio (SINR) level is utilized as the outage condition, is provided as a closed-form expression. The accuracy of the proposed optimum power allocation model is validated by Monte Carlo simulations. The numerical results show that the outage probability of OMA with fractional power allocation is lower than that of NOMA with the optimum power allocation.
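The M/G/1 analysis builds on the standard Pollaczek-Khinchine mean-value result, which only needs the arrival rate and the first two moments of the service time. The numbers below are illustrative, not the thesis's scenario; the exponential service case is chosen so the result can be cross-checked against the M/M/1 formula:

```python
def mg1_mean_wait(lam, es, es2):
    """Pollaczek-Khinchine mean waiting time for M/G/1:
    W = lam * E[S^2] / (2 * (1 - rho)), with rho = lam * E[S]."""
    rho = lam * es
    assert rho < 1, "queue must be stable"
    return lam * es2 / (2 * (1 - rho))

# Example: Poisson packet arrivals at 50 packets/s; the service time is set by
# the instantaneous link rate. Taking an exponential service with mean 10 ms
# gives E[S^2] = 2 * E[S]^2, so the formula reduces to the M/M/1 result.
lam, es = 50.0, 0.010
w = mg1_mean_wait(lam, es, 2 * es ** 2)   # mean queuing delay (s)
total_delay = w + es                      # queuing + service
```

In the thesis the service-time moments are derived from the NOMA rate distribution under Rayleigh fading rather than assumed exponential.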
The results indicate that the trade-off between outage and spectral efficiency in NOMA should be carefully controlled to meet the higher throughput and lower latency objectives of 5G. The last contribution considers the reliability and latency aspects jointly: the discrete-time M/G/1 queuing model of a NOMA downlink system is extended by taking the outage condition into account. The departure process of the queuing model is characterized by obtaining the first- and second-moment statistics of the service time, which depend on the resource allocation strategy and the packet size distribution. The proposed model is utilized to obtain the optimum power allocation that minimizes the maximum of the average queuing delay (MAQD) for a two-user network scenario. Monte Carlo simulation experiments are performed to numerically validate the model by providing MAQD results for both NOMA and orthogonal multiple access (OMA) schemes. The results demonstrate that NOMA achieves lower latency for low SINR outage thresholds, while its performance degrades faster than OMA as the SINR outage threshold increases, so that OMA outperforms NOMA beyond a certain threshold. Another important result is that the latency performance of NOMA is significantly degraded when 5G NR frame types with wider bandwidths are utilized. These results provide powerful insights for 5G ultra-reliable low-latency communication (URLLC) services.
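The common outage condition can be checked numerically with a small Monte Carlo sketch. The power split, channel statistics, and threshold below are illustrative assumptions rather than the thesis's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

def noma_outage(p_near, p_far, gamma_th, n=200_000, snr=10.0):
    """Monte Carlo outage for a two-user downlink NOMA block over Rayleigh fading.

    The far user decodes its own signal with the near user's power as
    interference; the near user must first SIC-decode the far user's signal,
    then decode its own interference-free signal."""
    g_near = rng.exponential(1.0, n) * snr   # near user: stronger average channel
    g_far = rng.exponential(0.3, n) * snr
    sinr_far = p_far * g_far / (p_near * g_far + 1.0)
    sic = p_far * g_near / (p_near * g_near + 1.0)   # near user decoding far signal
    sinr_near = p_near * g_near
    out_far = sinr_far < gamma_th
    out_near = (sic < gamma_th) | (sinr_near < gamma_th)
    return float((out_far | out_near).mean())        # common (system) outage

p_out = noma_outage(p_near=0.2, p_far=0.8, gamma_th=1.0)
```

Raising the common threshold hits the far user first, since its SINR saturates at p_far/p_near no matter how strong the channel gets; this is one face of the outage-versus-spectral-efficiency trade-off noted above.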
-
ÖgeAntenna design for breast cancer detection and machine learning approach for birth weight prediction(Graduate School, 2024-01-03) Kırkgöz, Haluk ; Kurt, Onur ; 504201348 ; Telecommunications EngineeringWith the advancement of technology in the biomedical field, new diagnostic and treatment methods and new devices are being developed every day. Although this development is mostly advantageous, in some areas it also poses difficulties for both patients and doctors in terms of diagnosis and treatment. For example, the electromagnetic radiation used for diagnostic purposes can be harmful to patients. In addition, the results of the techniques used carry a margin of error in precision and accuracy, and it becomes important for doctors to take these margins of error into account in the decision-making process. Motivated by these problems, this thesis proposes alternative methods for two different fields. In the first study, an alternative to the standard methods for breast cancer diagnosis is proposed, and in the second study, machine learning approaches that can determine a baby's birth weight with high accuracy are presented. Breast cancer remains a major global health problem and requires continuous improvements in diagnostic and control methods to achieve better patient outcomes during treatment and early detection of the disease. As breast cancer is one of the most common and dangerous diseases among women worldwide, rapid diagnosis is critical. Considering that breast cancer is the second-leading cause of cancer-related mortality in women, the need for efficient and non-invasive diagnostic methods has become even greater. This demand arises from the drawbacks of conventional approaches in terms of their operating principles or application methodologies.
In response to the limitations inherent in traditional diagnostic techniques, microwave imaging methods have been developed for the effective diagnosis of breast cancer. The feasibility and efficacy of using microstrip patch antennas for breast cancer detection are examined in the first part of this thesis, which explores an alternative medical method. These antennas can be considered an important development in the medical field, as they are able to detect the small electromagnetic variations that are indicative of early-stage cancer. This study introduces the design and simulation of a rectangular microstrip patch antenna on an FR-4 substrate operating at 2.45 GHz in the ISM band for breast cancer detection. Utilizing the Computer Simulation Technology (CST) software, both the proposed antenna and a five-layer breast phantom, with and without a 5 mm-radius tumor, were comprehensively designed. A breast phantom modeled as a hemisphere and an embedded tumor modeled as a sphere with different dielectric characteristics were successfully simulated. The antenna's performance was evaluated at varying distances from the phantom, revealing alterations in parameters such as the electric field, return loss, voltage standing wave ratio (VSWR), efficiency, and specific absorption rate (SAR) in the presence of a tumor. The simulation results at different antenna locations show discernible differences in these values with and without a tumor, indicating that a tumor significantly influences the power reflected back to the antenna. The VSWR of the antenna stays below 2, within the acceptable limits. Furthermore, the proposed antenna demonstrates increased electric field strength in the presence of a tumor. In addition, simulation outcomes in free space and with a 3-D breast phantom indicated that the antenna positioned 20 mm from the breast phantom is more efficient in tumor identification than the one located at 40 mm.
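The standard transmission-line-model design equations give a first-cut patch size for these parameters. The substrate values below (FR-4 with a relative permittivity of about 4.4 and a 1.6 mm height) are typical assumptions, not necessarily the exact values used in the thesis:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def patch_dimensions(f0, eps_r, h):
    """Transmission-line-model design equations for a rectangular microstrip patch.

    Returns (width, length) in meters for resonance frequency f0 (Hz),
    substrate permittivity eps_r, and substrate height h (m)."""
    w = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5
    # Fringing-field length extension at each radiating edge:
    dl = 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)) / ((eps_eff - 0.258) * (w / h + 0.8))
    l = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dl
    return w, l

# 2.45 GHz patch on 1.6 mm FR-4: roughly a 37 mm x 29 mm patch.
w, l = patch_dimensions(2.45e9, 4.4, 1.6e-3)
```

In practice these closed-form dimensions are only a starting point; the final geometry is tuned in a full-wave solver such as CST.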
Given its tumor detection capability and satisfactory SAR values, the proposed antenna emerges as a promising tool for biomedical applications. Future studies will explore alternative antenna geometries and techniques to enhance performance and increase tumor detection sensitivity. Birth weight is a critical indicator of both pregnancy progress and infant development, exerting a substantial influence on short- and long-term health conditions in newborns, in both developed and developing countries. Understanding the factors contributing to low birth weight (LBW) and high birth weight (HBW) can inform the implementation of optimal interventions for population health. In the second study, we present our research on predicting birth weight classes through the application of various machine learning algorithms. For this investigation, 913 medical observation units were employed, each characterized by 19 features encompassing the actual birth weight and ultrasound measurements. Several data preprocessing steps were performed before the data set was used to train the classifier models. To address the issue of imbalanced data across classes, we implemented the synthetic minority oversampling technique (SMOTE). Additionally, feature scaling was applied to standardize the numerical attributes within a particular range, as the dataset contains different physiological variables with different units and orders of magnitude. Nine different machine learning classifier models are used: decision tree, discriminant analysis, naive Bayes, support vector machine, k-nearest neighbor, kernel approximation, ensemble classifier, artificial neural network, and logistic regression. The hyperparameters of each model were kept at their default values, and no hyperparameter tuning was performed.
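SMOTE itself is simple to state: each synthetic sample is an interpolation between a minority-class sample and one of its minority-class neighbors. Below is a minimal stand-alone sketch of that idea on toy data, not the library implementation typically used in practice:

```python
import numpy as np

rng = np.random.default_rng(5)

def smote(X_min, n_new, k=3):
    """Minimal SMOTE: each synthetic sample lies on the segment between a
    randomly chosen minority sample and one of its k nearest minority
    neighbours, at a random position along the segment."""
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[j], axis=1)
        nbrs = np.argsort(d)[1 : k + 1]   # skip the sample itself (distance 0)
        m = rng.choice(nbrs)
        lam = rng.random()
        out[i] = X_min[j] + lam * (X_min[m] - X_min[j])
    return out

# Toy imbalanced setting: 5 minority samples in 2-D, oversampled with 15 synthetic ones.
X_min = rng.normal(size=(5, 2))
X_syn = smote(X_min, 15)
```

Because every synthetic point is a convex combination of two real minority samples, oversampling fills in the minority region instead of merely duplicating points.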
To evaluate the performance of the nine supervised machine learning algorithms, we compared birth weight classification models with and without feature selection, using numerous evaluation metrics: accuracy, sensitivity, specificity, positive predictive value, negative predictive value, F1 score, and area under the receiver operating characteristic curve. According to the Pearson correlation coefficient analysis applied to the data set, abdominal circumference, head circumference, biparietal diameter, femur length, and hemoglobin levels at the 0th and 6th hours are highly correlated with birth weight. The results of our analysis highlight that the subspace-kNN-based ensemble classifier outperforms the other machine learning models, achieving the best macro-average accuracy of 99.87% without feature selection and 99.75% with feature selection. Additionally, we observed that the bilayered neural network exhibits performance similar to the kNN-based model, with a best macro-average accuracy of 99.62%, irrespective of feature selection. Furthermore, principal component analysis (PCA) was applied to the data set as an unsupervised method for birth weight classification, and the outcome clearly demonstrates the successful classification of most data points. The findings of this study underscore the potential of machine learning as a robust and non-invasive method for accurately predicting the birth weight class of infants. In light of these factors, a health program could be devised to prevent the occurrence of LBW and HBW, since recognition of LBW or HBW in a newborn may signal potential problems that could manifest immediately after birth or later in life. At the end of the thesis, performance improvement methods are proposed based on the two studies we conducted, and we hope that the results of our research will shed light on future studies.
-
ÖgeBenek gürültüsü gidermeye dayalı veri artırma ile derin ağlarda radar hedef sınıflandırma [Radar target classification in deep networks with despeckling-based data augmentation] (Graduate School, 2022-02-28) Ceylan, Şakir Hüdai Mert ; Erer, Işın ; 504171333 ; Telecommunications EngineeringThe steady growth of hardware capacity, falling costs, and the increasing amount of labeled data have paved the way for the spread of artificial intelligence applications. Today, AI applications are widely used in many fields, among which computer vision is one of the most popular. Within computer vision systems used in many areas such as military, commercial, and unmanned transportation systems, automatic target recognition technologies are of great importance, especially in the military domain. Thanks to advantages such as the ability to acquire images at any time of day and in all weather conditions, radar imaging systems outperform optical imaging systems in remote sensing applications. Radar imaging systems, which used to be employed mostly in the military domain, are now also used for target detection and classification in very different fields such as geology, archaeology, and environmental protection. However, speckle, a granular interference inherent to radar images that degrades image quality, distorts the image at and around the target, making it harder both for humans to interpret the image and for the automatic target recognition classifiers of computer vision systems to detect the target. For this reason, despeckling synthetic aperture radar (SAR) images through an enhancement step before automatic target recognition is important for classifier performance, and the literature accordingly contains many studies on despeckling SAR images.
Within the scope of this thesis, the goals are to remove speckle noise from SAR images using Median, BM3D, and EAW filters, to increase the classification performance of deep-learning-based automatic radar target recognition applications, and thereby to obtain in simpler, less complex networks the classification accuracies achieved by much deeper and more complex networks. In addition, unlike classical synthetic data augmentation techniques, a data augmentation approach based on speckle noise removal is proposed, and its effect on classification performance in deep-learning-based automatic radar target classification applications is examined. To this end, the 10-class MSTAR dataset, which contains images of military targets, is used, and these images are passed separately through Median, BM3D, and EAW filters to remove speckle noise. Two different basic convolutional neural networks are then trained with the resulting images, and the effect of speckle removal on classification accuracy is observed under identical conditions. Speckle removal with the EAW filter increased the accuracy of deep-learning-based automatic radar target classification in both networks. Moreover, with the adoption of the data augmentation approach based on speckle removal proposed in this thesis, the classification accuracy of both networks exceeded even the improvement obtained after speckle removal alone.
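The despeckling-based augmentation idea can be sketched as follows. A simple median filter stands in for the thesis's Median/BM3D/EAW filters, and the toy multiplicative-speckle chip is an assumption for illustration: each training image is kept alongside its despeckled version, enlarging the training set.

```python
import numpy as np

def median_filter(img, k=3):
    """Simple k x k median filter (borders handled by reflection padding)."""
    p = k // 2
    padded = np.pad(img, p, mode="reflect")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def despeckle_augment(images):
    """Data augmentation based on despeckling: keep each original SAR chip
    and add its filtered version, doubling the training set."""
    augmented = []
    for img in images:
        augmented.append(img)
        augmented.append(median_filter(img))
    return augmented

# Toy SAR-like chip: fully developed speckle modeled as exponential intensity
rng = np.random.default_rng(1)
chip = rng.exponential(1.0, size=(16, 16))
train = despeckle_augment([chip])
print(len(train))  # 2: the original chip plus its despeckled version
```

The filtered copy has visibly lower variance than the speckled original, which is the property the augmentation exploits.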
-
Calculating radar range profile by time domain processing with physical optics(Graduate School, 2024-07-11) Yazarel, Ece ; Paker, Selçuk ; 504201345 ; Telecommunication Engineering
This thesis provides an in-depth exploration of the concept of Radar Cross Section (RCS) analysis. RCS is a critical metric in radar technology, used to measure the detectability of a target by quantifying the electromagnetic energy scattered by the target and reflected back to the radar system. This study examines the theoretical foundations, computational methods, and practical applications of RCS, offering an approach that aims to bridge the gap between theoretical knowledge and real-world implementations. The thesis contributes significantly to areas such as radar system design, radar signal processing, and stealth technology evaluation. The study begins with the theoretical foundations of the RCS concept. RCS is influenced by numerous factors, including the size, shape, material properties, and orientation of the object, as well as the radar operating frequency. The scattering mechanisms that affect RCS are categorized into specular reflection, diffuse scattering, edge diffraction, and multiple scattering. Each mechanism impacts RCS differently depending on the geometry and electromagnetic properties of the target. Additionally, the behavior of RCS is described across three main regions: the Rayleigh region (where the object's size is much smaller than the radar wavelength), the Resonance region (where the object's size is comparable to the radar wavelength), and the Optical region (where the object's size is much larger than the radar wavelength). These classifications provide a fundamental framework for understanding how the interaction between geometry and radar frequency affects the visibility of a target. The second part of the thesis focuses on computational methods used for RCS analysis.
These methods are divided into two main categories: high-frequency and low-frequency techniques. High-frequency techniques include Physical Optics (PO), Geometric Optics (GO), the Geometric Theory of Diffraction (GTD), and the Shooting and Bouncing Rays (SBR) method. These techniques are based on optical approximations and are computationally efficient for modeling large targets. However, they are limited in accurately modeling diffraction and multiple scattering effects. On the other hand, low-frequency techniques, such as the Method of Moments (MoM) and the Finite Element Method (FEM), provide accurate full-wave solutions for small targets or resonant cases but come with high computational costs for large targets. The choice of method depends on factors such as the target's size, radar frequency, and the desired level of accuracy. To improve the accuracy of RCS computations, this thesis introduces two algorithms: a mesh refinement algorithm and a shadowing algorithm. The mesh refinement algorithm ensures that triangular surfaces in 3D models meet specific size constraints based on the radar wavelength, enhancing the accuracy of RCS predictions for targets with complex geometries. In regions with high curvature or intricate details, surfaces are iteratively subdivided to provide a more detailed representation. The shadowing algorithm accurately identifies and models the shadowed regions of the target, which do not contribute to radar returns. By combining these two algorithms, the thesis provides a more accurate and reliable framework for RCS computations, particularly for targets with complex geometries. One of the key contributions of this thesis is the transition from traditional frequency-domain analysis to time-domain simulations, offering a different perspective for analyzing target-radar interactions. Most conventional methods assume continuous wave (CW) radar operations, which do not accurately reflect the pulse-based structure of modern radar systems. 
To address this limitation, this study integrates physical optics principles with time-domain simulations. This approach enables more realistic modeling of radar pulse behavior. By storing the reflectivity contributions of illuminated mesh elements in detail, the interaction between radar pulses and the target can be analyzed dynamically and spatially. This transition significantly enhances the ability to simulate real-time radar operations, accounting for target movement and temporal variations in radar returns. The thesis further strengthens this framework through advanced signal processing techniques. Matched filtering maximizes the signal-to-noise ratio (SNR), facilitating the detection of weak targets and improving range resolution. Range normalization compensates for signal attenuation over distance, ensuring consistent detection sensitivity across different ranges. Coherent integration accumulates signal energy across multiple radar pulses, enabling the detection of weaker targets. These techniques allow for the generation of high-resolution range profiles (HRRPs), which provide detailed information about the physical dimensions and reflective properties of targets by isolating the strongest reflections within a predefined range window. The practical applicability of the proposed methodologies has been tested through simulations of different targets. First, a PEC missile target was analyzed at operating frequencies of 2 GHz and 4 GHz. The RCS results were validated against those obtained from the commercial FEKO software, demonstrating a high level of accuracy. The missile's structural features, scattering behavior, and high-resolution range profile were examined from multiple perspectives, and the proposed approach achieved a target dimension estimation with an accuracy of 0.24 meters. Additionally, the F-22 aircraft was also analyzed as part of the validation process. 
RCS results were compared with FEKO simulations, showing excellent agreement and verifying the accuracy of the proposed techniques. The HRRP analysis accurately estimated the dimensions and range of the F-22, demonstrating the applicability of the framework to complex geometries. Signal processing steps, such as matched filtering, range normalization, and coherent integration, were consistently applied across all targets, ensuring reliable differentiation between target reflections and noise. Lastly, the Chengdu J-20 aircraft, with its larger dimensions and complex geometry, was analyzed at 4 GHz. The RCS results obtained for this aircraft were consistent with FEKO simulations, further validating the robustness of the proposed methodologies. This case study highlights the framework's ability to handle large-scale targets and intricate geometries, as well as its effectiveness in extracting detailed range profiles of the aircraft. The thesis concludes by emphasizing the contributions of these methodologies to RCS analysis and radar signal processing. The integration of mesh refinement and shadowing algorithms with time-domain simulations addresses significant challenges in modeling complex geometries and real-time radar interactions. The proposed techniques have a wide range of applications, including radar system design, stealth technology evaluation, and electromagnetic wave analysis. By combining theoretical principles with computational innovations, this thesis establishes a strong foundation for future research and practical advancements in radar technology.
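The time-domain processing chain summarized in this entry (matched filtering followed by coherent integration across pulses) can be sketched on a toy point-target echo. The pulse shape, target delay, and noise level below are illustrative assumptions, not the thesis's simulation parameters.

```python
import numpy as np

def matched_filter(rx, pulse):
    """Correlate the received signal with the known pulse; for white noise
    this maximizes the output signal-to-noise ratio."""
    return np.correlate(rx, pulse, mode="full")

def coherent_integration(outputs):
    """Average matched-filter outputs over pulses: the echo adds coherently
    while the noise averages down, improving SNR with the pulse count."""
    return np.mean(outputs, axis=0)

# Toy scenario: a weak rectangular-pulse echo from a point target
rng = np.random.default_rng(2)
pulse = np.ones(8)
delay, n, n_pulses = 40, 128, 32
outputs = []
for _ in range(n_pulses):
    rx = np.zeros(n)
    rx[delay:delay + len(pulse)] = 0.5 * pulse   # weak echo at the delay
    rx += rng.normal(0.0, 0.5, n)                # additive receiver noise
    outputs.append(matched_filter(rx, pulse))
profile = coherent_integration(np.array(outputs))
peak = int(np.argmax(profile))                   # strongest return
print(peak)  # near delay + len(pulse) - 1 = 47 for 'full' correlation
```

Isolating the strongest returns of such a profile within a range window is what yields the high-resolution range profile (HRRP) described above.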
-
Clinical assessment of the microwave imaging system for breast cancer screening and early detection(Graduate School, 2023-04-26) Janjic, Aleksandar ; Çayören, Mehmet ; Akduman, İbrahim ; 504182310 ; Telecommunication Engineering
Female breast cancer has surpassed lung cancer as the most commonly diagnosed cancer among women, with around 2.3 million cases arising each year. If diagnosed at a late stage, it can be highly lethal, with a survival rate of only 25%. Thus, detecting the cancer at an early stage can have a major impact on decreasing the death rate of patients. Nowadays, mammography is considered the gold standard for breast cancer screening and diagnostics. Besides mammography, ultrasound and magnetic resonance imaging can be used to detect the cancer. However, several risk factors limit the mentioned imaging modalities, such as ionizing radiation exposure, pain induced by breast compression, overdiagnosis, false-positive examinations, false negativity in dense breasts, operator dependency, prolonged procedures, high hospital costs, and special facility requirements. Microwave breast imaging has emerged as a promising novel imaging technology that can potentially contribute to the field of breast cancer early screening and diagnostics, mostly because of its non-ionizing and non-invasive nature. Harmless radiation offers the opportunity of frequent scanning, even for women as young as 18. Early-age and routine tests are crucial, especially for women with hereditary genetic mutations, where there is a considerable risk of breast cancer appearance. Besides its non-ionizing and non-invasive nature, microwave imaging offers fast and painless scans, which can significantly increase the number of breast check-up tests, consequently increasing the number of detected early-stage cancers. As a result, microwave breast imaging can have a substantial impact on the long-term breast cancer survival rate.
The technology itself utilizes the difference in electromagnetic properties between healthy and cancerous tissue, as well as the dielectric difference between different types of cancerous tissue (benign or malignant), to detect the presence of anomalies inside the patient's breast and provide their pathology. In the first part of the thesis, we integrated an inverse scattering algorithm to acquire microwave images and provide information about breast cancer location (i.e., detect the breast cancer) from the data collected with the microwave breast imaging device, namely SAFE, developed by the joint work of Mitos Medikal Technologies A.S. and the Medical Device Research, Development, and Application Laboratory of Istanbul Technical University. The dataset used in the study (scans from 115 patients) was acquired through the clinical trials performed by the Marmara University School of Medicine. In addition to breast lesion detection, we analyzed the effect of the factors of interest, such as breast density and size, tumor size, and the patient's age, on the SAFE clinical capabilities. Results show that we were able to detect 63% of breast lesions, where breast size had a high impact on the overall score. A significantly lower share of lesions was detected in smaller breasts (51%) compared to large ones (74%). Density also influenced our inverse scattering approach, as the overall rate of 76% achieved in fatty breasts decreased to 56% in dense breasts. The second part of the thesis is devoted to a machine learning approach, namely adaptive boosting, implemented on the SAFE dataset to classify breast lesions based on their pathology. We used the same dataset as in the first part of the thesis. As in the previous study, we analyze the effect of breast density and size, tumor size, and the patient's age on the data used.
In addition, we perform a statistical analysis (two-sample t-test) to determine whether the benign and malignant datasets differ. The dataset contained 70 benign and 43 malignant lesions; two cases were excluded due to unknown pathology. Our machine learning approach achieved an accuracy of 78%, a sensitivity of 79%, and a specificity of 77%. The results indicate that we were able to classify both benign and malignant lesions at a similar rate. The participant's age was the only factor that strongly affected the outcome of our approach: the overall rate (accuracy) of the device in the younger patient group was 84%, compared to 76% in the older patient group. In the third part of the thesis, we implement another machine learning approach, namely gradient boosting, to distinguish benign from malignant lesions, using a new dataset acquired from the latest SAFE clinical trials. Additionally, compared to the previous studies, we changed the measurement unit component of the device. Fifty-four patients were analyzed, of whom 29 had benign and 25 malignant findings. As in the previous study, we apply a statistical analysis (two-sample t-test) to determine whether the benign and malignant datasets differ. The sensitivity, specificity, and accuracy we achieved were 80%, 83%, and 81%, respectively, showing that in this study as well we were able to classify both benign and malignant lesions at a similar rate, despite the hardware and software changes implemented. Contrary to the previous studies, multiple factors (breast size, density, and age) affected the outcome of our approach. We achieved significantly higher accuracy in larger breasts (86%) compared to smaller ones (78%). Additionally, the accuracy acquired in dense breasts (67%) was significantly lower than in fatty ones (93%). Finally, our method's accuracy was 88% in the older patient group, compared to 71% in the younger group.
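The two-sample t-test used in the statistical analysis above can be sketched as follows. The Welch (unequal-variance) variant is shown as one common choice; the thesis does not specify which variant it uses, and the feature values below are synthetic, not SAFE measurements.

```python
import math

def two_sample_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom, used to check
    whether two groups (e.g. benign vs. malignant features) differ."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    t = (ma - mb) / se
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Illustrative feature values (synthetic, not clinical data)
benign = [1.1, 1.3, 0.9, 1.2, 1.0, 1.4]
malignant = [1.8, 2.1, 1.9, 2.3, 2.0]
t, df = two_sample_t(benign, malignant)
print(round(t, 2), round(df, 1))
```

A |t| this large at these degrees of freedom corresponds to a very small p-value, i.e. the two groups differ significantly.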
-
Compressive sensing of cyclostationary propeller noise(Graduate School, 2023-09-12) Fırat, Umut ; Akgül, Tayfun ; 504122303 ; Telecommunication Engineering
This dissertation is the combination of three manuscripts (either published in or submitted to journals) on compressive sensing of propeller noise for detection, identification, and localization of watercraft. Propeller noise, produced by the rotating blades, is broadband and radiates through water, dominating the underwater acoustic noise spectrum especially when cavitation develops. Propeller cavitation yields cyclostationary noise which can be modeled by amplitude modulation, i.e., the envelope-carrier product. The envelope consists of the so-called propeller tonals representing propeller characteristics, which are used to identify watercraft, whereas the carrier is a stationary broadband process. Sampling for propeller noise processing yields large data sizes due to the Nyquist rate and multiple sensor deployment. A compressive sensing scheme is proposed for efficient sampling of second-order cyclostationary propeller noise, since the spectral correlation function of the amplitude modulation model is sparse, as shown in this thesis. A linear relationship between the compressive and Nyquist-rate cyclic modulation spectra is derived to utilize matrix representations for the proposed method. Cyclic modulation coherence is employed to demonstrate the effect of compressive sensing in terms of statistical detection. Recovery and detection performances of sparse approximation algorithms based on greedy pursuits are compared. Results obtained with synthetic and real data show that compression is achievable without lowering the detection performance. The main challenges are weak modulation, low signal-to-noise ratio, and nonstationarity of the additive ambient noise, all of which reduce the sparsity level, causing degraded recovery and detection performance.
Higher-order cyclostationary statistics are introduced to characterize propeller noise due to its non-Gaussian nature. The third-order cyclic cumulant spectrum, also known as the cyclic bispectrum, is derived and its sparsity is demonstrated for the amplitude-modulated propeller noise model. The cyclic modulation bispectrum is proposed for feasible approximation of the cyclic bispectrum based solely on the discrete Fourier transform. Additionally, compressive sensing of the cyclic modulation bispectrum is suggested. Numerical results are presented for acquisition of the propeller tonals using real-world underwater acoustic data. Tonals estimated by the third-order cyclic modulation bicoherence are more notable than the ones obtained by the second-order cyclic modulation coherence due to the latter's higher noise floor. Sparse recovery results show that frequencies of the prominent tonals can be obtained with sampling significantly below the Nyquist rate. The accurate estimation of tonal magnitudes, on the other hand, is challenging even with a large number of compressive samples. Compressive sensing can be extended to solve the underdetermined system of equations that appears in direction-of-arrival estimation with uniform linear arrays. An estimator is proposed based on the compressive beamformer for cyclostationary propeller noise. Its asymptotic bias is derived, which is inherited from the conventional beamformer when there are multiple sources. The squared asymptotic bias and the finite-sample variance, also derived explicitly, constitute the mean-squared error. Spectral averaging is suggested to mitigate this error by decreasing the adverse effect of the spatial Dirichlet kernel. For low signal-to-noise ratios, averaging enables the proposed estimator to outperform methods that assume stationarity. This is achieved even under weak cyclostationarity, numerous closely spaced sources, and few sensors.
The proposed methods are suitable not only for compressive sensing of propeller cavitation noise but also for a general class of cyclostationary signals. Relevant research areas include, but are not limited to, communication, radar, acoustics, and mechanical systems, with applications such as spectrum sensing, modulation recognition, time-difference-of-arrival estimation, time-frequency distributions, compressive detection, and rolling element bearing fault diagnosis.
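The amplitude-modulation (envelope-carrier) model of cavitation noise described in this entry can be illustrated with a toy signal. The spectrum of the squared signal is used here as a simple surrogate for the cyclic modulation spectrum, and the blade rate and sampling values are illustrative assumptions.

```python
import numpy as np

# Amplitude-modulated model of cavitation noise: a periodic envelope
# (propeller tonals at the blade rate) multiplying a broadband carrier.
rng = np.random.default_rng(3)
fs, n = 1024, 4096
t = np.arange(n) / fs
blade_rate = 12.0                                  # Hz, assumed tonal
envelope = 1.0 + 0.8 * np.cos(2 * np.pi * blade_rate * t)
carrier = rng.normal(0, 1, n)                      # stationary broadband

x = envelope * carrier                             # cyclostationary signal

# Surrogate cyclic analysis: squaring the signal exposes deterministic
# spectral lines at the cyclic (modulation) frequencies.
spec = np.abs(np.fft.rfft(x ** 2 - np.mean(x ** 2)))
freqs = np.fft.rfftfreq(n, 1 / fs)
peak_hz = float(freqs[int(np.argmax(spec))])
print(peak_hz)  # the strongest cyclic component sits at the blade rate
```

Although `x` itself has a flat, noise-like spectrum, the squared-signal spectrum is sparse, with lines at the blade rate and its harmonic; this sparsity in the cyclic domain is what makes compressive sensing applicable.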
-
Conflict avoidance algorithm between mobility robustness optimization and load balancing functions(Graduate School, 2023-07-10) Demir, Çağrı ; Ergen, Mustafa ; 504191308 ; Telecommunications Engineering
Providing seamless connectivity and mobility to end users in cellular networks has always been a major challenge for service providers. With the evolution of cellular networks and the increased user density, this challenge has become even more critical. Operators and vendors are working to enable new features to meet these challenges and provide a better quality of service (QoS) to the end user. On the other hand, developments in cellular communication networks have increased system complexity and made the network infrastructure harder to maintain, organize, and sustain. Additionally, the need to reduce capital expenditures (CAPEX) and operational expenditures (OPEX) has emerged. These changing requirements and conditions bring out the necessity of more autonomous cellular networks. As a result, the self-organizing network (SON) concept was introduced to address the aforementioned issues. SON is a concept proposed by the 3rd Generation Partnership Project (3GPP) to achieve a more autonomous cellular network. The idea is to create a cellular network that can configure, optimize, heal, and coordinate itself. For this purpose, various SON functions were introduced for different aspects of the cellular network, such as mobility management, random access (RA) optimization, and energy efficiency. Among the most common SON functions utilized in cellular communication are mobility management (MM) based solutions, and in this thesis we analyze them and propose a solution that provides a seamless mobility management experience. In response to the high demand for being connected anytime and anywhere, mobile networks are evolving towards the sixth generation of mobile networks (6G). At the same time, this brings more complexity to cellular networks.
Increased demand also requires the SON concept to be more advanced and self-coordinated. One of the key aspects of accomplishing more advanced and coordinated SONs is conflict avoidance. The central focus of this thesis is to contribute a conflict-aware SON function to the literature. To accomplish this, we first provide a detailed analysis of SON functions in MM to gain a deeper understanding of the SON concept and its challenges for 5G and beyond. Additionally, the main SON algorithms related to MM, such as mobility load balancing (MLB) and mobility robustness optimization (MRO), are discussed with references to the related literature. On the basis of this analysis and understanding, we propose a solution that accomplishes conflict avoidance in mobility-management-related SON functions. The proposed algorithm is designed around a user-specific solution approach; the main reason for this choice is the ability to manage each user attached to the network individually. In particular, the algorithm collects the network key performance indicators (KPIs) of the cell, and if it detects an anomaly in the KPIs, the SON algorithm is triggered automatically to take corrective action. Once the SON function is activated in the cell, information about the activation is sent to the neighboring cells so that they can take the necessary actions for users incoming from the SON-active cell. Meanwhile, the SON algorithm collects load and location information from every user in the SON-active cell individually and calculates a specific handover measurement offset for each of them. Accordingly, a new, unique handover control parameter (HCP) configuration is sent to each user individually. Thanks to this design approach, the algorithm can provide solutions tailored to individual users and improves the QoS. It also improves signaling overhead and handover KPIs. The performance of the algorithm is evaluated by comparing its results with MLB, MRO, and SON-disabled scenarios.
Overall, an average improvement of 23% was achieved across all KPIs. Simulation result details are also shared in the thesis. The system simulation is performed in a C++-based open-source simulation environment with a built-in fourth-generation (4G) protocol stack and handover features. To implement the proposed algorithm and the other algorithms used for comparison, we modified the source code of the simulation tool and developed extra functions in it. Finally, we obtained an end-to-end cellular simulation environment to measure the performance of the proposed algorithm. The simulation tool enables us to simulate the cellular network with different settings and to perform measurements in an environment that mimics real network conditions with an extensive feature set. This also enables us to comment confidently on simulation results that are close to real network implementations. The thesis is concluded with the simulation results and final comments. The results show that the proposed algorithm achieves better performance in terms of service continuity and mobility. They also show that the overall system throughput is distributed more evenly among the cells. As the user-based approach provides specialized solutions for each user equipment (UE) individually, it improves the system performance of both the MLB and MRO functions and eliminates the conflict problem.
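The per-user offset idea described in this entry can be sketched as follows. This is a hypothetical illustration, not the thesis's actual algorithm: the load threshold, the offset cap, and the scaling by cell load and the user's position are all assumed for the example.

```python
# Hypothetical sketch: when a cell's load KPI crosses a threshold (the
# "anomaly"), each attached user receives an individual handover measurement
# offset scaled by the overload and by user-specific load/location factors.
LOAD_THRESHOLD = 0.8     # anomaly trigger on the load KPI (assumed)
MAX_OFFSET_DB = 6.0      # cap on the measurement offset in dB (assumed)

def user_offsets(cell_load, users):
    """users: list of (user_id, edge_ratio in [0, 1], user_load share).
    Returns {user_id: offset_dB}; heavily loaded users near the cell edge
    get the largest offsets, encouraging early handover out of the cell."""
    if cell_load <= LOAD_THRESHOLD:
        return {uid: 0.0 for uid, _, _ in users}   # no SON action needed
    overload = (cell_load - LOAD_THRESHOLD) / (1 - LOAD_THRESHOLD)
    offsets = {}
    for uid, edge_ratio, user_load in users:
        offsets[uid] = round(
            MAX_OFFSET_DB * overload * edge_ratio * user_load, 2)
    return offsets

users = [("ue1", 0.9, 1.0),   # edge user with heavy traffic
         ("ue2", 0.2, 1.0),   # cell-center user with heavy traffic
         ("ue3", 0.9, 0.3)]   # edge user with light traffic
print(user_offsets(0.9, users))  # ue1 receives the largest offset
```

Sending each user its own configuration, as sketched here, is what distinguishes the user-specific approach from a single cell-wide offset.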
-
Cross-domain one-shot object detection by online fine-tuning(Graduate School, 2024-06-26) Onur, İrem Beyza ; Günsel, Bilge ; 504211318 ; Telecommunication Engineering
Object detection aims to identify and locate objects within an image or video frame. Recently, deep learning-based object detectors have made significant contributions to the community, thanks to their capability of processing and learning from large volumes of data. However, their detection performance heavily depends on large labeled datasets, which are essential for them to generalize effectively across previously unseen (novel) or rare classes. This restriction is addressed by the recent development of few-shot object detection (FSOD) and one-shot object detection (OSOD) techniques, which aim to detect novel classes using a few or a single sample of a previously unseen class, enabling rapid adaptation to novel classes without extensive labeled data. The FSOD and OSOD paradigms, which allow models to adapt to novel classes from a small sample size, have two main lines: methods based on meta-learning and those based on transfer learning. Meta-learning approaches aim to learn a generalizable set of parameters on data-abundant base classes by applying an episodic training approach. This approach is designed to transfer the base knowledge gained on data-abundant base classes to data-scarce novel classes during the inference phase. The methods based on transfer learning focus on fine-tuning a model, which has been trained on extensive datasets, on a limited number of new examples by applying techniques such as freezing only specific layers or adaptive learning rate scheduling during the fine-tuning. In FSOD and OSOD techniques, the model is trained on data-abundant base classes and then fine-tuned on both the base and novel classes or on only the novel classes.
This methodology makes them well-suited for use with still images, where the task is to recognize objects based on limited data. Although FSOD and OSOD methods enable quick adaptation to novel objects, their performance highly depends on the training domain, typically composed of still images. Due to the domain shift, the conventional setup yields significant performance degradation in one-shot or few-shot object detection under cross-domain evaluations. Moreover, most recent studies remain limited to performance gaps between different image domains, rather than those between image and video domains. Differing from the existing work, this thesis particularly focuses on the significant performance gap observed in cross-domain evaluations from the still image domain to the video domain, which has been largely overlooked in the literature. In video domain evaluations, the main purpose of the detection model is to successfully detect the target object, which is introduced at the beginning of the video sequence, within subsequent frames. In the scope of this thesis, we choose to work with OSOD models rather than FSOD models because object detection in video sequences primarily hinges on the model's ability to adapt to the target object using only a single example. Therefore, OSOD models, designed to adapt to the target object using only one example, are more suitable than FSOD models, which necessitate a few examples to adapt to the target object. In particular, OSOD models aim to classify and localize a target object in an image using its particular representation known as a query shot. This is achieved through a template-matching algorithm that detects all the instances of the class of this single query shot within the target image.
The paradigm enables the model to adapt to the specific appearance of the target object from a single sample, in contrast to FSOD where the model is designed to adapt to novel classes using a small number of samples. In addition to the scarcity of examples of the target object aimed to be detected throughout the video frames, temporal challenges, and motion changes also present a substantial challenge for OSOD methods. OSOD models' ability to handle affine motion changes and maintain temporal consistency is crucial for sufficient performance in video sequences. Moreover, the challenge OSOD brings along arises from the requirement that the model must adapt to novel classes based solely on a single sample, which demands a higher level of generalization and adaptability. The majority of the OSOD models are trained and evaluated on the same domain in which the data distributions and class characteristics are quite similar between the training and evaluation sets. Although recent studies have examined the cross-domain evaluation of OSOD models, they have primarily focused on evaluations within different still-image domains, rather than between still-image and video domains. The models are vulnerable to severe shifts in data distribution due to the domain shift between still-image and video domains. In this thesis, we aim to demonstrate and analyze the reasons behind the performance gap that OSOD models experience in cross-domain scenarios. To do this, we evaluate a state-of-the-art (SOTA) OSOD model, BHRL, which has been trained on the MS COCO dataset from the still-image domain, using the VOT-LT 2019 dataset, which presents the challenging context of the video domain. For a fair evaluation, we include only the video frames where the target object is present. 
To alleviate the performance degradation, we take BHRL as the baseline OSOD model and propose three different novel OSOD frameworks, based on integrating an online fine-tuning scheme and a query shot update mechanism into the inference architecture of BHRL, the SOTA OSOD model utilizing multi-level feature learning. During the evaluation of the proposed frameworks, the mAP0.5 metric is used and class-based reporting is performed. Classes included in the training phase were classified as base classes, while those that were not included were categorized as novel classes. In the following, the proposed OSOD frameworks are summarised. OSCDA w/o CDQSS: The initial appearance of the target object is taken as the query shot and the model is online fine-tuned only on this query shot once at the beginning of the inference phase. Subsequently, the model tries to detect all instances of the relevant class within the video frames. Although fine-tuning is a conventional approach in transfer learning, integration with the multi-level feature learning of BHRL improves the detection mAP.50 performance by 14% on all classes. OSCDA w/ CDQSS: A major limitation of OSCDA w/o CDQSS in video object detection is its vulnerability to rapid appearance changes of the target object, resulting from the risk of overfitting on the query shot that represents the target's initial appearance. To overcome this drawback, in addition to fine-tuning, CDQSS, which is an adaptive query shot selection module, is integrated into the baseline architecture. This approach enables unsupervised online fine-tuning to deal with the rapid appearance changes of the target object caused by the affine motion throughout the video frames. By CDQSS, the query shot is updated with the model's detections based on their objectness scores and localization consistency across frames. The model is continuously fine-tuned with the query shots chosen by the CDQSS during the inference phase. 
The fine-tuning process is called unsupervised fine-tuning since it is based solely on the model's detections rather than the ground truth. CDQSS provided an additional 6% improvement in mAP.50 on all classes. SACDA: In order to take advantage of extra shots without leaving the one-shot detection approach, we propose incorporating online fine-tuning into BHRL using the initial query shot and its synthetically generated variations, referred to as augmented shots. In particular, similar to OSCDA w/o CDQSS, SACDA conducts fine-tuning only once at the beginning of the inference, and then the model tries to detect all instances of the relevant class throughout the subsequent frames. SACDA aims to adapt to quick changes in the target object's appearance, such as flipped or rotated versions, without relying on the continuous fine-tuning process suggested in the second framework (OSCDA w/ CDQSS). SACDA improves BHRL's mAP.50 score by 14%, matching the improvement seen with OSCDA (w/o CDQSS). However, SACDA significantly outperforms the previous frameworks in specific sequences such as ballet, group2, and longboard, by 14%, 28%, and 46%, respectively. These sequences share challenges such as extreme changes in the scale and rotation of the target objects, as well as rapid illumination changes. Given SACDA's design goal of enhancing the OSOD model's robustness to variations in target and scene appearances, these significant gains indicate SACDA's potential effectiveness. The achieved performance improvements demonstrate the proposed methods' effectiveness in tackling the domain shift challenges faced during cross-domain evaluations for video object detection.
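The CDQSS-style query-shot update described in this entry can be sketched as a score-and-consistency filter over per-frame detections. The thresholds and the IoU-based consistency criterion below are illustrative assumptions, not BHRL's or CDQSS's actual implementation.

```python
# Hypothetical sketch: accept a detection as the next query shot only if its
# objectness score is high AND its box overlaps the previous query box enough
# (localization consistency across frames); otherwise keep the old query shot.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def update_query_shot(prev_box, detections, score_thr=0.7, iou_thr=0.3):
    """detections: list of (box, objectness). Returns the box to crop as the
    next query shot, or prev_box if no detection is trustworthy enough."""
    best, best_score = prev_box, score_thr
    for box, score in detections:
        if score >= best_score and iou(box, prev_box) >= iou_thr:
            best, best_score = box, score
    return best

prev = (10, 10, 50, 50)
dets = [((12, 11, 52, 49), 0.90),      # consistent and confident -> chosen
        ((200, 200, 240, 240), 0.95)]  # confident but inconsistent -> rejected
print(update_query_shot(prev, dets))
```

Continuously fine-tuning on query shots selected this way, rather than on ground truth, is what makes the process unsupervised.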
-
ÖgeCrowd localization and counting via deep flow maps(Graduate School, 2024-06-26) Yousefi, Pedram ; Günsel, Bilge ; 504211329 ; Telecommunications EngineeringUnderstanding the location, distribution pattern, and characteristics of crowds, along with the number of objects within a specific space, constitutes a critical subject known as crowd analysis. The analysis and monitoring of people in crowds hold paramount importance, particularly in areas such as security and management, for practical applications such as urban management, city planning, and preventing catastrophes. Over the years, numerous methods have been developed and introduced to address this challenge. Earlier methods relied on detection-based solutions, where each individual had to be detected and then counted; these faced challenges such as occlusion, which complicates the detection of individual body parts and the counting of each person, as well as high processing times. Other methods, introduced to remedy the problems of detection-based crowd counting, relied on regression-based solutions, attempting to map crowd distribution patterns to the crowd count. Regression-based methods faced problems such as occlusion and low performance in highly crowded scenarios. Both approaches could only report the total number of objects or individuals, not their locations or distribution patterns. However, with advancements in the area of deep neural networks, specifically the introduction of convolutional neural networks (CNNs), CNN-based crowd counting methods have emerged. These methods aim to find a relationship between the features extracted from the input image and the ground-truth data, depicted as a color-coded density map. This density map illustrates the distribution pattern and shape of the target objects within the scene.
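Such color-coded ground-truth density maps are produced by convolving annotated object centers with a Gaussian kernel, so that the map integrates to the object count. A minimal numpy sketch of this construction (the kernel size and sigma are illustrative choices, not the thesis's exact settings):

```python
import numpy as np

def gaussian_kernel(size=15, sigma=4.0):
    """2-D Gaussian normalized to unit mass, so each object contributes 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def density_map(shape, centers, size=15, sigma=4.0):
    """Build a ground-truth density map from point annotations (cx, cy)."""
    h, w = shape
    dmap = np.zeros((h, w))
    k = gaussian_kernel(size, sigma)
    r = size // 2
    for cx, cy in centers:
        # Paste the kernel around each center, clipping at image borders
        # (mass near the border is partially lost in this simple version).
        x1, x2 = max(cx - r, 0), min(cx + r + 1, w)
        y1, y2 = max(cy - r, 0), min(cy + r + 1, h)
        dmap[y1:y2, x1:x2] += k[r - (cy - y1): r + (y2 - cy),
                                r - (cx - x1): r + (x2 - cx)]
    return dmap
```

Because each Gaussian has unit mass, summing the map recovers the annotated count, which is what density-based counting networks regress toward.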
Ground-truth density maps are generated by convolving object center coordinates with a Gaussian kernel, effectively encoding the average object sizes and the distances between the objects. This approach allows not only the counting of objects but also the visualization of their distribution patterns. In recent years, many density-based crowd counting networks have been developed and introduced, differing in their accuracy and network architecture. Most of these networks work with single images in the spatial domain; however, a limited number of density-based networks that operate in the temporal domain with video frames have been introduced. The network used in the current research study, named CANnet2s, is among the video-based deep neural networks using density estimation techniques. Aside from extracting features, this network estimates the flow of objects within a pair of video frames at the pixel level, within small image areas called "grids." Displacements of objects to or from these grids are estimated, resulting in the generation of flow maps (maps of objects moving in a certain direction). This process results in the creation of ten flow maps for ten possible directions. The density maps are then generated by combining these flow maps, and the total crowd count is estimated from these combined maps. The CANnet2s network was originally developed for people crowd counting. Therefore, the initial phase of this study investigates the network's performance on people crowds by conducting experiments on different datasets such as FDST, ShanghaiTech, and JHU-Crowds++. However, motivated by recent developments and the increased usage of autonomous vehicles, the second phase of the study focuses on adapting this network to the domain of vehicle crowd counting and estimation. This phase of the study begins with experiments using the TRANCOS cars dataset, which includes traffic jam images.
However, due to limitations in the quality of images and camera positions in this dataset, the comprehensive WAYMO dataset is employed. This dataset includes high-quality real-life video sequences recorded from the point of view of the vehicle driver, making it ideal for autonomous driving purposes. A subset of this dataset, comprising 140 video segments (approximately 28,000 video frames), is annotated and prepared for training and testing of the network, where 25 segments are used for training and the remaining segments are employed for testing. Due to the pioneering nature of this study and the scarcity of related studies in the field of vehicle counting utilizing the WAYMO dataset, the still-image-based counterpart of CANnet2s, the CANnet network, is also trained and tested for comparative analysis. Throughout this research, CANnet2s consistently demonstrated superior performance. It exhibited a smaller mean absolute error (MAE) of 5.46 compared to CANnet, which had an MAE of 7.74, despite being trained for fewer epochs (150 epochs compared to CANnet's 500). Additionally, CANnet2s showed a 3 dB increase in peak signal-to-noise ratio (PSNR) compared to CANnet, resulting in density maps with higher levels of detail and enhanced quality. In the second phase of this research, WAYMO dataset segments are meticulously labeled and categorized based on various scene characteristics and features, including weather conditions and vehicle crowds. Attribute-based network performance reports are then generated, highlighting the efficacy of CANnet2s, particularly in challenging scenarios. Once again, CANnet2s demonstrated its superiority, reaffirming its effectiveness across diverse conditions and environments. To further boost the performance of CANnet2s, transfer learning techniques are employed. A pre-trained model from the TRANCOS cars dataset served as the baseline for training the CANnet2s network with the WAYMO dataset.
This approach halved the required training time, achieving the desired network performance after just 35 epochs of training. The outcome was an enhancement in network performance in terms of MAE, particularly evident in one of the most challenging segments of the WAYMO dataset, depicting a blurry, highly occluded scene, where the MAE decreased by 98 percent and the output density maps closely mirrored the ground-truth data. Furthermore, the study examines the impact of modifications to the CANnet2s architecture and network elements on performance by experimenting with different kernel sizes and investigating the effect of input video frame dimensions on processing time. Through kernel modification, specifically by adjusting the kernel sizes of the pyramid pooling section of the CANnet2s architecture, the network's performance on the TRANCOS dataset improved both in learning speed and in error rate. This modification decreased the required training time from 90 epochs to 10 epochs while reducing the MAE from 2.4 to 2.1, making CANnet2s's performance on the TRANCOS dataset the second best in the benchmark table. This study also explores the feasibility of multi-object crowd estimation, with a focus on simultaneously detecting and counting both vehicles and people in video frames. This is crucial for identifying these objects as the main obstacles from the driver's viewpoint. This exploration represents the early stages of research in this area. The results of this research study show promising outcomes for the practical application of these methods in areas such as a pre-processing step in autonomous vehicles, road and urban transportation management by city authorities, and general crowd estimation purposes.
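The flow-map scheme described earlier for CANnet2s estimates, per grid cell, the density moving in each of ten directions and combines the maps into a density map whose sum gives the count. A simplified numpy sketch of that combination step (the direction layout and the appearing/disappearing channel are assumptions for illustration, not CANnet2s's exact implementation):

```python
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           (0, -1),  (0, 0),  (0, 1),
           (1, -1),  (1, 0),  (1, 1)]   # 9 move directions; map 10 = appear

def density_from_flows(flows):
    """flows: array (10, H, W); flows[k][i, j] is the density leaving cell
    (i, j) in direction k (k == 9: density newly appearing in the scene).
    Returns the reconstructed density map; its sum is the crowd count."""
    _, h, w = flows.shape
    dens = flows[9].copy()                     # objects entering the scene
    for k, (di, dj) in enumerate(OFFSETS):
        shifted = np.zeros((h, w))
        src = flows[k]
        # Density leaving (i, j) in direction (di, dj) arrives at (i+di, j+dj);
        # the slices below shift the map by (di, dj), dropping out-of-frame mass.
        shifted[max(di, 0):h + min(di, 0), max(dj, 0):w + min(dj, 0)] = \
            src[max(-di, 0):h + min(-di, 0), max(-dj, 0):w + min(-dj, 0)]
        dens += shifted
    return dens
```

The shift-and-sum structure is what enforces mass conservation between consecutive frames: density can only move to a neighbouring grid cell, stay put, or appear/disappear at the scene boundary.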
-
ÖgeDeep image prior based high resolution isar imaging for missing data case(Graduate School, 2023-06-06) Bayar, Necmettin ; Erer, Işın ; 504201334 ; Telecommunication EngineeringRadio detection and ranging, or radar for short, is a system that aims to detect the location, shape, and speed of objects, referred to as targets. Earlier radar systems were used for high-level applications such as defence systems, airplanes, air surveillance, and traffic control. Later, radar found a place in daily-life applications such as smart cars, smart home devices, vital sign detection, and many more, to satisfy the needs of human life. Basically, radar sends electromagnetic waves from its transmitter; these waves reflect from the surfaces of objects, and the receiver of the radar collects the backscattered signals for processing. In this basic way, target speed and range can be extracted by applying 1-D signal processing to the backscattered waves. Beyond the 1-D application, 2-D radar signal processing can extract the target shape in the cross-range domain. In order to generate a radar image, electromagnetic waves at different frequencies are sent to the target, and the target is observed from different angles. Frequency sweeping can be done by well-known methods such as stepped frequency or linear frequency modulation, so that signals with varying frequency can be generated by the same antenna. For moving targets, an inverse synthetic aperture is used, which exploits the relative motion of the target as the observation angle. Synthetic Aperture Radar (SAR) is the case where the radar is moving and the target is stationary, whereas in Inverse Synthetic Aperture Radar (ISAR), the radar is stationary and the target is moving. As previously noted, this manipulation of relative motion is used to generate ISAR/SAR data, and the polar format algorithm is used for polar-to-Cartesian coordinate conversion.
Later, a 2-D inverse Fourier transform can be applied to the raw data to extract a radar image, also named the Range Doppler (RD) image of the target. Despite its good imaging capability, various challenges have to be handled in ISAR imaging. Serious problems may arise during measurements, and these affect the quality of the ISAR image. One of the well-known problems is the missing data case. Undesirable interference, an external jamming signal, beam blockage, or some other technical problem may lead to missing data when receiving the backscattered electromagnetic waves reflected from the target. There is also the Compressive Sensing (CS) approach, which aims to generate radar images from fewer samples. In both cases, the conventional RD imaging method produces poor imaging results. Missing data is a common problem in many radar-related fields. In order to overcome the missing data problem, 1-D signal reconstruction algorithms such as Matching Pursuit (MP) and Basis Pursuit (BP) have been proposed. These approaches represent signals with dictionaries instead of the conventional Fourier-based superposition of sinusoids. Although they are useful, 1-D reconstruction algorithms cannot be applied directly to 2-D signals; thus, Kronecker product based solutions have been proposed to reconstruct 2-D signals with 1-D reconstruction algorithms. Such a process has a high computational cost in addition to an excessive memory requirement, so 2-D sparse signal reconstruction algorithms have been proposed. 2-D Smoothed L0 norm (2-D SL0) is the 2-D form of the 1-D Smoothed L0 norm sparse signal reconstruction algorithm, proposed to reconstruct 2-D signals with low computational cost and low memory requirements compared to 1-D signal reconstruction methods. Many successful studies have been carried out with 2-D SL0.
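The RD imaging step and the degradation caused by missing data can be illustrated with a toy numpy example: the raw backscattered matrix of an ideal point scatterer is a 2-D complex exponential whose 2-D IFFT focuses to a single peak, while zeroing some pulses (columns) spreads energy into sidelobes. The point-scatterer model here is an idealization for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
ky, kx = 10, 20                        # scatterer position in the RD image
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")

# Ideal backscattered field of a single point scatterer (2-D complex exponential)
raw = np.exp(-2j * np.pi * (ky * y + kx * x) / n)

# Conventional RD imaging: magnitude of the 2-D IFFT focuses to one peak
rd_full = np.abs(np.fft.ifft2(raw))

# Missing-data case: roughly half of the pulses (columns) are lost
mask = rng.random(n) > 0.5
raw_missing = raw * mask[np.newaxis, :]
rd_missing = np.abs(np.fft.ifft2(raw_missing))
```

With the complete data the image peak reaches its full focused value at (ky, kx); with the masked data the peak amplitude drops and the lost energy reappears as sidelobe clutter, which is exactly the degradation that the reconstruction methods discussed here aim to undo.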
There are also other methods that recover missing entries by exploiting the low-rank structure of the matrix. Go Decomposition (GoDec), Low Rank Matrix Fitting (LMAFIT), and Nuclear Norm Minimization (NNM) are used to recover missing data in many applications focused on real-valued data, so they are likewise not directly applicable to complex ISAR raw data. There is also the Augmented Lagrangian Multiplier (ALM) method for constrained optimization problems. ALM can be applied to matrix completion problems, but the primal variables of the algorithm can only be solved inexactly; thus, the Inexact Augmented Lagrangian Multiplier (IALM) has been proposed for matrix completion. Unlike the other well-known matrix completion methods, IALM can be directly applied to complex data. Recently, deep learning based approaches have become quite popular for recovering missing parts of real images. Such approaches usually require a large amount of training data, containing corrupted images as input and original images as targets, to train deep convolutional neural networks for tasks such as denoising, inpainting, and super-resolution. Previously, some studies trained deep networks to perform such tasks on ISAR images. As mentioned before, the ISAR image is generated by the traditional RD algorithm; deep learning based approaches use the amplitude of the 2-D IFFT result, thereby neglecting its imaginary part. In this study, a novel deep learning based ISAR data reconstruction method is proposed. Unlike existing studies, the proposed model uses the complex raw data instead of the conventional RD image. Deep Image Prior (DIP) is used as a deep learning model that does not require a pre-training process to complete missing entries in the input data. DIP can operate iteratively on a single occluded input, thanks to the prior imposed by the network architecture itself.
In order to reconstruct the ISAR raw data, the occluded matrix is separated into its real and imaginary parts, and the missing entries of the backscattered field matrix are completed sequentially and separately. Thus, ISAR raw data reconstruction is carried out by a deep learning model that needs no pre-training. To check the validity and robustness of the proposed model, three comparison methods are used, namely IALM, 2-D SL0, and NNM. Since NNM normally operates on real-valued data, the same real/imaginary separation process is applied to the raw ISAR data for NNM. In the experimental results, two simulated and one real ISAR data sets are tested under four different missing-data scenarios: pixel-wise, equal random missing in each column, column-wise, and compression cases. For all four scenarios, three different missing ratios of 30%, 50%, and 70% are applied to the test data. The results show that the proposed method outperforms the existing ones both visually and quantitatively.
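The real/imaginary separation described above can be sketched schematically. Here `reconstruct` is a stand-in for a completion model such as DIP, which in this study fits an untrained network with a loss evaluated only on the observed entries; the function names and the masked-loss form are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np

def masked_loss(estimate, observed, mask):
    """Loss evaluated only on observed entries (mask == 1), as in DIP-style
    fitting where missing entries carry no supervision."""
    return float(np.mean((mask * (estimate - observed)) ** 2))

def complete_complex(X_obs, mask, reconstruct):
    """X_obs: complex matrix with missing entries zeroed; mask: 1 where
    observed. `reconstruct(part, mask)` is a stand-in returning a real-valued
    completion of one part (e.g. the output of a fitted DIP network).
    The real and imaginary parts are completed separately, then recombined."""
    re = reconstruct(X_obs.real, mask)
    im = reconstruct(X_obs.imag, mask)
    return re + 1j * im
```

The key point this sketch captures is that the complex backscattered field is never converted to an amplitude image: both parts are reconstructed in the raw-data domain, after which conventional RD imaging can be applied to the completed matrix.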