LEE - Computer Engineering - PhD


Recent Submissions

Now showing 1 - 5 of 45
  • Item
    An affective framework for brain computer interfaces using transfer learning in virtual environments
    (Graduate School, 2024-12-13) Sarıkaya, Mehmet Ali ; İnce, Gökhan ; 504172504 ; Computer Engineering
    The significance of emotion recognition through physiological signals has been increasingly acknowledged in the context of its applications in diverse fields such as psychology, healthcare, and human-computer interaction. Physiological signals including ElectroEncephaloGraphy (EEG), ElectroMyoGraphy (EMG), ElectroOculoGraphy (EOG), ElectroDermal Activity (EDA), Galvanic Skin Response (GSR), SKin Temperature (SKT), RESPiration (RESP), Blood Volume Pulse (BVP), Heart Rate (HR), and eye movements offer a viable alternative to facial recognition systems in Virtual Reality (VR) environments, where traditional methods fall short due to the obtrusive nature of VR headsets. This has led to a growing interest in utilizing these signals to discern the emotional states of users, thereby enhancing their interaction within virtual environments. Despite the promising prospects of physiological signal-based emotion recognition, there are considerable challenges associated with the development of affective systems. One major issue is the high variability of these signals, which can be influenced by individual-specific factors such as mood and stress levels. This variability necessitates the collection of large amounts of data, which is both time-consuming and costly, making the process tedious and inefficient. Furthermore, psychological patterns are known to be transient, leading to a decline in the performance of classifiers over time and necessitating frequent recalibrations. To mitigate these issues, this thesis proposes the adoption of transfer learning strategies, which have been successful in other domains such as image recognition. Transfer learning allows for the leveraging of pre-existing models and datasets, thereby reducing the need for extensive new data collection and enabling the adaptation of models to new tasks with minimal additional training. This approach not only saves time but also enhances the accuracy and efficiency of emotion recognition systems. 
One of the focal points of this thesis is the calibration process in current Brain-Computer Interface (BCI) systems, particularly those based on EEG. These systems typically require long calibration times, as they depend heavily on data accumulated across numerous training sessions. This thesis argues for the development of adaptive algorithms that can significantly cut down the calibration time, thereby making BCI systems more practical and accessible for real-world applications. The thesis discusses the distinction between subject-specific and subject-independent models in emotion recognition. Subject-specific models, while offering high accuracy, tend to overfit to limited data, which can severely restrict their generalization capabilities. On the other hand, subject-independent models, which are designed to be more general, often fail to capture individual nuances that are crucial for personalized emotion recognition. This dichotomy underscores the challenges inherent in using subject-specific data alone to model complex emotion-recognition mechanisms. The need for specialized algorithms that can handle the unique dynamics of 3D immersive virtual environments is another critical area addressed in this thesis. Traditional 2D emotion recognition systems do not provide the sense of immersion, presence, and depth that are integral to VR applications, necessitating the development of algorithms that are specifically tailored for these environments. In response to these challenges, the thesis introduces a novel Heterogeneous Adversarial Transfer Learning (HATL) module, designed to synthesize EEG data from multimodal non-EEG inputs. This module significantly reduces the calibration durations and enhances the adaptability and performance of the system across different VR settings, paving the way for more agile and responsive emotion recognition systems. 
Concurrently, the thesis implements a Knowledge Distillation (KD) strategy to effectively amalgamate and utilize multimodal data. This approach significantly improves the accuracy and generalization capabilities of emotion recognition models, making them suitable for both subject-specific and subject-independent applications. By leveraging the strengths of both EEG and non-EEG data, the KD method facilitates a deeper understanding of emotional states, transcending individual variances. The novel framework proposed in this thesis integrates the HATL and KD modules to optimally address the dual needs of rapid calibration in subject-specific scenarios and enhanced model generalizability in subject-independent applications. This dual-module setup is a core component of the thesis and represents a significant advancement in the field of emotion recognition. The efficacy of this framework is demonstrated through extensive empirical testing, which confirms that the models not only perform well in controlled environments but also adapt effectively to real-world VR scenarios. These results are crucial for applications that require rapid and precise emotion assessments, such as personalized therapeutic interventions and adaptive educational systems. The integration of adversarial learning and knowledge distillation in a unified framework has the potential to revolutionize emotion recognition technology, especially in VR environments. The ability to quickly and accurately assess emotional states in VR enhances user interaction and system responsiveness, making it applicable across a broad range of practical scenarios. Furthermore, this thesis provides a comprehensive analysis of the effectiveness of the proposed models in both 2D and 3D environments. By conducting extensive comparisons, it establishes the superior performance of these models in immersive VR settings compared to traditional 2D setups. 
This analysis not only validates the effectiveness of the proposed approaches but also highlights their potential to bridge the gap between traditional emotion recognition methods and the requirements of immersive VR technologies. In conclusion, the thesis presents a robust and adaptable framework that sets a new benchmark for the practical application of BCIs in immersive virtual environments. By addressing the limitations of current systems and harnessing the capabilities of advanced machine learning techniques, the proposed framework significantly advances the field of emotion recognition. This comprehensive approach not only overcomes the challenges posed by high variability and transient psychological patterns but also opens new avenues for future research and development in this domain. The thesis concludes with a discussion of the implications of these findings for future research, suggesting areas where further advancements in technology and methodology could enhance the robustness and applicability of emotion recognition systems. The potential for integrating these systems with other technologies, such as mixed reality, provides a vision for a more interconnected and responsive future in human-computer interaction. The contributions of this thesis are expected to have a lasting impact on the field of emotion recognition, particularly in the context of VR. By improving both the theoretical understanding and practical applications of BCIs in virtual environments, this work paves the way for more personalized and immersive user experiences. The proposed models offer a promising direction for future research, with the potential to further refine and expand the capabilities of emotion recognition systems for a wide range of applications.
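The Knowledge Distillation strategy described in this abstract can be illustrated with a minimal, framework-agnostic sketch of the standard KD objective: a temperature-softened soft-target term from the teacher blended with the hard-label cross-entropy. This is not the thesis's implementation; the function names, hyperparameters, and NumPy setup are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD loss: alpha-weighted KL(teacher || student) at temperature T
    (scaled by T^2, as is conventional) plus (1 - alpha) hard-label cross-entropy."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * hard))
```

When the student matches the teacher exactly, the KL term vanishes and only the hard-label term remains, which is why a confidently correct student scores much lower than a confidently wrong one.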
  • Item
    Dependency parsing in Turkish with deep learning methods
    (Graduate School, 2023-09-13) Altıntaş, Mühacit ; Tantuğ, A. Cüneyd ; 504142518 ; Computer Engineering
    With the growth of human-machine interaction, the need for tools that understand, interpret, and generate natural language has also grown. The aim of syntactic analysis is to determine the relations among the constituents of a sentence by examining their structural or morphological relationships; this determination is of great importance for the semantic analysis of the sentence. Dependency parsing is one approach to syntactic analysis. It can be realized with hand-written grammar rules, or by extracting patterns from data. Turkish is an agglutinative language of the Ural-Altaic family. In languages of this family, suffixes act as a kind of mortar between word stems, binding the sentence together; because semantic relations are established through suffixes, these languages have flexible word order. In languages with flexible word order, data-driven methods yield better results for dependency parsing, since the number of context-dependent rules can be very large or ambiguous. The literature offers two main data-driven approaches to dependency parsing: transition-based and graph-based. Transition-based approaches build the dependency tree step by step, scoring possible actions based on features derived from the parser's current configuration. Graph-based approaches, on the other hand, score every possible dependency between words and search for the highest-scoring dependency tree. While graph-based techniques address the problem directly, transition-based methods use indirect solutions and may therefore require more steps. Transition-based parsing considers the current configuration and previous transitions at each step, and can in particular exploit rich feature representations based on previous actions.
Transition-based parsers are fast and efficient thanks to their greedy decision making, but they sacrifice accuracy due to error propagation. Graph-based dependency parsing approaches, by contrast, can perform better than transition-based methods because they are not subject to error propagation, although their feature space is more limited. Recent transition-based parsing studies have focused on improving learning and inference performance, while graph-based studies have concentrated on widening feature coverage. In this study, the fundamentals of dependency parsing are explained with figures and mathematical formulations. The syntactic properties of Turkish and previous work are reviewed; leading studies are examined and their critical details noted. Furthermore, the datasets used for dependency parsing are introduced, and the importance of each feature for dependency parsing is examined. Non-projective dependency rates are derived for Turkish and other major languages, the relation pairs that break projectivity in Turkish are identified, and example sentences containing non-projective dependencies are given from the treebanks. Various dependency parsing models are developed using leading deep neural network methods, and their performance is evaluated. The contributions of subword features such as characters, syllables, and wordpieces to parsing performance are examined. The contribution of wordpiece-based word representations, found beneficial for Turkish, is also reported for other inflected languages: a positive effect of wordpieces is observed for Finnish, Hungarian, Indonesian, Japanese, Korean, and Uyghur in addition to Turkish.
Conditional random fields and biaffine classifiers are compared, and ensemble learning is used to exploit the strengths of different classifiers. A dependency parser is designed that is insensitive to error propagation and imbalanced data and can resolve non-projective dependencies. The feature space of graph-based dependency parsers is widened, inspired by the information sources the human brain uses when synthesizing sentences: a sentence representation carrying general semantic information is used as an additional feature, and convolutional neural network layers are used to capture local word collaborations, increasing the representational capacity of subtree structures. The results show that the proposed improvements increase dependency parsing performance. The recently released Turkish KeNet, Turkish Penn, Turkish GB, and Turkish Tourism treebanks are used for the first time in this study to develop a dependency parser, and the dependency parsing scores obtained on these treebanks are reported. With the dependency parser developed in this study, the best dependency parsing results reported for Turkish to date are obtained: 82.64% UAS and 76.35% LAS. In addition, labeled attachment scores (LAS) of 91.34%, 87.39%, 89.58%, 92.85%, and 88.38% are obtained for English, Hungarian, Korean, Finnish, and Estonian, respectively, surpassing the LAS values previously reported in the literature for these languages.
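The UAS and LAS figures quoted above follow the standard attachment-score definitions: the fraction of tokens with the correct head, and the fraction with both the correct head and the correct relation label. A minimal sketch of the computation (the function name and data layout are illustrative, not taken from the thesis):

```python
def uas_las(gold, pred):
    """Unlabeled/labeled attachment scores over one parsed sentence.

    gold, pred: lists of (head_index, relation_label) pairs, one per token.
    Returns (UAS, LAS) as percentages.
    """
    assert len(gold) == len(pred)
    head_hits = sum(g[0] == p[0] for g, p in zip(gold, pred))   # head only
    label_hits = sum(g == p for g, p in zip(gold, pred))        # head + label
    n = len(gold)
    return 100.0 * head_hits / n, 100.0 * label_hits / n
```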
  • Item
    Energy efficient resource management in cloud datacenters
    (Graduate School, 2023-07-11) Çağlar, İlksen ; Altılar, Deniz Turgay ; 504102501 ; Computer Engineering
    We propose an energy-efficient resource allocation approach that integrates the Holt-Winters forecasting model to optimize energy consumption while considering performance in a cloud computing environment. The approach is based on an adaptive decision mechanism for turning machines on and off and on the detection of over-utilization; in this way, it avoids performance degradation and improves energy efficiency. The proposed model consists of three functional modules: a forecasting module, a workload placement module along with physical and virtual machines, and a monitoring module. The forecasting module determines the required number of processing units (Nr) according to user demand. It evaluates the number of submitted workloads (NoSW), their mean execution time within the interval, and their mean CPU requirement to approximate the total processing requirement (APRtotal). These three values are forecast separately with two methodologies, Holt-Winters (HW) and Auto-Regressive Integrated Moving Average (ARIMA). Holt-Winters gives significantly better results in terms of Mean Absolute Percentage Error (MAPE), since the time series contains both seasonality and a trend; moreover, because the sampling interval is short and the period to be forecast is long, ARIMA is not the right choice. The future demand for processing units is calculated from these data, and the forecasting module is therefore based on the Holt-Winters methodology, with a MAPE of 8.85. The workload placement module is responsible for allocating workloads to suitable VMs and those VMs to suitable servers; based on the information received from the forecasting module, decisions about turning servers on and off and about placing incoming workloads are made in this module. The monitoring module is responsible for observing the system status over 5-minute intervals.
The consolidation algorithm uses a single threshold to decide whether a server is over-utilized: if the CPU utilization ratio exceeds the predefined threshold, the server is over-utilized; otherwise it is under-loaded; and if the utilization equals the threshold, the server is running at the optimal utilization rate. Unlike other studies, overload detection does not trigger VM migration. Overloading is undesirable since it causes performance degradation, but it can be acceptable under some conditions. For deciding the allocation of incoming workloads, this threshold is neither the only parameter nor sufficient on its own; besides the threshold, future demand is considered as important as the system's current state. The proposed algorithm also uses additional parameters, such as the remaining execution time of a workload, the number of active servers (Na), and the required number of servers (Nr), alongside the efficient-utilization threshold. The system can become unstable in two cases: (1) Na is greater than Nr, meaning there are under-utilized servers, which causes energy inefficiency; (2) Nr is greater than Na, in which case, if new servers are not switched on, servers become over-utilized and performance degrades. The algorithm is implemented and evaluated in CloudSim, which is commonly preferred in the literature since it provides a fair comparison between the proposed algorithm and previous approaches and is easy to adapt and extend. However, in CloudSim workloads arrive in a static manner, whereas workload utilization varies over time; our algorithm supports dynamic submission. Therefore, to make a fair comparison, the benchmark code was modified to meet this dynamic requirement by feeding in Google Cluster Data through a MongoDB integration. The forecasting module is based on Holt-Winters as described above; the approach is therefore named Look-ahead Energy Efficient Allocation - Holt Winters (LAA-HW).
If the actual values were known instead of the forecast values, the system would give the result of Look-ahead Energy Efficient Allocation - Optimal (LAA-O). The proposed model uses the Na and Nr parameters to determine the system's trend, i.e., whether the system has more active servers than required. If Na is greater than Nr, incoming workloads are allocated to already-active servers. This causes a bottleneck for workloads with short execution times and low CPU requirements, such as the Google trace-log workloads: their daily mean CPU requirement and mean execution time are 3% and 1.13 min, respectively. This yields a small Nr value, so fewer workloads are admitted than with Local Regression-Minimum Migration Time (LRMMT). The number of migrations is zero in our approach, and the energy consumed by switching servers on and off in our model is lower than that of the migration-based model.
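The forecasting module above relies on Holt-Winters triple exponential smoothing, which handles exactly the seasonality-plus-trend pattern the abstract describes. The following is a hedged sketch of the additive variant; the initialisation scheme and smoothing constants are illustrative assumptions, not the thesis's configuration.

```python
def holt_winters_additive(series, season_len, alpha=0.5, beta=0.3, gamma=0.2, horizon=1):
    """Additive Holt-Winters (triple exponential smoothing) forecast sketch.

    series: observed values, at least two full seasons long.
    Returns `horizon` forecasts past the end of the series.
    """
    m = season_len
    # Crude initialisation: level = mean of the first season; trend = average
    # per-step change between seasons 1 and 2; seasonal = deviations from level.
    level = sum(series[:m]) / m
    trend = (sum(series[m:2 * m]) - sum(series[:m])) / (m * m)
    seasonal = [series[i] - level for i in range(m)]
    for i, x in enumerate(series):
        s = seasonal[i % m]
        last_level = level
        level = alpha * (x - s) + (1 - alpha) * (level + trend)   # smooth level
        trend = beta * (level - last_level) + (1 - beta) * trend  # smooth trend
        seasonal[i % m] = gamma * (x - level) + (1 - gamma) * s   # smooth season
    return [level + (h + 1) * trend + seasonal[(len(series) + h) % m]
            for h in range(horizon)]
```

On a perfectly periodic series the seasonal components absorb the cycle and the forecast simply continues the pattern, which makes the method well suited to the daily/weekly demand cycles of a datacenter.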
  • Item
    Novel centrality, topology and hierarchical-aware link prediction in dynamic networks
    (Graduate School, 2023-09-05) Sserwadda, Abubakhari ; Yaslan, Yusuf ; Özcan, Alper ; 504182516 ; Computer Engineering
    The increasing availability of social network data has given rise to research devoted to solving problems associated with social-network applications. However, the scale and complexity of the relationships among social network elements make predicting links between entities a challenging task. Previous research often focuses primarily on local node-connectivity data while ignoring other important network-characterizing properties; the key properties that are often underrated include network topology, node structural centrality roles, and network hierarchical information. Furthermore, whereas many real-world graphs change over time, several works assume static networks. To overcome these challenges, we first compute several topological-similarity-based convolution feature matrices using various topological similarity metrics such as Common Neighbour, Jaccard Index, Adamic-Adar, Salton Index, Resource Allocation, and Sørensen Index, and then use the resulting topological feature matrices to capture the prevailing topological information in the input graphs efficiently. Second, we leverage strength centrality, a stronger variant of node degree, to conserve the nodes' centrality and the structural connectivity information in the network, and we systematically aggregate these diverse features to yield higher-level feature representations. Lastly, we leverage an LSTM layer to capture the prevailing temporal information in the graph sequences. To learn low-dimensional node representations, we first deployed a fully connected variational autoencoder that efficiently explores variations in the input graphs to learn high-quality node embeddings. Furthermore, we imposed centrality and topological constraints on the learning model to further enforce the preservation of the centrality and topological information of the input graphs in the learned embeddings.
However, variational autoencoders have large computational-time and memory requirements due to the large number of parameters in their fully connected encoders and decoders, especially when applied to large networks. To extend our implementations to large datasets while minimizing these requirements, we adopted a Graph Convolution Network (GCN)-based implementation. The proposed structural- and topology-based geometric deep learning approach was evaluated on five real-world temporal social networks. The experimental results show that, on average, it yields a 4% improvement in link-prediction AUC, a small increase in per-epoch training time (0.2 s, about 10%), and a 56% reduction in centrality-prediction MSE compared to the best benchmarks. The proposed end-to-end centrality- and topology-guided link-prediction framework for dynamic networks not only preserves the centrality roles of nodes and the topological information in the learned embeddings but also captures the prevailing temporal information in the dynamic networks. The models use node centrality and topological features to capture and conserve the network topology and the structural roles of nodes during embedding learning, thus obtaining high-quality embeddings that improve both link-prediction and centrality-prediction accuracy. For all of our proposed methods, we assess the impact of the various modules by comparing the models with variants that lack those modules, and we present and explain the results accordingly. In other related work, we introduce a Hierarchical and Centrality-aware Polypharmacy Side Effect Prediction (HC-POSE) model. We model side-effect prediction as a link-prediction problem and leverage core decomposition to explore the prevailing hierarchical information in the heterogeneous protein-protein, protein-drug, and drug-drug interaction datasets.
Following k-core decomposition, a node-strength matrix is computed for each k-core subgraph to store that subgraph's centrality information. We then systematically aggregate the obtained centrality with the k-core adjacency matrix to obtain higher-level, diverse feature representations. We deployed a GCN-based autoencoder to learn low-dimensional representations for the homogeneous subgraphs and an RGCN-based autoencoder for the heterogeneous subgraphs. The experimental results show that HC-POSE achieves a 3% accuracy improvement in POSE prediction compared to the best baseline.
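Several of the topological similarity metrics listed in this abstract (common neighbours, Jaccard, Adamic-Adar) are computed directly from neighbour-set overlap. A small illustrative sketch on an adjacency-set graph representation (an assumption for clarity, not the thesis's code):

```python
import math

def neighbour_scores(adj, u, v):
    """Link-prediction scores from neighbourhood overlap for a node pair (u, v).

    adj: dict mapping node -> set of neighbours (undirected graph).
    Returns (common-neighbour count, Jaccard index, Adamic-Adar score).
    """
    common = adj[u] & adj[v]
    union = adj[u] | adj[v]
    jaccard = len(common) / len(union) if union else 0.0
    # Adamic-Adar weights each shared neighbour by the inverse log of its degree,
    # so rare shared neighbours count more; degree-1 nodes are skipped (log 1 = 0).
    adamic_adar = sum(1.0 / math.log(len(adj[w])) for w in common if len(adj[w]) > 1)
    return len(common), jaccard, adamic_adar
```

In the approach described above, such per-pair scores are assembled into full similarity feature matrices that feed the convolutional layers.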
  • Item
    RL-based network deployment and resource management solutions for opportunistic wireless access for aerial networks in disaster areas and smart city applications
    (Graduate School, 2023-08-09) Ariman, Mehmet ; Canberk, Berk ; 504162503 ; Computer Engineering
    The growth of mobile communication has changed the data-traffic profile, and the requirements on the deployed infrastructure have changed significantly with it. Increased available bandwidth and the IP transformation of the mobile backend raised peak-traffic requirements, while the mobility of users changed the required infrastructure over time. The commercial availability of unmanned aerial vehicles can potentially address these changing infrastructure requirements; however, their three-dimensional mobility and the operating-range limits imposed by limited battery capacity introduce new problems. Topology control is a significant problem for unmanned-aerial-vehicle networks: optimizing the network size for coverage is equivalent to the minimum set-cover problem, which is NP-hard even without the service-level agreements enforced within communication networks. Tailor-made solution sets hinder the scalability of aerial networks, since each new application target incurs the full development cost again. Reinforcement learning provides an ideal way to address the requirements of multiple applications with a single development effort, with the integration cost depending on the availability of training data. To this end, reinforcement learning is integrated into a central software-defined-networking-based control entity to demonstrate the deployment cycle of the aerial network, and the solution's effectiveness is demonstrated by comparing its quality-of-service, coverage, and power-consumption results with the existing literature. Furthermore, the application area of reinforcement learning is extended to wireless channel selection to address the physical-resource assignment problem; the main development cost of this model has been the availability of data.
The integration of the new application is demonstrated in the simulation tool to measure this cost, and a smart-city application for the aerial network in a distributed architecture is simulated with the same implementation. Overall, this thesis surveys the existing literature on the challenges of aerial networks, develops a reinforcement-learning integration tool in simulation form, and implements the disaster-area and smart-city applications to measure the applicability of the hypothesis. The comparison results reveal that reinforcement-learning-based aerial-network topology control provides scalable power-consumption performance while satisfying the network's quality-of-service and coverage requirements. In addition, the improvements in physical-resource allocation for opportunistic access to the wireless medium are demonstrated in the wireless-channel-selection deployment for the smart-city application.
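The abstract notes that sizing the network for coverage is equivalent to minimum set cover, which is NP-hard; the classical greedy heuristic gives a ln(n)-approximation and makes the problem concrete. The sketch below is illustrative only (it is not the thesis's RL approach, and the framing of candidate UAV positions covering ground points is an assumption):

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation to minimum set cover: repeatedly pick the subset
    covering the most still-uncovered elements.

    universe: set of points to cover (e.g. ground users needing coverage).
    subsets: dict mapping a candidate position -> set of points it covers.
    Returns the list of chosen positions, or None if full coverage is impossible.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the candidate with the largest marginal coverage gain
        best = max(subsets, key=lambda k: len(subsets[k] & uncovered))
        gain = subsets[best] & uncovered
        if not gain:
            return None  # remaining points cannot be covered by any candidate
        chosen.append(best)
        uncovered -= gain
    return chosen
```

Service-level agreements would add constraints on top of this plain coverage objective, which is part of what motivates the learned, rather than hand-crafted, topology control described above.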