LEE - Computer Engineering - Doctorate
Browsing LEE - Computer Engineering - Doctorate by author "Altılar, Deniz Turgay"
Developing a novel artificial intelligence based method for diagnosing chronic obstructive pulmonary disease (Graduate School, 2023-11-06)
Moran, İnanç ; Altılar, Deniz Turgay ; 504072504 ; Computer Engineering

Today, research on machine learning and deep learning continues intensively, owing to their success in data classification, their practical applications, and their capacity to accurately reveal the information contained in data. Since the beginning of the 21st century, deep learning in particular has produced very successful results, leaving traditional learning models behind and revolutionizing the state of the art. In this context, this thesis investigates the detection of a fatal and global disease using deep learning. The motivation of this research is to introduce the first study on automated Chronic Obstructive Pulmonary Disease (COPD) diagnosis using deep learning, together with the first annotated dataset in this field.

The primary objective and contribution of this research is the design and development of an artificial intelligence system capable of diagnosing COPD using only the patient's heart signal (electrocardiogram, ECG). In contrast to the traditional way of diagnosing COPD, which requires spirometry tests and a laborious workup in a hospital setting, the proposed system uses the classification capabilities of deep transfer learning on the patient's heart signal, which carries signs of COPD in itself and can be acquired from any modern smart device. Since the disease progresses slowly and conceals itself until the final stage, hospital visits for diagnosis are uncommon. Hence, the medical goal of this research is to detect COPD from a simple heart signal before the disease becomes incurable.

Deep transfer learning models, previously trained on a general image dataset, are transferred to carry out the automatic diagnosis of COPD by classifying image equivalents of patients' electrocardiogram signals, produced by signal-to-image transform techniques. Xception, VGG-19, InceptionResNetV2, DenseNet-121, and trained-from-scratch convolutional neural network architectures were investigated for the detection of COPD, and they are shown to reach high performance rates in classifying nearly 33,000 instances under diverse training strategies. The highest classification rate, 99%, was obtained by the Xception model.

Although machine learning and deep learning generate accurate results, these techniques were for a long time criticized as "black boxes". Recently, explainability has become a crucial issue in deep learning: despite the exceptional performance of deep learning algorithms in various tasks, it is difficult to explain their inner workings and decision-making mechanisms in an understandable way. Explainable AI methods make it possible to interpret the outcomes of an AI model and to comprehend its decision-making process. LIME and SHAP, two methods that make the results of deep learning and machine learning models interpretable, were investigated in this thesis to interpret the classifications made.

This research shows that the newly introduced COPD detection approach is effective, easily applicable, and eliminates the burden of a considerable hospital workup. It could also be put into practice as a diagnostic aid for chest disease experts by providing a deeper and faster interpretation of ECG signals.
The knowledge gained while identifying COPD from ECG signals may also aid in the early diagnosis of future diseases for which little data is currently available.
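The abstract does not include code; purely as an illustration of the signal-to-image and transfer-learning pipeline it describes, the following minimal sketch assumes a spectrogram as the signal-to-image transform and Keras' ImageNet-pretrained Xception, with dummy data and hypothetical helper names:

```python
# Illustrative sketch only: ECG -> image -> transfer learning with Xception.
# Assumptions (not from the thesis): a spectrogram as the signal-to-image
# transform, TensorFlow/Keras as the framework, dummy shapes and data.
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception

def ecg_to_image(ecg, fs=500, size=(299, 299)):
    """Turn a 1-D ECG trace into a 3-channel image via a log spectrogram."""
    _, _, sxx = spectrogram(ecg, fs=fs)
    sxx = np.log1p(sxx)                                       # compress range
    sxx = (sxx - sxx.min()) / (sxx.max() - sxx.min() + 1e-9)  # scale to [0, 1]
    img = tf.image.resize(sxx[..., None], size)               # H x W x 1
    return tf.repeat(img, 3, axis=-1)                         # H x W x 3

# ImageNet-pretrained backbone, frozen; new binary head (COPD vs. healthy).
base = Xception(weights="imagenet", include_top=False,
                input_shape=(299, 299, 3))
base.trainable = False
x = layers.GlobalAveragePooling2D()(base.output)
out = layers.Dense(1, activation="sigmoid")(x)
model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy example: one 10-second ECG sampled at 500 Hz.
ecg = np.random.randn(5000).astype("float32")
img = tf.expand_dims(ecg_to_image(ecg), 0)
print(model.predict(img).shape)  # (1, 1): predicted COPD probability
```

In an actual experiment, the frozen backbone would first be fine-tuned with model.fit on the labeled ECG images before any prediction is made.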
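Along the same lines, the LIME interpretation step might look roughly like the sketch below; it assumes the `lime` package and reuses the hypothetical `model` and `ecg_to_image` from the previous block:

```python
# Illustrative sketch only: explaining one prediction with LIME.
import numpy as np
from lime import lime_image

def predict_fn(images):
    # LIME passes a batch of H x W x 3 arrays; return class probabilities.
    p = model.predict(np.asarray(images), verbose=0)
    return np.hstack([1.0 - p, p])  # columns: P(healthy), P(COPD)

img = ecg_to_image(np.random.randn(5000).astype("float32")).numpy()

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img.astype("float64"), predict_fn, top_labels=1, num_samples=200)

# Superpixels of the ECG image that pushed the model toward its top label.
label = explanation.top_labels[0]
overlay, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)
print("highlighted pixels:", int(mask.sum()))
```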
-
Energy efficient resource management in cloud datacenters (Graduate School, 2023-07-11)
Çağlar, İlksen ; Altılar, Deniz Turgay ; 504102501 ; Computer Engineering

We propose an energy efficient resource allocation approach that integrates the Holt-Winters forecasting model to optimize energy consumption while considering performance in a cloud computing environment. The approach is based on an adaptive decision mechanism for turning machines on and off and on the detection of over-utilization. In this way, it avoids performance degradation and improves energy efficiency. The proposed model consists of three functional modules operating alongside the physical and virtual machines: a forecasting module, a workload placement module, and a monitoring module.

The forecasting module determines the required number of processing units (Nr) according to user demand. It evaluates the number of submitted workloads (NoSW), their mean execution time within the interval, and their mean CPU requirements to approximate the total processing requirement (APRtotal). These three values are forecasted separately with two methodologies, Holt-Winters (HW) and Auto-Regressive Integrated Moving Average (ARIMA). Holt-Winters gives significantly better results in terms of Mean Absolute Percentage Error (MAPE), since the time series exhibit seasonality and trend; in addition, because the measurement interval is short and the period to be forecasted is long, ARIMA is not the right choice. The future demand for processing units is calculated from these forecasts, so the forecasting module is based on the Holt-Winters methodology, with a MAPE of 8.85 (a rough sketch of this forecasting step is given below).

The workload placement module is responsible for allocating workloads to suitable VMs and for allocating those VMs to suitable servers. Based on the information received from the forecasting module, decisions about turning a server on or off and about placing incoming workloads are made in this module. The monitoring module is responsible for observing the system status at 5-minute intervals. The consolidation algorithm uses a single threshold to decide whether a server is over-utilized: if the CPU utilization ratio exceeds the predefined threshold, the server is over-utilized; otherwise, it is under load; and if the utilization equals the threshold, the server is running at the optimal utilization rate. Unlike other studies, overload detection does not trigger VM migration. Overloading is undesirable since it causes performance degradation, but it can be acceptable under some conditions. The threshold alone, however, is not a sufficient parameter for deciding the allocation of incoming workloads: besides the threshold, future demand is considered as important as the system's current state. The proposed algorithm therefore also uses the remaining execution time of a workload, the number of active servers (Na), and the required number of servers (Nr) in addition to the efficient-utilization threshold. The system can become unstable in two cases: (1) Na is greater than Nr, meaning there are under-utilized servers, which causes energy inefficiency; (2) Nr is greater than Na, in which case, if new servers are not switched on, servers become over-utilized and performance degrades. The algorithm is implemented and evaluated in CloudSim, which is commonly preferred in the literature since it provides a fair comparison between the proposed algorithm and previous approaches and is easy to adapt and extend.
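As a rough, self-contained illustration of that forecasting step (not the thesis' code: the synthetic hourly series, the daily seasonal period of 24, and the statsmodels implementation are all assumptions), consider:

```python
# Illustrative sketch only: Holt-Winters forecast of processing demand.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic two-week hourly demand with daily seasonality, trend and noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 14)
series = pd.Series(
    10 + 5 * np.sin(2 * np.pi * t / 24) + 0.01 * t
    + rng.normal(0, 0.5, t.size),
    index=pd.date_range("2023-01-01", periods=t.size, freq="h"))

train, test = series[:-24], series[-24:]  # hold out the last day
fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=24).fit()
forecast = fit.forecast(24)

# MAPE, the error measure used to compare HW against ARIMA in the thesis.
mape = float(np.mean(np.abs((test - forecast) / test)) * 100)
print(f"MAPE over the held-out day: {mape:.2f}%")
# Nr would then be derived from the forecasted totals, e.g.
# Nr = ceil(APRtotal / capacity_per_server) (exact formula not given here).
```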
However, in the simulator workloads arrive in a static manner, whereas the usage rates of real workloads vary over time; our algorithm supports dynamic submission. Therefore, to make the comparison fair, the benchmark code was modified to meet this dynamic requirement by replaying Google Cluster Data through a MongoDB integration. As described above, the forecasting module is based on Holt-Winters, so the approach is named Look-ahead Energy Efficient Allocation – Holt Winters (LAA-HW). If the actual values were known instead of the forecasted ones, the system would give the result of Look-ahead Energy Efficient Allocation – Optimal (LAA-O). The proposed model uses the Na and Nr parameters to decide the system's trend, i.e., whether the system has more active servers than required. If Na is greater than Nr, incoming workloads are allocated to already active servers. This causes a bottleneck for workloads with short execution times and small CPU requirements, such as the Google trace-log workloads, whose mean CPU requirement and mean execution time over a day are 3% and 1.13 minutes, respectively. This yields a small Nr value and therefore a smaller number of received workloads than Local Regression-Minimum Migration Time (LRMMT). The number of migrations in our approach is zero, and the energy consumed for switching servers on and off in our model is lower than that of the migration model.
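Finally, the Na/Nr placement decision described above might be sketched as follows; the threshold value, the Server model, and the tolerate-overload fallback are illustrative assumptions, and LAA-HW's use of remaining execution times is omitted:

```python
# Illustrative sketch only: threshold- and forecast-aware workload placement.
from dataclasses import dataclass

THRESHOLD = 0.8  # assumed single CPU-utilization threshold

@dataclass
class Server:
    capacity: float = 1.0
    used: float = 0.0
    def utilization(self) -> float:
        return self.used / self.capacity

def place(workload_cpu: float, active: list, nr: int) -> Server:
    """Place a workload; switch a server on only when Nr exceeds Na."""
    na = len(active)
    # Prefer an already-active server that stays at or under the threshold.
    for s in sorted(active, key=Server.utilization, reverse=True):
        if (s.used + workload_cpu) / s.capacity <= THRESHOLD:
            s.used += workload_cpu
            return s
    if nr > na:                    # forecast says more capacity is needed
        s = Server(used=workload_cpu)
        active.append(s)           # switch a new server on
        return s
    # Na >= Nr: tolerate mild overload rather than migrate VMs (no migration).
    s = min(active, key=Server.utilization)
    s.used += workload_cpu
    return s

active = [Server(used=0.7)]
print(place(0.2, active, nr=2).utilization())  # opens a second server: 0.2
```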