LEE - Computer Engineering - Doctorate
Item: Measuring and evaluating the maintainability of microservices (Graduate School, 2024-09-03). Yılmaz, Rahime ; Buzluca, Feza ; 504172519 ; Computer Engineering

Microservice Architecture (MSA) is a popular architectural style that emphasizes decomposing monolithic applications into independent, modular functional services. This architectural approach provides several benefits, including maintainability and scalability, making large and complex software systems more manageable and flexible. Developing a system as a set of microservices that delivers these expected benefits requires a quality assessment strategy based on measurements of the system's properties. This thesis proposes two methods for predicting the maintainability level of microservices: one rule-based and one learning-based. The rule-based evaluation employs a fuzzy logic-based hierarchical quality model, whereas the learning-based evaluation utilizes deep learning techniques for quality assessment. The thesis thus provides a software quality model for the specification and evaluation of the quality characteristic maintainability, together with a new approach to predicting low maintainability levels of microservices.

The first part of the research demonstrates the potential of fuzzy logic-based systems for microservice quality assessment, particularly for predicting maintainability during software development. Since the qualitative bounds of low-level quality attributes are inherently ambiguous, we use a fuzzification technique to transform crisp code-metric values into fuzzy levels and apply them as inputs to our quality model. The model generates fuzzy values for the sub-characteristics of maintainability, i.e., modifiability and testability, which are converted to numerical values through defuzzification. In the last step, using the values of these sub-characteristics, we calculate a numerical score indicating the maintainability level of each microservice in the examined software system. This score is used to assess the quality of the microservices and to decide whether they need refactoring. To evaluate our approach, we created a test set with the assistance of three developers, who reviewed and categorized the maintainability levels of the microservices in an open-source project based on their knowledge and experience. They labeled microservices as low, medium, or high, with low indicating the need for refactoring. Our method achieved 94% accuracy, 78% precision, and 100% recall in identifying the low-labeled microservices in this test set. These results indicate that the approach can assist designers in evaluating the maintainability quality of microservices.
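To make the fuzzify / rule-evaluation / defuzzify flow of the first study concrete, the following is a minimal sketch under stated assumptions: the metric names (coupling, cohesion), the triangular membership breakpoints, the rules, and the output anchors are all illustrative stand-ins, not the calibrated quality model of the thesis.

```python
# Hypothetical sketch of fuzzification, rule evaluation, and defuzzification.
# All metric names, breakpoints, and rules below are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    rise = (x - a) / (b - a) if b != a else 1.0
    fall = (c - x) / (c - b) if c != b else 1.0
    return max(min(rise, fall), 0.0)

def fuzzify_metric(value, low, mid, high):
    """Map a crisp code-metric value to fuzzy low/medium/high degrees."""
    return {"low": tri(value, *low), "medium": tri(value, *mid), "high": tri(value, *high)}

# Assumed breakpoints for two example metrics of one microservice.
coupling = fuzzify_metric(12.0, low=(0, 0, 8), mid=(4, 10, 16), high=(12, 20, 20))
cohesion = fuzzify_metric(0.4, low=(0, 0, 0.4), mid=(0.2, 0.5, 0.8), high=(0.6, 1, 1))

# Illustrative rules for the sub-characteristic "modifiability", e.g.
# IF coupling is low AND cohesion is high THEN modifiability is high.
modifiability = {
    "high":   min(coupling["low"], cohesion["high"]),
    "medium": min(coupling["medium"], cohesion["medium"]),
    "low":    min(coupling["high"], cohesion["low"]),
}

# Simplified centroid-style defuzzification over assumed output anchors.
anchors = {"low": 0.2, "medium": 0.5, "high": 0.8}
total = sum(modifiability.values()) or 1.0
score = sum(anchors[k] * v for k, v in modifiability.items()) / total
print(f"modifiability score ~ {score:.2f}")  # one input to the maintainability level
```

In the hierarchical model described above, a score like this would be computed per sub-characteristic (modifiability, testability) and then combined into the final maintainability score of each microservice.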
The second part of the research presents a learning-based solution to the problem addressed in the first study, along with the experiments conducted to evaluate it. In this study, we developed a learning-based evaluation method that employs transfer learning as a novel approach to assessing microservice quality, again focusing on maintainability. As in the first study, this approach classifies the maintainability of microservices into the same three categories: low, medium, and high, with low indicating the need for refactoring. The maintainability level is assessed with transfer learning, a deep learning technique, by feeding source-code metric values of open-source microservice projects as inputs and obtaining the predictions directly from the trained model.

The proposed transfer learning method aims to accurately identify low-quality microservices by assessing their maintainability level. It involves a series of structured steps: collecting code metrics of microservices, eliminating outliers, augmenting and balancing the dataset, and applying supervised learning techniques. These steps yield a predictive model, which we then tested on test sets labeled by human evaluators. For validation, we used 5-fold stratified cross-validation to maintain the original dataset's class ratios within each fold and to ensure an unbiased evaluation at the end of training. In each fold, we first set aside a test set and used the remaining data as the training set; this procedure was repeated so that each subset served as the test set once. After isolating the test set, we augmented the training data to increase its size and diversity for the pretraining phase of the transfer learning process. The model was then fine-tuned on the training data, which was oversampled to address class imbalance. Finally, the model's generalization capability was assessed on the isolated test set. With this procedure, the proposed method achieved a 69.67% F1 score for predicting microservices requiring refactoring in the three-class categorization on unseen test data from open-source projects. Although this accuracy is not yet optimal, it is a promising outcome, particularly given the limited number of low-labeled samples in the test data. These findings demonstrate that learning-based evaluation holds potential for assessing microservice quality and predicting the need for refactoring. However, the lack of sufficient test data affected overall performance; to improve the results and evaluate the model more objectively, further data collection is necessary. This initial experiment provides a strong foundation for future advances in software quality assessment within the MSA and motivates continued exploration and refinement of the methodology.

In summary, this research addresses emerging challenges related to microservice architecture by measuring maintainability as a key quality characteristic. It proposes an extensive quality assessment designed to enhance quality assurance practices for MSA-based applications, thereby making a significant contribution to the field of software engineering and to the development of more sustainable and robust software systems. By providing valuable insights, the proposed approaches have great potential to assist software engineers in making informed decisions regarding maintenance and refactoring activities. As software engineering continues to evolve, these methodologies and insights could serve as fundamental guides for the development and maintenance of microservices, supporting future advances in the field. We also conclude that systematic quality assessment is essential for ensuring the long-term functionality and performance of software systems, which highlights the need for ongoing innovation and adaptation in software engineering practices.
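As a rough illustration of the validation procedure described in the second study (stratified 5-fold cross-validation, augmentation for pretraining, oversampling before fine-tuning), the sketch below uses synthetic stand-in data and a small scikit-learn network. The jitter-based augmentation, the random oversampler, and the extra partial_fit passes standing in for fine-tuning are assumptions for illustration, not the actual data, model, or pipeline of the thesis.

```python
# Hypothetical sketch: 5-fold stratified CV with augmentation applied only to
# the pretraining data and random oversampling before fine-tuning.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                            # stand-in code-metric vectors
y = rng.choice([0, 1, 2], size=300, p=[0.2, 0.5, 0.3])   # low / medium / high labels

def augment(X, y, copies=2, noise=0.05):
    """Naive jitter augmentation (an assumption, for illustration only)."""
    Xs = [X] + [X + rng.normal(scale=noise, size=X.shape) for _ in range(copies)]
    return np.vstack(Xs), np.tile(y, copies + 1)

def oversample(X, y):
    """Random oversampling of minority classes to balance the training set."""
    counts = np.bincount(y)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), counts.max(), replace=True)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    X_tr, y_tr = X[train_idx], y[train_idx]    # test fold stays untouched
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300,
                          random_state=0)
    model.fit(*augment(X_tr, y_tr))            # "pretraining" on augmented data
    X_bal, y_bal = oversample(X_tr, y_tr)      # balance, then fine-tune
    for _ in range(20):                        # extra passes stand in for fine-tuning
        model.partial_fit(X_bal, y_bal)
    scores.append(f1_score(y[test_idx], model.predict(X[test_idx]),
                           average="macro"))
print(f"mean macro-F1 across folds: {np.mean(scores):.3f}")
```

The key design point mirrored from the text is that the test fold is isolated before any augmentation or oversampling, so that the reported score reflects performance on data the model never saw in any form during training.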