LEE - Computer Engineering - PhD
Recent Submissions
-
Hybridization of probabilistic graphical models and metaheuristics for handling dynamism and uncertainty (Graduate School, 2021-06-30)

Solving stochastic, complex combinatorial optimisation problems remains one of the most significant research challenges; such problems cannot be adequately addressed by deterministic methods, nor even by some metaheuristics. Today's real-life problems in a broad range of application domains, from engineering to neuroimaging, are highly complex, dynamic, uncertain, and noisy by nature. Such problems cannot be solved in a reasonable time because of properties including noisy fitness landscapes, high non-linearity, large scale, high multi-modality, and computationally expensive objective functions. Environmental variability and uncertainty may occur in the problem instance, the objective functions, the design variables, the environmental parameters, and the constraints; thus, variations and uncertainties may be due to a change in one or more of these components over time. Environmental dynamism is commonly classified according to the change frequency, predictability, and severity, as well as whether the change is periodic or not. Different types of variations and uncertainties may arise over time due to the dynamic nature of the combinatorial optimisation problem, and hence an approach chosen at the start of the optimisation may become inappropriate later on. Search methodologies for time-variant problems are therefore expected to adapt to change both efficiently and quickly, and to handle uncertainty such as noise and volatility. On the other hand, it is crucial to identify and adjust the values of the numerous parameters of a metaheuristic algorithm while balancing two contradictory criteria: exploitation (i.e., intensification) and exploration (i.e., diversification). Therefore, self-adaptation is a critical parameter control strategy in metaheuristics for time-variant optimisation.

There are many studies on time-variant problems that handle dynamism and uncertainty, yet a comprehensive approach addressing different variations at once is still lacking. Ideal strategies should take both environmental dynamism and uncertainty into consideration, whereas in conventional approaches problems are postulated as time-invariant and this variability and uncertainty are disregarded. Meanwhile, each real-world problem exhibits different types of changes and uncertainties. Thus, solving such complex problems remains extremely challenging due to the variations, dependencies, and uncertainties during the optimisation process. Probabilistic graphical models are probabilistic models in which a graph expresses the conditional dependence structure, representing complex real-world phenomena in a compact fashion; hence, they provide an elegant language for handling complexity and uncertainty. Such properties of probabilistic graphical models have led to further developments in metaheuristics that can be termed probabilistic graphical model-based metaheuristic algorithms. Probabilistic graphical model-based metaheuristic algorithms are acknowledged as highly self-adaptive, and thus able to handle different types of variations. There is a range of probabilistic graphical model-based metaheuristic approaches, e.g., variants of estimation of distribution algorithms suggested in the literature to address dynamism and uncertainty.
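As a concrete illustration of the estimation-of-distribution idea mentioned above, the following is a minimal sketch of a univariate marginal distribution algorithm over binary strings; the fitness function (OneMax) and all parameter values are illustrative and are not taken from the thesis.

    import numpy as np

    def umda(fitness, n_vars, pop_size=100, n_select=50, generations=200, seed=None):
        """Univariate marginal distribution algorithm over binary strings.

        Builds a probabilistic model (per-bit marginal probabilities) from the
        selected individuals and samples the next population from it.
        """
        rng = np.random.default_rng(seed)
        probs = np.full(n_vars, 0.5)          # initial model: uniform marginals
        best, best_fit = None, -np.inf
        for _ in range(generations):
            pop = (rng.random((pop_size, n_vars)) < probs).astype(int)
            fits = np.array([fitness(ind) for ind in pop])
            elite = pop[np.argsort(fits)[-n_select:]]      # truncation selection
            probs = elite.mean(axis=0).clip(0.05, 0.95)    # re-estimate model, keep some diversity
            if fits.max() > best_fit:
                best_fit, best = fits.max(), pop[fits.argmax()].copy()
        return best, best_fit

    # illustrative usage on OneMax (count of ones)
    solution, value = umda(lambda x: x.sum(), n_vars=50, seed=0)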
One of the remarkable state-of-the-art continuous stochastic probabilistic graphical model-based metaheuristic approaches is the covariance matrix adaptation evolution strategy (CMA-ES). CMA-ES and its variants (e.g., CMA-ES with an increasing population, Ipop-CMA-ES) have become sophisticated adaptive uncertainty-handling schemes, and their characteristics make them well suited to handling uncertainty and rapidly changing variations. In recent years, the concept of semi-automatic search methodologies called hyper-heuristics has become increasingly important. Many metaheuristics operate directly on the solution space and utilize problem domain-specific information. Hyper-heuristics, in contrast, are general methodologies that explore the space formed by a set of low-level heuristics, which perturb or construct (a set of) candidate solutions, in order to make self-adaptive decisions in dynamic environments and deal with computationally difficult problems. Besides the impressive research carried out on variants of probabilistic graphical model-based metaheuristic algorithms, there is also extensive research on machine learning-based optimisation approaches. One of the most popular such methods is the expectation-maximization algorithm, a widely used scheme for the optimisation of likelihood functions in models with latent (i.e., hidden) variables. Expectation-maximization is a hill-climbing approach to maximizing a likelihood function, and it is required to converge to the global optimum in a reasonable time.

One extremely challenging dynamic combinatorial optimisation problem is the unit commitment problem, which belongs to the engineering application domain. The unit commitment problem is an NP-hard, non-convex, continuous, constrained dynamic combinatorial optimisation problem in which the on/off scheduling of power-generating resources over a given time horizon is determined to minimize the joint cost of committing and de-committing units. Another such problem is effective connectivity analysis, from the neuroimaging application area. The predominant scheme for inferring (i.e., estimating) effective connectivity is dynamic causal modelling, which provides a framework for analysing effective connectivity (i.e., the directed causal influences between brain areas) and estimating its biophysical parameters from measured blood oxygen level-dependent functional magnetic resonance responses. Although different metaheuristic- or machine learning-based algorithms have become more satisfactory in different types of dynamic environments, neither family of algorithms is capable of consistently handling environmental dynamism and uncertainty. In this sense, it is indispensable to hybridize metaheuristics with probabilistic or statistical machine learning to utilize the advantages of both approaches for coping with such challenges. The main motivation of hybridization is to exploit the complementary aspects of different methods; in other words, hybrid frameworks are expected to benefit from a synergy effect.
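The abstract characterises expectation-maximization as a likelihood-optimisation scheme over models with latent variables. As a concrete illustration (not the dynamic causal modelling likelihood used in the thesis), here is a minimal sketch of the E- and M-steps for a two-component univariate Gaussian mixture, where the component labels are the latent variables:

    import numpy as np

    def em_gmm_1d(x, n_iter=100):
        """Expectation-maximization for a two-component 1D Gaussian mixture.

        Each iteration does not decrease the observed-data log-likelihood.
        """
        # crude initialisation
        mu = np.array([x.min(), x.max()], dtype=float)
        sigma = np.array([x.std(), x.std()]) + 1e-6
        pi = np.array([0.5, 0.5])
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each point
            dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate mixture weights, means and standard deviations
            nk = resp.sum(axis=0)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
            pi = nk / len(x)
        return pi, mu, sigma

    # illustrative usage on synthetic data drawn from two Gaussians
    x = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(5, 1, 200)])
    weights, means, stds = em_gmm_1d(x)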
The design and development of hybrid approaches are considered promising due to their success in handling variations and uncertainties, and hence increased attention has been focused in recent years on metaheuristics and their hybridization. Intuitively, the central idea behind such an approach rests on two "no free lunch theorem" perspectives: one for supervised machine learning and one for search/optimisation. Within this context, the following hybrid frameworks are addressed: (i) under the no free lunch theorem for search/optimisation, machine learning approaches are used to enhance metaheuristics; (ii) under the no free lunch theorem for machine learning, metaheuristics are used to improve the performance of machine learning algorithms. Within the scope of this dissertation, each proposed hybrid framework is built on the corresponding "no free lunch theorem" perspective.

The first hybrid framework, designed around the no free lunch theorem for search/optimisation, is referred to as the hyper-heuristic-based dual-population estimation of distribution algorithm (HH-EDA2). Here, probabilistic model-based schemes are employed to enhance probabilistic graphical model-based metaheuristics, exploiting the synergy between selection hyper-heuristic schemes and a dual-population estimation of distribution algorithm. HH-EDA2 is a two-phase hybrid approach that performs offline and online learning to handle the uncertainties and unexpected variations of combinatorial optimisation problems regardless of their dynamic nature. An important characteristic of this framework is that any multi-population estimation of distribution algorithm can be integrated with any probabilistic model-based selection hyper-heuristic. The performance of HH-EDA2, along with the influence of different heuristic selection methods, was investigated over a range of dynamic environments produced by a well-known benchmark generator, as well as on the unit commitment problem, an NP-hard constrained combinatorial optimisation problem, as a real-life case study. The empirical results show that the proposed approach outperforms some of the best-known approaches in the literature on the non-stationary environment problems dealt with.

The second hybrid framework, designed around the no free lunch theorem for machine learning, is referred to as the Bayesian-driven covariance matrix adaptation evolution strategy with an increasing population (B-Ipop-CMA-ES). Here, probabilistic model-based metaheuristics are employed to enhance probabilistic graphical models, exploiting the synergy between the covariance matrix adaptation evolution strategy and expectation-maximization schemes. This hybrid framework estimates the biophysical parameters of effective connectivity (i.e., dynamic causal modelling), which enable one to characterize and better understand the dynamic behaviour of neuronal populations. The main aim of B-Ipop-CMA-ES is to overcome crucial issues of dynamic causal modelling, including dependence on prior knowledge, computational complexity, and a tendency to get stuck in local optima.
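HH-EDA2 couples a selection hyper-heuristic with a dual-population EDA; the exact scheme is specific to the thesis, but the generic selection hyper-heuristic loop it builds on can be sketched roughly as follows. The score update and acceptance rule here are illustrative stand-ins, and all names are hypothetical.

    import random

    def selection_hyper_heuristic(low_level_heuristics, evaluate, initial, iterations=1000):
        """Generic selection hyper-heuristic for minimisation: pick a low-level
        heuristic by its learned score, apply it, accept improving (or equal)
        moves, and update the scores online."""
        scores = {h.__name__: 1.0 for h in low_level_heuristics}
        current, current_cost = initial, evaluate(initial)
        for _ in range(iterations):
            # roulette-wheel choice proportional to the learned scores
            chosen = random.choices(
                low_level_heuristics,
                weights=[scores[h.__name__] for h in low_level_heuristics])[0]
            candidate = chosen(current)
            cost = evaluate(candidate)
            if cost <= current_cost:                 # accept improving/equal moves
                current, current_cost = candidate, cost
                scores[chosen.__name__] += 1.0       # reward the chosen heuristic
            else:
                scores[chosen.__name__] = max(0.1, scores[chosen.__name__] - 0.1)
        return current, current_cost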
B-Ipop-CMA-ES is capable of producing physiologically plausible models while converging to the global solution in computationally feasible time, without relying on initial prior knowledge of the biophysical parameters. The performance of the B-Ipop-CMA-ES framework was investigated on both synthetic and empirical functional magnetic resonance imaging datasets. Experimental results demonstrate that the B-Ipop-CMA-ES framework outperformed the reference (expectation-maximization/Gauss-Newton) and other competing methods.
-
Heuristic algorithms for solving chemical shift assignment problem in protein structure determination (Lisansüstü Eğitim Enstitüsü, 2021)

Heuristic algorithms have been widely used on many different hard optimization problems, not only in computer science but also in several other disciplines, including the natural sciences, bioinformatics, electronics, and operational research, wherever computational methods are needed. Heuristic algorithms search for optimal solutions by maximizing or minimizing the given objectives, depending on the need, while satisfying the given conditions. They find solutions in a huge search space in which many different candidate solutions exist; under such conditions, systematic search techniques are not feasible. In this thesis, we applied several different heuristic approaches and their combinations to the chemical shift assignment problem of Nuclear Magnetic Resonance (NMR) spectroscopy.

NMR spectroscopy is one of the methods used to determine the three-dimensional structure of proteins. The three-dimensional structure of proteins provides crucial information for determining the shape, structure and function of biological macromolecules. The protein structure also reveals the function of proteins by illustrating the interactions of the macromolecules with other proteins or small ligands. Therefore, the three-dimensional structure of a protein can form a basis for drug design against human diseases. NMR has many advantages compared to other techniques; however, NMR spectroscopy needs very advanced computational techniques to produce the protein structure. The chemical shift assignment of the atoms is one of the most challenging problems in NMR spectroscopy. It requires a considerable amount of time from an experienced spectroscopist if the determination is done manually or by a semi-automated method. Additionally, even if the remaining parts of the structure determination process work perfectly, it is impossible to create the protein structure if the chemical shift assignments are not done correctly. Due to this complexity, the total number of protein structures obtained from NMR spectroscopy is very small compared to alternative methods such as X-ray crystallography. Because of its importance in NMR experiments, the chemical shift assignment problem has recently become one of the most critical research areas in the computational techniques of NMR spectroscopy. There has been much research on this problem; however, existing methods are far from perfect. Some of these techniques can provide only partial solutions by assigning only the backbone atoms or only the sidechain atoms, some require a very long computation time, and the results of many existing methods leave considerable room for improvement.

In this thesis, we developed a novel method based on heuristic algorithms that provides a fully automatic assignment of the chemical shift values of NMR experiments. First, we studied the background of the problem along with the existing methods. Secondly, we proposed our methods that solve the problem with evolutionary algorithms. Thirdly, we performed experiments on several different datasets, compared the success of our methods against the state-of-the-art solutions of the problem, and continuously improved our methods. Finally, we performed further analysis on the results and proposed future work.
First, the background of the chemical shift assignment problem is comprehensively studied from the computer science point of view. The optimization processes in heuristic algorithms, stochastic local search methods, iterative improvement, simple stochastic local search methods, and hybrid and population-based stochastic local search methods are discussed in detail. Ant colony optimization and evolutionary algorithms are analyzed as population-based stochastic local search methods. After these evaluations, evolutionary algorithms appeared to be a suitable candidate for solving this problem since they already work with a population, which is a set of candidate solutions. We also analyzed the NMR spectroscopy hardware, principles, and experiment steps in detail because the problem is a real application of NMR spectroscopy in the natural sciences. Furthermore, we took a deep dive into the chemical shift assignment problem and into protein structure and peptide formation, which are the basis for the NMR spectroscopy calculations. Afterwards, the existing methods for solving this problem are discussed together with their drawbacks.

Secondly, we proposed our methods for solving the problem with heuristic algorithms. Our method comprises several different evolutionary algorithms and their combinations with hill climbing, with each other, and with constructive heuristic methods. A more conventional approach, the genetic algorithm (GA), and the multi-objective evolutionary algorithms NSGA2 and NSGA3 are applied to the problem. The multi-objective evolutionary algorithms investigated each objective parameter separately, whereas the genetic algorithm followed the conventional way, in which all objectives are combined into one score function. While defining the methods, we first defined the problem model, along with the existing conditions and the score function. We modeled the problem as a combinatorial optimization problem in which expected peaks are mapped onto the measured peaks. The chromosome of the algorithm is an array over the expected peaks, and the values inside represent their mapped measured peaks. The objectives of the problem are defined in a score function; the constraints are not evaluated separately because they are already fulfilled implicitly by the problem model. Additional fine-tuning and changes were implemented on the algorithms to incorporate NMR-specific behaviour into the problem model. Then, the following improvements were realized on the algorithms: we optimized the probability of applying crossover and mutation; the population initialization was optimized with a constructive initialization algorithm, which reduces the search space to find better initial individuals; we optimized the population's diversity to find optimal solutions by escaping from local optima; and we implemented hybrid algorithms by combining a hill-climbing algorithm with our proposed algorithms.

Thirdly, we performed experiments on several datasets with a set of commonly used spectra. We also compared the results of our methods with two state-of-the-art algorithms: FLYA and PINE. In almost all of these datasets, our GA algorithm yielded better results than PINE. Our NSGA2 algorithm produced better results than PINE in almost half of the datasets. Our NSGA3 algorithm yielded less than 10% correct assignments because only two of the four objectives of our problem model create a trade-off; NSGA3 algorithms are known to be successful on problems with more than three objectives.
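The chromosome encoding described above (an array over the expected peaks whose values are the mapped measured peaks) can be illustrated with a minimal sketch; the toy score function and tolerance value below are placeholders for the thesis' NMR-specific score function, and all names are hypothetical.

    import random

    def random_assignment(n_expected, n_measured):
        """Chromosome: index i is an expected peak, the value is the measured
        peak it is mapped to."""
        return [random.randrange(n_measured) for _ in range(n_expected)]

    def score(chromosome, expected_shifts, measured_shifts, tolerance=0.3):
        """Toy objective: count mappings whose chemical-shift difference lies
        within a tolerance (the real score combines several NMR-specific terms)."""
        return sum(1 for i, j in enumerate(chromosome)
                   if abs(expected_shifts[i] - measured_shifts[j]) <= tolerance)

    def mutate(chromosome, n_measured, rate=0.05):
        """Point mutation: re-map a few expected peaks to random measured peaks."""
        return [random.randrange(n_measured) if random.random() < rate else gene
                for gene in chromosome]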
Additionally, our algorithms had better runtime performance than FLYA in more than half of the datasets. Our algorithms could assign all of the atoms in all datasets, which is a major advantage in terms of assignment completeness, whereas the FLYA and PINE algorithms could not provide a complete assignment. Furthermore, we observed that splitting a large protein into smaller fragments improved our algorithms' results dramatically. Finally, we performed further analysis on our results. These analyses showed that our algorithms often assigned different atoms than FLYA and PINE. In particular, the GA algorithm can provide good results on parts of datasets where the state-of-the-art algorithms cannot make any assignment. In order to leverage this strength of our algorithms, we proposed a hierarchical method that combines FLYA and our GA algorithm to benefit from the different success factors of each algorithm. The results showed that this approach improved the overall success of the algorithms. In future work, the three algorithms could be combined to achieve better results. Additionally, one can focus on distinguishing atoms that can be assigned consistently and more reliably than others, keeping the remaining assignments only tentative so that fewer wrong assignments are made. Furthermore, the objective function of the problem can be remodeled to improve the performance of the algorithms, and our method can be extended so that large proteins are split into smaller fragments before applying our algorithms, which would improve the overall results.

In this thesis, we successfully implemented a fully automatic algorithm for solving the chemical shift assignment problem of NMR spectroscopy. Our method can automatically assign a significant part of the sidechain and backbone atoms without any parameter changes or manual interaction. We produced results that are comparable to two very well-known state-of-the-art algorithms. Our approaches could provide around a 70% success rate on these datasets and assign many atoms that other methods could not assign. Our algorithm outperformed at least one of these two state-of-the-art methods in almost all of our experiments. Additionally, all methods are implemented on the MOEA Framework, enabling the easy implementation of further algorithms.
-
Classification of melanoma malignancy in dermatology (Lisansüstü Eğitim Enstitüsü, 2021)

Cancer has become one of the most common diseases all over the world in recent years, and approximately 40% of all cancer incidences are skin cancer. The incidence of skin cancer has increased tenfold in the last 50 years, and the risk of developing skin cancer is about 20%. Skin cancer has symptoms such as abnormal tissue growth, redness, pigmentation abnormalities and non-healing wounds. Melanoma is a rare type of skin cancer with higher mortality compared to other types of skin cancer; it can be defined as the result of uncontrolled division and proliferation of melanocytes. Worldwide, melanoma is the 20th most common cancer, with an estimated 287,723 new cases (1.6% of all cancers). In the USA, more than two hundred thousand new cases of melanoma were diagnosed in 2021, and its incidence increases more rapidly than that of other forms of cancer; melanoma incidence increased by up to 237% in the last 30 years. In our country, Turkey, melanoma is relatively rare compared to other countries. Cancer cells display rapid growth and systematic spread. As in all types of cancer, early diagnosis is of great importance for the treatment of skin cancer; it improves treatment success and prognosis. To detect a melanoma, changes in the color, shape and structure of the skin, as well as swelling and stains on the skin, are carefully examined by physicians. Besides the physician's investigation, computer-aided diagnosis (CAD) mechanisms are recommended for early diagnosis.

In this thesis, deep learning models are used to determine whether skin lesions are benign or malignant melanoma. The classification of the lesions is considered from two different points of view. In the first study, the effect of objects in the image and of image quality on classification performance was examined using four different deep learning models, and the sensitivity of these models was tested. In the second study, the aim was to establish a pre-diagnosis system that could help dermatologists by proposing a binary classification (benign nevus or malignant melanoma) mechanism on the ISIC dataset.

In clinical settings, it is not always possible to capture flawless skin images. Sometimes skin images can be blurry, noisy, or have low contrast; in other cases, images contain external objects. The aim of the first study is to investigate the effects of external objects (ruler, hair) and image quality (blur, noise, contrast) using widely used Convolutional Neural Network (CNN) models. The classification performance of the frequently used ResNet50, DenseNet121, VGG16 and AlexNet models is compared, and the resilience of these models against external objects and image quality degradation is examined. Distortions in the images are discussed under three main headings: blur, noise and contrast changes. For this purpose, different levels of image distortion were obtained by adjusting different parameters, and datasets were created for the three distortion types and distortion levels. The most common external object in skin images is hair on the skin; in addition, rulers are commonly used as a scale for suspicious lesions. In order to determine the effect of external objects on lesion classification, three separate test sets were created: images containing a ruler, images containing hair, and images with no external object (none). The third set consists only of mole (lesion) images.
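The distorted test sets described above can be produced along the following lines. This is a generic sketch using Pillow and NumPy; the distortion parameters and level values are illustrative rather than those used in the study.

    import numpy as np
    from PIL import Image, ImageEnhance, ImageFilter

    def distort(image: Image.Image, kind: str, level: float) -> Image.Image:
        """Return a distorted copy of a skin-lesion image.

        kind: 'blur' (Gaussian blur radius), 'noise' (Gaussian noise std on the
        0-255 scale) or 'contrast' (enhancement factor, <1 lowers contrast).
        """
        if kind == "blur":
            return image.filter(ImageFilter.GaussianBlur(radius=level))
        if kind == "noise":
            arr = np.asarray(image).astype(np.float32)
            noisy = arr + np.random.normal(0.0, level, arr.shape)
            return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))
        if kind == "contrast":
            return ImageEnhance.Contrast(image).enhance(level)
        raise ValueError(f"unknown distortion kind: {kind}")

    # e.g. three distortion levels per type (values are placeholders)
    # levels = {"blur": [1, 2, 4], "noise": [10, 25, 50], "contrast": [0.75, 0.5, 0.25]}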
With these three datasets, the four models were trained and their classification performances were analyzed. The dataset containing no object other than the lesion was expected to be classified with the highest accuracy. However, when the results were analyzed, the best classification performance in our system was measured using the DenseNet model on the hair subset, since the image set containing hair had the highest number of images in the total dataset. As a result of these tests, the ResNet model showed better classification performance compared to the other models. Melanoma images are recognized better under contrast changes than benign images, so we recommend the ResNet model whenever contrast is low. Noise significantly degrades the performance on melanoma images, and the recognition rates decrease faster than for benign lesions in the noisy set. Both classes are sensitive to blur changes. The best accuracy is obtained with the DenseNet model on the blurred and noisy datasets. Images containing a ruler decreased the accuracy, and ResNet performed better on this set. Images with hair have the best success rate in our system since this subset has the largest number of images in the total dataset. We measured the accuracy as 89.22% for the hair set, 86% for the ruler set and 88.81% for the none set. We can infer that DenseNet can be used for melanoma classification in the presence of image distortions and degradations; as a general result of the first study, we conclude that DenseNet can be used for melanoma classification since it is more resistant to image distortion.

In recent years, deep learning models with high accuracy values in computer-aided diagnosis systems have been used frequently in biomedical image processing research, and convolutional neural networks are widely used in skin lesion classification to increase classification accuracy. In the second study discussed in this thesis, five deep learning model families were used to classify the images in a specially created skin lesion dataset. The dataset used in this study consists of images from the ISIC dataset. In this dataset, released in 2020, there are two classes, benign and malignant, and three diagnoses: nevus, melanoma and unknown. We only considered images with nevus and melanoma diagnoses. The dataset had 565 melanoma and 600 benign lesion images in total. We separated 115 images of the malignant melanoma class and 120 images of the benign nevus class as our test set; the rest of the data was used for model training. With pre-processing methods such as flipping and rotation, the training dataset was divided into 5 parts and the number of training images was increased. DenseNet121, DenseNet161, DenseNet169, DenseNet201, ResNet18, ResNet50, VGGNet19, VGGNet16_bn, SqueezeNet1_1, SqueezeNet1_0 and AlexNet models were trained with each subset. Using these models, an ensemble system was designed in which the results of the models were combined with the majority voting method. The accuracy of the proposed model is 95.76% on this dataset.
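A majority-voting ensemble of the kind described above can be sketched as follows in PyTorch; the class encoding (0 = benign nevus, 1 = malignant melanoma) and the tie-breaking behaviour are assumptions made for the illustration, not details from the thesis.

    import torch

    @torch.no_grad()
    def majority_vote(models, batch):
        """Combine binary predictions (0 = benign nevus, 1 = malignant melanoma)
        of several trained CNNs by simple majority voting."""
        votes = []
        for model in models:
            model.eval()
            logits = model(batch)                    # shape: (N, 2)
            votes.append(logits.argmax(dim=1))
        votes = torch.stack(votes)                   # shape: (n_models, N)
        # majority class per sample (ties resolved toward the lower class index)
        return votes.mode(dim=0).values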
-
Identification of object manipulation anomalies for service robots (Lisansüstü Eğitim Enstitüsü, 2021)

Recent advancements in artificial intelligence have resulted in an increase in the use of service robots in many domains, including households, schools and factories, where they facilitate daily life and domestic tasks. The characteristics of such domains necessitate intense interaction between robots and humans, and these interactions require extending the abilities of service robots to deal with safety and ethical issues. Since service robots are usually assigned complex tasks, unexpected deviations of the task state are highly probable. These deviations are called anomalies, and they need to be continually monitored and handled for robust execution. After an anomaly is detected, it should be identified for effective recovery. For the identification task, a time series analysis of onboard sensor readings is needed, since some anomaly indicators are observed long before the detection of the anomaly. These sensor readings need to be fused effectively for correct interpretation, as they are generally taken asynchronously. In this thesis, the anomaly identification problem in everyday object manipulation scenarios is addressed. The problem is handled from two perspectives according to the feature types that are processed, and two frameworks are investigated: the first takes domain symbols as features, while the second considers convolutional features.

Chapter 5 presents the first framework, which addresses this problem by analyzing symbols as features. It combines and fuses auditory, visual and proprioceptive sensory modalities with an early fusion method. Before they are fused, a visual modeling system generates visual predicates and provides them as inputs to the framework. Auditory data are fed into a support vector machine (SVM) based classifier to obtain distinct sound classes. These data are then fused and processed within a deep learning architecture consisting of an early fusion scheme, a long short-term memory (LSTM) block, a dense layer and a majority voting scheme. After the extracted features are fed into the designed architecture, the anomaly that occurred is classified.

Chapter 6 presents a convolutional three-stream anomaly identification (CLUE-AI) architecture that fuses visual, auditory and proprioceptive sensory modalities. Visual convolutional features are extracted with convolutional neural networks (CNNs) from raw 2D images gathered through an RGB-D camera. These visual features are then fed into an LSTM block with a self-attention mechanism; after attention values for each image in the gathered sequence are calculated, a dense layer outputs the attention-enabled results for the corresponding sequence. In the auditory stage, Mel frequency cepstral coefficient (MFCC) features are extracted from the auditory data gathered through a microphone and fed into a CNN block. The position of the gripper and the force applied by it are also fed into a designed CNN block. The resulting sensory modalities are then concatenated with a late fusion mechanism, the resulting feature vector is fed into fully connected layers, and finally the anomaly type is revealed. The experiments were conducted on real-world everyday object manipulation scenarios performed by a Baxter robot equipped with an RGB-D head camera on top and a microphone placed on the torso.
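The three-stream, late-fusion design described above can be approximated by the following simplified PyTorch sketch. Layer sizes, the per-frame visual feature dimension, and the use of a simple attention pooling in place of the full self-attention mechanism are all assumptions made for illustration, not the configuration of the CLUE-AI framework itself.

    import torch
    import torch.nn as nn

    class LateFusionAnomalyClassifier(nn.Module):
        """Simplified three-stream classifier: visual frame features, audio MFCCs
        and proprioceptive signals are encoded separately, concatenated (late
        fusion) and classified by fully connected layers."""

        def __init__(self, n_classes, visual_feat=512, hidden=128,
                     n_mfcc=40, proprio_dim=8):
            super().__init__()
            # visual stream: per-frame CNN features -> LSTM -> attention pooling
            self.lstm = nn.LSTM(visual_feat, hidden, batch_first=True)
            self.attn = nn.Linear(hidden, 1)
            # auditory stream: 1D CNN over MFCC frames
            self.audio = nn.Sequential(
                nn.Conv1d(n_mfcc, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())
            # proprioceptive stream: gripper position/force time series
            self.proprio = nn.Sequential(
                nn.Conv1d(proprio_dim, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())
            self.head = nn.Sequential(
                nn.Linear(hidden + 64 + 32, 128), nn.ReLU(),
                nn.Linear(128, n_classes))

        def forward(self, visual_seq, mfcc, proprio):
            # visual_seq: (N, T, visual_feat); mfcc: (N, n_mfcc, T_a); proprio: (N, proprio_dim, T_p)
            h, _ = self.lstm(visual_seq)                   # (N, T, hidden)
            w = torch.softmax(self.attn(h), dim=1)         # attention weights over time
            v = (w * h).sum(dim=1)                         # (N, hidden)
            fused = torch.cat([v, self.audio(mfcc), self.proprio(proprio)], dim=1)
            return self.head(fused)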
Various investigations, including comparative performance evaluations and parameter and multimodality analyses, were conducted to show the validity of the frameworks. The results indicate that the presented frameworks are able to identify anomalies with f-scores of 92% and 94%, respectively. As these results indicate, the CLUE-AI framework outperforms the other in classifying the anomaly types that occurred. Moreover, the CLUE-AI framework does not require additional external modules such as a scene interpreter or a sound classifier, as the symbol-based framework does, and provides better results than the symbol-based solution.
-
Software defect prediction with a personalization focus and challenges during deployment (Lisansüstü Eğitim Enstitüsü, 2021)

Organizations apply software quality assurance (SQA) techniques to deliver high-quality products to their customers, and developing defect-free software holds a critical role in SQA activities. The increasing usage of software systems and their rapidly evolving nature in terms of size and complexity raise the importance of effective defect detection activities. Software defect prediction (SDP) is a subfield of empirical software engineering that focuses on building automated and effective ways of detecting defects in software systems. Many SDP models have been proposed over the last two decades, and current state-of-the-art models mostly utilize artificial intelligence (AI) and machine learning (ML) techniques together with product, process, and people-related metrics collected from software repositories. So far, the people aspect of SDP has been studied less than the algorithm aspect (i.e., ensembling or tuning machine learners) and the data aspect (i.e., proposing new metrics). While the majority of people-focused studies incorporate developer- or team-related metrics into SDP models, personalized SDP models have recently been proposed. On the other hand, the majority of SDP research so far focuses on building SDP models that produce high prediction performance values; real case studies in industrial software projects, as well as studies that investigate the applicability of SDP models in practice, are relatively few. However, for an SDP solution to be successful and efficient, its applicability in real life is as important as its prediction accuracy. This thesis focuses on two main goals: 1) assessing the people factor in SDP to understand whether it helps to improve the prediction accuracy of SDP models, and 2) prototyping an SDP solution for an industrial setting and assessing its deployment performance.

First, we carried out an empirical analysis to understand the effect of community smell patterns on the prediction of bug-prone software classes. The term ''community smell'' was recently coined to describe collaboration and communication flaws in organizations. Our motivation in this part is based on studies that show the success of incorporating community factors, i.e., socio-technical network metrics, into prediction models to predict bug-prone software modules. Prior studies also show the statistical association of community smells with code smells (which are code anti-patterns) and report the predictive success of using code smell-related metrics in the SDP problem. We assess the contribution of community smells to the prediction of bug-prone classes against the contribution of other state-of-the-art metrics (e.g., static code metrics) and code smell metrics. Our analysis of ten open-source projects shows that community smells improve the prediction rates of baseline models by 3% in terms of area under the curve (AUC), while the code smell intensity metric improves the prediction rates by 17%. One reason for this is that the existing ways of detecting community smell patterns may not be rich enough to capture the communication patterns of the team, since they only mine patterns from the mailing archives of organizations. Another reason is that technical code flaws (the code smell intensity metric) are more successful in representing defect-related information compared to community smells.
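The kind of contribution assessment described above (how much a metric group such as community smells adds over a baseline feature set) can be estimated roughly as sketched below. The classifier, cross-validation settings, and column names are hypothetical and are not the exact experimental setup of the thesis.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    def auc_gain(df, baseline_cols, extra_cols, label_col="is_buggy"):
        """Change in cross-validated AUC when a metric group (e.g. community-smell
        counts) is added to a baseline feature set. `df` is a pandas DataFrame."""
        def auc(cols):
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            proba = cross_val_predict(clf, df[cols], df[label_col],
                                      cv=10, method="predict_proba")[:, 1]
            return roc_auc_score(df[label_col], proba)
        return auc(baseline_cols + extra_cols) - auc(baseline_cols)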
Considering the challenge of extracting community patterns and the higher success of the code smell intensity metric in SDP, we directed our research toward the code development skills of developers and the personalized SDP approach. Second, we investigate personalized SDP models. The rationale behind the personalized SDP approach is that different developers tend to have different development patterns and, consequently, their development may have different defect patterns. In the personalized approach, there is an SDP model for each developer in the team, trained solely on that developer's own development history, and its predictions target only that developer. In the traditional approach, by contrast, there is a single SDP model trained on the whole team's development history, and its predictions target anyone in the team. Prior studies report promising results for personalized SDP models; still, their experimental setups are very limited in terms of data, context, model validation, and further exploration of the characteristics that affect the success of personalized models. We conduct a comprehensive investigation of personalized change-level SDP on 222 developers from six open-source projects, utilizing two state-of-the-art ML algorithms and 13 process metrics collected from software code repositories that measure development activity from the size, history, diffusion, and experience aspects. We evaluate model performance using rigorous validation setups, seven assessment criteria, and statistical tests. Our analysis shows that the personalized models (PM) predict defects better than the general models (GM), i.e., they increase recall by up to 24% for 83% of developers. However, PM also increases the false alarms of GM by up to 12% for 77% of developers. Moreover, PM is superior to GM for developers who contribute to software modules that have been contributed to by many prior developers, whereas GM is superior to PM for the more experienced developers. Further, the information gained from the various process metrics in predicting defects differs among individuals, but the size aspect is the most important one across the whole team.

In the third part of the thesis, we build prototype personalized and general SDP models for our partner from the telecommunication industry. Using the same empirical setup as in the investigation of personalized models on open-source projects, we observe that GM detects more defects than PM (i.e., 29% higher recall) in our industrial case. However, PM gives 40% fewer false alarms than GM, leading to a lower code inspection cost. Moreover, we observe that utilizing multiple data sources, such as semantic information extracted from commit descriptions and latent features of development activity, and applying log filtering to metric values improve the recall of PM by up to 25% and lower GM's false alarms by up to 32%. Considering the industrial team's perspective on prediction success criteria, we pick a model to deploy that produces balanced recall and false alarm rates: the GM model that utilizes the process and latent metrics and log filtering. We also observe that the semantic metrics extracted from the commit descriptions do not seem to contribute to the prediction of defects as much as the process and latent metrics. In the fourth and last part of the thesis, we deploy the chosen SDP prototype into our industrial partner's real development environment and share our insights on the deployment.
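The contrast between personalized models (one per developer, trained on that developer's own commits) and a general model (trained on the whole team) can be sketched as follows with scikit-learn. The classifier choice and column names are hypothetical, not the two ML algorithms and 13 process metrics used in the thesis.

    from sklearn.ensemble import RandomForestClassifier

    def train_models(commits, feature_cols, label_col="is_defective", dev_col="author"):
        """Train one general model (GM) on the whole team's history and one
        personalized model (PM) per developer on that developer's commits only.

        `commits` is a pandas DataFrame; column names here are hypothetical.
        """
        gm = RandomForestClassifier(n_estimators=100, random_state=0)
        gm.fit(commits[feature_cols], commits[label_col])

        pms = {}
        for dev, own in commits.groupby(dev_col):
            if own[label_col].nunique() < 2:      # need both classes to train
                continue
            pm = RandomForestClassifier(n_estimators=100, random_state=0)
            pm.fit(own[feature_cols], own[label_col])
            pms[dev] = pm
        return gm, pms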
Integrating SDP models into real development environments has several challenges regarding performance validation, consistency, and data accuracy. Offline research setups may not be suitable for observing the performance of SDP models in real life, since the online (real-life) data flow of software systems is different from that of offline setups. For example, in real life, discovering bug-inducing commits requires some time due to the bug life cycle, and this causes data label noise in the training sets of an online setup, whereas an offline setup does not have that problem since it utilizes a pre-collected batch dataset. Moreover, deployed SDP models need re-training (updating) with recent commits to provide consistency in their prediction performance and to keep up with the non-stationary nature of software. We propose an online prediction setup to investigate the deployed prototype's real-life performance under two parameters: 1) a train-test (TT) gap, which is a time gap between the train and test commits used to avoid learning from noisy data, and 2) a model update period (UP) to include the recent data in the model learning process. Our empirical analysis shows that the offline performance of the SDP prototype reflects its online performance after the first year of the project. Also, the online prediction performance is significantly affected by the different TT gap and UP values, by up to 37% and 18% in terms of recall, respectively. In deployment, we set the TT gap to 8 months and the UP to 3 days, since those values are the most suitable ones according to the online evaluation results in terms of prediction capability and consistency over time.

The thesis concludes that using the personalized SDP approach leads to promising results in predicting defects. However, whether PM should be chosen over GM depends on factors such as the ML algorithm used, the organization's prediction performance assessment criteria, and the developers' development characteristics. Future research in personalized SDP may focus on profiling developers in a transferable way instead of building a model for each software project; for example, collecting developer activity from public repositories to create a profile or using cross-project personalized models would be options. Moreover, our industrial experience provides good insights into the challenges of applying SDP in an industrial context, from data collection to model deployment. Practitioners should consider using online prediction setups and conducting a domain analysis regarding the team's practices, prediction success criteria, and project context (i.e., release cycle) before making deployment decisions, in order to obtain good and consistent prediction performance. Interpretability and usability of models hold a crucial role in the future of SDP studies, and more researchers are becoming interested in such aspects of SDP models, i.e., developer perceptions of SDP tools and the actionability of prediction outputs.
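The online prediction setup with a train-test gap and an update period can be sketched roughly as follows; the helper names and default values (e.g., an 8-month gap approximated as 240 days) are illustrative, not the thesis' deployed implementation.

    from datetime import timedelta

    def online_evaluation(commits, train_fn, predict_fn,
                          tt_gap=timedelta(days=240), update_period=timedelta(days=3)):
        """Online SDP evaluation: at each update point, train only on commits older
        than (now - tt_gap) so that noisy, not-yet-labelled commits are excluded,
        then predict the commits arriving until the next update.

        `commits` is a list of dicts with a 'timestamp' key, ordered by time.
        """
        results = []
        start, end = commits[0]["timestamp"], commits[-1]["timestamp"]
        now = start + tt_gap
        while now < end:
            train_set = [c for c in commits if c["timestamp"] <= now - tt_gap]
            test_set = [c for c in commits if now < c["timestamp"] <= now + update_period]
            if train_set and test_set:
                model = train_fn(train_set)                  # (re)train the deployed model
                results.append(predict_fn(model, test_set))  # score the fresh commits
            now += update_period
        return results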