LEE - Mechatronics Engineering - Doctorate

Recent Submissions

Now showing 1 - 5 of 11
  • Item
    Modeling of dynamic systems and nonlinear system identification
    (Graduate School, 2023-03-24) Abedinifar, Masoud ; Ertuğrul, Şeniz ; 518162013 ; Mechatronics Engineering
    One of the primary goals of science is to identify and describe the structures and physical laws of nature. When the data corresponding to the input and output of a physical system are available, but the underlying rules and the structure of the system are unknown, it is essential to employ various approaches to determine these rules and structures. Determining the underlying rules and structure of a system, particularly in certain operating regions, is a difficult task because of nonlinearities in the structure of the model. Therefore, choosing a reliable approach to identify the structure of the model in the different working regions of the system is crucial. For this purpose, system identification has been established as a critical technique for assisting in the modeling of complex engineering systems. System identification includes all processes of establishing a mathematical model of a system from measured input-output datasets. Mathematical models developed using system identification methods are commonly used for monitoring, controller design, fault detection, system response prediction, optimization, and other purposes. The procedure of system identification can be divided into three steps. First, the structure of the mathematical model has to be determined; it can be represented with linear or nonlinear models. Second, the unknown coefficients of the mathematical model should be determined from simulated or experimental input-output datasets. Finally, the model with the identified parameters has to be validated with new input-output datasets. The major aims of this research are as follows. First, it is planned to develop transparent nonlinear mathematical models of mechanical systems in such a way that each term of the model can be physically interpreted.
These models are called "white-box" models, which are developed using physical laws such as Kirchhoff's and Newton's laws. Second, the thesis aims to properly determine the nonlinear models of the physical systems utilizing an appropriate system identification methodology. Third, it aims to investigate the existence of the identified physical phenomena, like nonlinear frictional terms and dead-zone, using different statistical methods. To fulfill these purposes, the following steps are performed. First, the general mathematical models of some physical systems are developed. The mathematical models of the physical systems include linear and various nonlinear equations. The linear equations of the model are developed utilizing physical laws such as Kirchhoff's and Newton's laws. For the nonlinear part of the models, the nonlinear equations of some physical phenomena, like nonlinear friction equations and dead-zone, along with time-delay, are compiled and added to the general mathematical model of the physical systems. Then, appropriate input signals are generated to stimulate all the dynamics of the physical systems in their different working regions. This is performed to capture the effect of all the possible existing nonlinearities in the system's output. In the next step, the output of the mathematical models is collected, and input-output datasets are established. Then, a Particle Swarm Optimization (PSO) algorithm is coded to determine the unknown parameters of the general mathematical model of the system using the input-output datasets. The PSO algorithm's results are evaluated by utilizing the conventional Nonlinear Least Squares Error (NLSE) estimation method. Afterward, various statistical tests, including the confidence interval test and the null hypothesis test, are executed to investigate the validity of the identification results.
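The PSO-based identification step can be illustrated in miniature. The sketch below is an assumption-laden toy, not the thesis code: it identifies the mass, viscous damping and Coulomb friction coefficients of a simulated mass-damper from input-output data by minimizing the mean squared output error.

```python
import numpy as np

def simulate(params, u, dt=0.01):
    # forward-Euler response of a toy mass-damper with Coulomb friction:
    # m * dv/dt = u - c*v - Fc*sign(v)
    m, c, Fc = params
    v, out = 0.0, np.empty(len(u))
    for k, uk in enumerate(u):
        v += dt * (uk - c * v - Fc * np.sign(v)) / m
        out[k] = v
    return out

def pso_identify(u, y_meas, bounds, n_particles=30, iters=60, seed=1):
    # minimal PSO: minimize MSE between simulated and measured output
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    cost = lambda p: np.mean((simulate(p, u) - y_meas) ** 2)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(x)
    pbest, pbest_c = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_c.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + vel, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_c
        pbest[improved], pbest_c[improved] = x[improved], c[improved]
        gbest = pbest[pbest_c.argmin()].copy()
    return gbest, pbest_c.min()
```

A sinusoidal input that forces velocity sign changes helps excite the Coulomb term, mirroring the point above about stimulating all working regions of the system.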
Finally, using model evaluation criteria such as the Mean Squared Error (MSE) and the coefficient of determination (R²), the capability of the determined models in computing the output of the real systems is evaluated. The framework suggested in this thesis is implemented for four case studies as benchmark problems, ranging from simple to complex, in two steps. Initially, two case studies, namely a Direct Current (DC) motor and a solenoid actuator, are chosen, and their mathematical models with various combinations of nonlinearities are constructed in the first stage. The simulation data for both the DC motor and solenoid actuator models are established by utilizing the nonlinear models. First, all kinds of friction nonlinearities are incorporated into the real mathematical models of these components, followed by adding some likely friction nonlinearities to check the effectiveness of the identification algorithms. After that, the identification and validation frameworks are utilized to ascertain the model parameters and verify the credibility of the outcomes. Furthermore, a PSO algorithm with multiple cost functions is used to optimize the design parameters of a solenoid actuator to improve its performance. The second stage involves obtaining actual experimental data from real mechanical systems, which is then utilized to examine the framework developed in the simulation studies. The first benchmark problem involves collecting real data from the experimental apparatus of the ball and beam mechanism by providing appropriate input signals. Moreover, the identification algorithm's effectiveness is tested under various experimental conditions for the ball and beam mechanism. In the second benchmark problem, real data are acquired from a 6-degree-of-freedom (DOF) UR5 robotic manipulator by providing appropriate trajectories. Then, the model parameters are determined, and the reliability of the outcomes is examined using the identification and validation frameworks.
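For reference, the two evaluation criteria used for model validation have simple closed forms; a minimal implementation might look like:

```python
import numpy as np

def mse(y, y_hat):
    # Mean Squared Error between measured and model output
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean((y - y_hat) ** 2))

def r_squared(y, y_hat):
    # coefficient of determination: 1 - SS_res / SS_tot
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

An R² close to 1 on fresh validation data indicates that the identified model reproduces the real system's output well.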
  • Item
    Applications of multi-agent systems in transportation
    (Graduate School, 2023-03-21) Tunç, İlhan ; Söylemez, Mehmet Turan ; 518152012 ; Mechatronics Engineering
    Traffic density is a growing drawback of the crowding of cities in contemporary societies. As financial and technological development improves living standards, the number of cars, and with it traffic density, increases accordingly, reducing the quality of life in metropolitan areas in particular. The growing population of big cities and the infrastructure's inability to accommodate this density cause passengers to waste more time in traffic and increase emissions, and hence air pollution. Traffic congestion is therefore a significant concern for numerous metropolitan areas across the globe. Controlling the flow of traffic is problematic because of its many complexities and uncertainties; despite this, the problem needs to be solved, as it reduces productivity and living standards. As traffic congestion continues to increase rapidly around the world, so does the need to research and apply methods of traffic control more effective than the traditional ones. Solving traffic congestion is one of the most important and complex problems, as it causes chaos in metropolises, especially during heavy traffic hours.
Traditional methods that continue to be used have proven inadequate, and as a result, developing technology has affected the solutions to the traffic control problem, as it has all other areas. With the emergence of Intelligent Transportation Systems (ITS), utilizing artificial intelligence and communication technologies, a more effective and efficient solution to traffic congestion is possible. Transportation techniques are improving day by day with the pace of technological growth. ITS provide advanced services such as high-tech traffic controllers and various transportation modes, reducing the burden on drivers and enabling them to meet the need for complex decision-making while on the road. Intelligent transportation solutions have enabled an unprecedented level of data collection within the industry, leading to significant advancements in transportation system management and operation. With the increasing demand for and rate of data collection, ITS have been advancing day by day, increasing the speed of progress of smart transportation systems. ITS can be described as systems consisting of technologies such as electronics, data processing and wireless networks that provide safety and efficiency in the transportation network. ITS provide communication and information exchange between transport units. These units include pedestrians, vehicles, infrastructure, information centres, and peripherals such as traffic lights and other communication and control units. The application of Multi-Agent Systems (MAS) techniques, as a new development in information technology, can help to increase interest in traffic and promote energy-efficient transportation. ITS-based multi-agent technology is an important approach to solving complex traffic problems. The complexity of the elements of traffic makes them well suited to multi-agent structures.
ITS-based multi-agent technology provides us with safer controllers and makes us feel more comfortable in our daily lives. It increases the quality of our lives by decreasing the amount of time spent in traffic and by lowering the amount of emission gases released by our vehicles. The structurally dispersed nature of components in heterogeneous environments causes application difficulties, such as interoperability between agents, creating a demand for a unified software platform as an underlying infrastructure. Therefore, it is preferable to use centralized solutions for relatively simple problems such as the one considered in this thesis. For both transport decision-makers and drivers, ITS have great potential for efficient and intelligent traffic management, threat identification, driving comfort and safety. ITS can also provide a flexible approach for the effective management of complex networked transportation systems, allowing traffic management decision-makers to control signal changes, regulate route flows, and broadcast real-time traffic information. In addition to providing route scheduling, weather forecasting, and emergency services for drivers, ITS can also help to reduce driving loads and improve safety. The implementation of ITS can generate positive outcomes across a range of areas, spanning from environmental and national security issues to emergency management and transportation. ITS applications can reduce time spent on the road. Short travel times provide economic savings for both individual and commercial vehicles and usually mean less environmental pollution. Intelligent Intersection Management (IIM) technology has started to develop at traffic intersections as part of Traffic Light Control (TLC) systems. Intersections are among the busiest parts of roads. Therefore, the control of traffic lights plays an important role in decreasing the density.
In this thesis, particular attention is given to the control of intersections in order to find solutions that decrease traffic density, leading to an increased quality of life in big cities. Intelligent traffic control methods, the use of which is increasing with the development of new methods, are employed especially at traffic intersections with high traffic density in order to provide efficient solutions. Control of a single intersection with traffic lights is considered first in the thesis. Various methods, including Fuzzy Logic Control (FLC), Proportional Integral (PI) control and state-space model control techniques, have been proposed and compared for a better traffic light controller architecture, so as to increase the traffic flow and reduce the overall waiting time of the cars and the emissions released by them. It is demonstrated that the proposed architectures give better results compared to the traditional fixed-time traffic light control method. Different types of traffic intersections are considered in the study. Initially, a simple single-lane traffic intersection with no left or right turns is taken into consideration. Later on, intersections at which three-lane (or four-lane) roads meet, with vehicles turning left and right, are considered. Finally, a realistic case study of the Altunizade region of Istanbul is examined to demonstrate the efficiency of some of the proposed methods. The results of simulations indicate that the FLC, in which the positions of the vehicles are used as the state variables, gives superior results in comparison to the other classical methods. In order to increase the efficiency of the FLC further, a built-in learning algorithm is proposed to be used in addition to the FLC. A deep Q-learning method is employed for this purpose as a part of the agent-based traffic light controller. Hence, the resulting intelligent traffic light controller is named DQ-FLSI.
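The exact FLC rule base from the thesis is not reproduced here, but the flavor of a fuzzy traffic-light controller can be sketched with a single hypothetical input (queue length) and a Sugeno-style weighted-average defuzzification:

```python
import numpy as np

def tri(x, a, b, c):
    # triangular membership function rising from a, peaking at b, falling to c
    return max(min((x - a) / (b - a) if b != a else 1.0,
                   (c - x) / (c - b) if c != b else 1.0), 0.0)

def fuzzy_green_time(queue, base=15.0):
    # memberships of the queue length (vehicles) in low/medium/high sets
    low = tri(queue, 0, 0, 10)
    med = tri(queue, 5, 15, 25)
    high = tri(queue, 20, 40, 40)
    # Sugeno-style defuzzification: weighted average of green extensions
    w = np.array([low, med, high])
    ext = np.array([0.0, 10.0, 25.0])
    return base + float(w @ ext / w.sum())
```

A PI or state-space controller, by contrast, would compute the green-time correction from the queue error and its integral or from a state estimate rather than from fuzzy rules.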
In this method, a state matrix that divides the arms of the traffic intersection into cells is used. The traffic light durations are determined using fuzzy logic, and traffic light actions are determined with the help of deep Q-learning. A stability analysis is also carried out for this newly proposed method. Another important traffic problem is route planning. This is particularly important in large cities with complex traffic networks. In order to address this problem, an agent-based traffic route planning method has also been proposed as part of this thesis, with the motivation of vehicles choosing the fastest route. In this method, route planning is performed by making decisions at traffic intersection points. Vehicle agents make decisions when they reach traffic intersections. In this way, dynamic route planning becomes possible for the vehicles. Another solution for the traffic intersection problem is multi-agent reservation-based traffic intersection control. With this method, all vehicles (called agents) can pass the intersection without the need for a traffic light, thanks to a traffic intersection agent. A platoon method, which can work in harmony with reservation-based traffic intersection management, is proposed as an improvement in this part of the study. The proposed method aims to reduce the slowdowns that occur when approaching the traffic intersection by properly lining up the vehicles approaching it. It is shown by simulations that the proposed platoon method reduces energy consumption and gas emissions while increasing the average speed of the vehicles, especially as the density of the traffic increases. Work environments for all studied traffic problems are designed and simulated using the SUMO program.
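Deep Q-learning over a cell-based state matrix needs a neural network and a traffic simulator, but the decision logic it learns can be sketched with a tabular toy; the environment, reward and dimensions below are illustrative assumptions, not the thesis setup:

```python
import numpy as np

def train_q(episodes=500, steps=50, seed=0):
    # toy intersection: two approach queues (capped at 5 vehicles each);
    # action 0 gives green to NS, action 1 to EW; green serves up to 2
    # vehicles, and one new vehicle arrives on a random approach each step
    rng = np.random.default_rng(seed)
    Q = np.zeros((6, 6, 2))  # Q[q_ns, q_ew, action]
    alpha, gamma, eps = 0.2, 0.9, 0.1
    for _ in range(episodes):
        q = [int(rng.integers(0, 6)), int(rng.integers(0, 6))]
        for _ in range(steps):
            s = (q[0], q[1])
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
            q[a] = max(q[a] - 2, 0)        # green serves up to 2 vehicles
            j = int(rng.integers(2))       # one arrival on a random approach
            q[j] = min(q[j] + 1, 5)
            r = -(q[0] + q[1])             # reward: negative total queue length
            s2 = (q[0], q[1])
            Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s][a])
    return Q
```

In DQ-FLSI the state would instead be the cell-occupancy matrix of the intersection arms, and the table would be replaced by a deep network estimating the Q-values.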
Simulation of Urban MObility (SUMO) is an open-source simulation package that works on networks imported from maps, provides various workspaces at the micro level, also allows pedestrian simulation, and has a rich set of tools that makes it highly accessible.
  • Item
    Classification of ten different motor imagery EEG signals by using deep neural networks
    (Graduate School, 2023-08-19) Korhan, Nuri ; Dokur, Zümray ; 518152011 ; Mechatronics Engineering
    Brain-Computer Interface (BCI) is a research area that aims at establishing a sustainable communication infrastructure between the brain and machines. The primary purpose of BCI is to restore functionality to paralyzed individuals, but it can also be used for gaming applications. Various modalities such as Electroencephalogram (EEG) and Functional Magnetic Resonance Imaging (fMRI) can be employed in this field. This thesis focuses on EEG-based BCI and specifically explores the classification of ten different motor imagery (MI) tasks using deep neural networks. Motor imagery is a BCI method that aims to detect imagined movements through potential changes on the scalp, which are measured by electrodes during the imagined motor movement. Increasing the number of recognizable tasks in BCI systems, specifically in the field of mechatronics, holds considerable importance. The limited scope of a four-command system significantly inhibits the versatility of these applications, particularly as they become more complex. To illustrate, imagine the operational demands of a drone, which requires absolute control over direction, altitude, speed, and elaborate maneuvers to navigate obstacles in three-dimensional space. The limitations of a four-command system decrease the number of controllable actions, thus undermining the efficacy and the scope of BCI applications. A substantial increase in the number of recognizable tasks in a BCI system signifies not only the expansion of its capabilities, but also a progression in advancing its applicability and versatility. In the first chapter, the problems of BCI are introduced, and the relevant literature is reviewed. In the second chapter, the concepts related to MI, the specific BCI area of interest, are explained. The third chapter examines methods to increase the number of commands in the MI paradigm, discussing previous approaches and the proposed methods. 
In the fourth chapter, deep learning tools commonly used in the field and employed in this research are introduced and discussed. The fifth and final chapter discusses the obtained results, their implications, and potential future research directions. The findings contribute to the advancement of BCI and demonstrate the feasibility of classifying ten different motor imagery EEG signals using deep neural networks, alongside augmentation and divergence-based feature extraction. In summarizing the research conducted in this study, emphasis must be placed on the success rates achieved through the application of the developed methods. The techniques of artificial EEG signal generation, data augmentation, and regularization have been utilized, resulting in enhancements in the performance and efficiency of the BCI tasks. The methods employed have demonstrated promising results in various test scenarios. The success rates exceed those observed in traditional approaches documented in the literature. These rates are expanded upon in their respective sections and numerically illustrated in tables within the fifth chapter. Across various studies of the classification of both simple and combined MI-EEG signals, mean accuracy rates of around 51.6% and 54.2% were reported using different techniques for feature extraction and classification on three simple and one combined MI-EEG classes across a varied number of subjects. When increasing the number of classes used, as in four simple and four combined MI-EEG classes, a trend of increased mean accuracy was observed. Studies reported accuracy rates of 55% (four simple and four combined classes, dataset 3) and a substantial 70% (four simple and three combined classes) using different methods. The methods developed in this study demonstrate a significant improvement. For dataset 1, the proposed approach achieved an 85% mean accuracy with only DivFE on four simple and six combined classes across three subjects.
Dataset 2 shows a 78.6% accuracy across nine subjects. Lastly, for dataset 3 (four simple and four combined), the model achieved a 77.8% accuracy across seven subjects. These success rates not only validate the effectiveness of the proposed methods but also highlight the potential for future enhancements in BCI applications.
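The thesis's deep networks and divergence-based feature extractor (DivFE) are not reproduced here; as a deliberately simplified illustration of an MI classification pipeline (band-power features plus a nearest-centroid rule, with made-up band choices), one could write:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    # mean periodogram power of signal x within [f_lo, f_hi] Hz
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def features(trial, fs=250):
    # log mu-band (8-12 Hz) and beta-band (13-30 Hz) power per channel
    return np.log([band_power(ch, fs, 8, 12) for ch in trial] +
                  [band_power(ch, fs, 13, 30) for ch in trial])

class NearestCentroid:
    # minimal stand-in classifier: assign each trial to the closest class mean
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Motor imagery typically modulates mu/beta rhythms over sensorimotor channels, which is why band powers are a common baseline feature before moving to learned deep features.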
  • Item
    Social behavior learning for an assistive companion robot
    (Graduate School, 2023-01-26) Uluer, Pınar ; Köse, Hatice ; 518132005 ; Mechatronics Engineering
    Designing robots with the ability to build and maintain interpersonal relationships with humans by exchanging social and emotional cues has attracted much attention recently, because robots are now used in a wide variety of places, with a diverse range of tasks from healthcare to the edutainment industry. In order to interact with humans in a natural way, emotion recognition and expressive behavior generation are crucial for these robots to act in a socially aware manner. This requires the ability to recognize the affective states, actions and intentions of their human companions. The robot should be able to interpret social cues and its human companion's behavioral patterns in order to generate assistive social behaviors in return. There are several well-known robotic platforms, such as Kismet, Nao and Pepper, which are able to recognize their human partner's emotional state based on facial expressions, vocal cues and body postures and to express simple human emotions. However, these robots do not have an emotional character or an affective state of their own; they are only capable of mirroring the human's expressions, independently of the social context of the interaction. For robots to be accepted as social entities, they should be endowed with the capability to interpret the human's mood as well as his/her emotions and the social context of the interaction. To achieve this, it is not enough to treat expressive behaviors of humans as only a mirror of their internal state. Therefore, it is crucial to incorporate a generative account of expressive behavior into an emotional architecture. This requires perceiving and understanding others' affective states and behaving accordingly, which corresponds to the most generic definition of empathy. Empathy is a focal point in the study of interpersonal relationships; therefore, it should also be considered one of the major elements in the social interaction between robots and humans.
Humans have the ability to feel empathy not only for other humans but also for fictional characters and robots. However, robots are not yet able to display any empathic behavior in return. The motivation of this thesis study is to design and implement a computational emotion and behavior model for a companion robot. The proposed model is designed to process multi-channel stimuli in order to interpret the affective state of its human companion and to generate in-situ affective social behaviors based on the processed information coming from the human companion and the environment, that is, the social context of the interaction. This dissertation attempts to explore the following research questions:
    - Would the robot be able to display basic emotions? Could the human companion correctly identify the emotions displayed by the robot?
    - How could a social robot infer and interpret its companion's emotional states?
    - How can we computationally model an artificial emotional mechanism and implement it on a social robot to provide natural social interaction?
    - Which learning techniques should we use for the affective robot assistant to learn which emotional behavior to express during the interaction?
    - Could a social robot designed with emotional understanding and expression foster the interaction gain in an assistance-based HRI scenario?
    In order to explore the answers to these research questions, user studies with children having different developmental profiles and two affective robots, Pepper and Kaspar, were conducted in coordination with two research projects. In the first project, titled RoboRehab, we aimed to use an affective social robot to support the auditory tests of children with hearing disabilities in clinical settings. During their hospital visits, children take several tests to determine their level of hearing and to adjust their hearing aid or cochlear implant, if necessary.
The audiologists mention that the children usually get stressed and tend to be in a negative mood when they are in the hospital for their consultation. This affects the children's performance in the auditory perception tests negatively, and their cooperation decreases significantly. In RoboRehab, we used machine learning-based emotion recognition approaches to detect the children's emotional state and adjust the robot's actions accordingly. We designed a feedback mechanism to reduce the stress of children and help them improve their mood during the audiometry tests. We used a socially assistive humanoid robot, Pepper, enhanced with emotion recognition, and a tablet interface to support children in these tests. We investigated the quantitative and qualitative effects of the different test setups involving a robot, a tablet app and the conventional method. We employed traditional machine learning techniques and deep learning approaches to analyze and classify the physiological data (blood volume pulse, skin temperature, and skin conductance) of children collected by the E4 smart wristband. The second project, entitled "Affective loop in Socially Assistive Robotics as an intervention tool for children with autism (EMBOA)", was a research project with the aim of combining affective computing technologies with social robot intervention in children with autism spectrum disorder (ASD). Children with ASD are known to display limited social and emotional skills in their routine interactions. Inspired by the promising results presented in the social robotics field, we aimed to investigate affective technologies with a social robot, Kaspar, to assist children with ASD in the development of their social and emotional skills, help them to overcome social obstacles and make them more involved in their interactions.
Interaction studies with Kaspar were conducted in four collaborating countries (Poland, North Macedonia, the United Kingdom and Turkey) with more than 65 children with ASD, and interaction data collection through different sensor modalities (visual, audio, physiological and interaction-specific data) was performed within a longitudinal study design. In this dissertation, a computational model for emotional understanding and emotional behavior generation and modulation was designed and implemented for a companion robot based on the data and findings collected through the RoboRehab and EMBOA projects. The presented models were designed: (1) to process multi-channel stimuli, i.e. vision-based facial landmarks and physiological data-based signals, in order to detect the affective states of the human companion; (2) to generate in-situ affective social behaviors for the robot based on the interaction context; (3) to adapt the intensity of the robot's emotional expressions based on the preferences of the human companion. Despite challenging situations, with the Covid-19 outbreak at the top of the list, and computational limitations, the previously mentioned research objectives were investigated in detail and the results are presented in this dissertation. We were able to answer the five research questions on the affective social behavior of a companion robot. The user studies conducted with hearing-impaired children in RoboRehab showed that the Pepper robot was able to display emotional behaviors and that the children could correctly identify and interpret them. Moreover, the RoboRehab findings revealed that the affective robot was able to trigger emotional changes in children and cause differences in their physiological signals. These signals were used to distinguish the emotions of children with machine learning and deep learning approaches in different setups.
On the other hand, the results from the EMBOA studies with children with ASD showed that a multimodal approach based on behavioral metrics, i.e. facial action units, physiological signals and interaction-specific metrics, can be used to understand the emotional state and infer the emotional model of the human companion. Different frameworks were presented in order to model an artificial emotional mechanism for a social robot. The first was a simple linear affect model based on the two-dimensional valence and arousal representation. The second was a vision-based model in which the robot detects the affective state of its companion and develops a behavioral strategy to adapt its emotional behaviors and improve the negative mood of its companion. Finally, a behavioral model was presented for the robot to predict the preferences of its companion and regulate the intensity of its emotional behaviors accordingly. The results of the user studies and the findings revealed that an emotional mechanism can be computationally modeled and implemented for a social robot to adapt its emotional behaviors to the preferences and needs of its companion and, consequently, to make the interaction between the human companion-robot pair more natural. The RoboRehab and EMBOA user studies with different groups of children (with typical development, with hearing impairment, with autism) showed that, independently of their profile, the children were more involved and paid attention to the social robot. Even though the objective evaluation metrics did not indicate a significant difference, the subjective evaluations of the audiologists, therapists and pedagogues agreed with the presented hypothesis. Furthermore, the self-report surveys with children and their parents showed that the children accepted the affective robot as an intelligent and funny social being.
These results demonstrate that an affective social robot with personalized emotional behaviors based on the preferences of its companion can foster the interaction gain in an assistive human-robot interaction scenario.
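The linear valence-arousal affect model and quadrant-style behavior selection described in this work can be sketched abstractly; the cue names, coefficients and behavior labels below are hypothetical placeholders rather than the fitted model from the dissertation:

```python
import numpy as np

def affect_state(smile_norm, hr_norm, gsr_norm):
    # hypothetical linear mapping from normalized cues in [0, 1]
    # onto the valence-arousal plane in [-1, 1] x [-1, 1]
    valence = 2.0 * smile_norm - 1.0
    arousal = 2.0 * float(np.clip(0.6 * hr_norm + 0.4 * gsr_norm, 0.0, 1.0)) - 1.0
    return valence, arousal

def select_behavior(valence, arousal):
    # quadrant-based behavior selection (illustrative rules only)
    if valence >= 0:
        return "celebrate" if arousal >= 0 else "encourage"
    return "calm_down" if arousal >= 0 else "comfort"
```

A learned behavioral model, as in the dissertation's final framework, would additionally modulate the intensity of the selected behavior according to the companion's inferred preferences.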
  • Item
    Human factor based advanced driver-assistance system (ADAS) design for electric vehicle
    (Graduate School, 2022-07-06) Doğan, Dağhan ; Estrada, Ovsanna Seta ; 518122006 ; Mechatronics Engineering
    Every year, thousands of traffic accidents occur, and thousands of people die or are injured in these accidents. Considering the causes of accidents, it can be said that most of them are human errors. For this reason, studies focus on advanced driver assistance systems, increasing vehicle autonomy levels and driver behavior in traffic, and aim to prevent possible accidents. For a similar purpose, in this study, we aimed to collect data on and analyze some human factor technologies that will support advanced driver assistance systems (ADAS), and to produce suggestions on how researchers and manufacturers producing ADAS can use these technologies. Our study focuses on the data of the galvanic skin response (GSR) sensor, a wearable sensor, and aims to contribute to human factor studies by analyzing the GSR sensor and other sensor data collected from the drivers and the prototype electric vehicle. The study is experimental and requires a realistic vehicle and realistic driver data. Thus, first of all, we aim to design a novel, low-cost and extensible embedded data collection system for research and education in human factor technologies and ADASs. The equipment used in this simultaneous data acquisition system is as follows: an electric vehicle with a power of 750 W, an Arduino Mega 2560 electronic card, a 10-turn Vishay 860 potentiometer used for steering angle data, a Tamura 300 A AC/DC hall-effect current sensor used for current (torque) data, Pololu force-sensing resistors (FSR) to detect force on the steering wheel and brake pedal, a Seeedstudio GSR sensor to detect stress, a MinIMU-9 v3 inertial measurement unit (IMU) providing gyroscope, accelerometer and compass data, a GY-NEO6MV2 global positioning system (GPS) receiver to measure chassis velocity and position, a Scancon 2RM 200 encoder to measure wheel velocity, and a Techsmart dashcam to record the environment and driver behavior.
After designing the data collection system and implementing it on the prototype electric vehicle, we investigate whether the driver's stress in traffic can be detected from the GSR and FSR sensor data. We collect GSR and FSR data from 38 drivers on the Istanbul Technical University campus using the designed system and analyze them. In addition, a post-driving stress survey is used to improve the reliability and consistency of the stress-level analysis and to validate the results. According to the analysis, the GSR sensor captures the relationships between stress level and gender, driving experience, driving frequency, and representativeness of normal driving behavior, whereas the FSR sensor captures only the stress level-gender relationship. The stress level-gender results for the GSR and FSR sensors and the stress level-driving experience and stress level-driving frequency results for the GSR sensor are consistent with the survey results with an accuracy of 100%. The stress level-representativeness of normal driving behavior results for the GSR sensor are consistent with the survey with an accuracy of 50%. Overall, the GSR sensor stress results agree with the survey with a total accuracy of 87.5%, and the FSR sensor gender-stress results agree with an accuracy of 100%. After the stress-level detection study, we collect IMU, FSR, GSR, current sensor, potentiometer, encoder, and GPS data from 38 drivers along a route. Drivers are divided into two classes (risky and normal) according to their Euclidean distance from expert-driver data for each sensor, and the best classification method is determined for each sensor. Accordingly, the combined data are classified with the highest accuracy of 92.1% using the Medium Gaussian Support Vector Machine (SVM) method. 
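The two-class labeling and classification procedure described above can be sketched as follows; the synthetic feature values, the median-distance split, and the use of scikit-learn's RBF-kernel SVC as a stand-in for MATLAB's Medium Gaussian SVM are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch: label drivers by Euclidean distance from an expert
# driver's feature vector, then fit an RBF-kernel SVM (a rough analogue of
# MATLAB's "Medium Gaussian SVM"). All feature values are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

expert = np.array([0.5, 0.5])                 # expert driver's (synthetic) features
drivers = rng.normal(0.5, 0.3, size=(38, 2))  # 38 drivers, 2 features each

# Class 2 (risky) if far from the expert, class 1 (normal) otherwise;
# the median split is an assumed threshold for illustration.
dist = np.linalg.norm(drivers - expert, axis=1)
labels = np.where(dist > np.median(dist), 2, 1)

clf = SVC(kernel="rbf", gamma="scale").fit(drivers, labels)
acc = clf.score(drivers, labels)              # training accuracy on the 38 drivers
```

In practice the labels would come from per-sensor distances to real expert-driver recordings, and accuracy would be assessed on held-out data rather than the training set.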
IMU data are classified with the highest accuracy of 89.5% using the Artificial Neural Network (ANN) method, FSR data with 94.7% using the Medium Gaussian SVM method, GSR data with 97.4% using the Fine K-Nearest Neighbors (KNN) method, current data with 100% using the ANN method, potentiometer data with 97.3% using the ANN method, encoder data with 92.1% using the Medium Gaussian SVM method, and GPS chassis-velocity data with 94.7% using the Medium Gaussian SVM method. Thus, driver behavior is highly predictable from batch data along a road. Second, we examine whether the driver behavior obtained along this route can be detected instantaneously. The GSR data of the drivers are analyzed individually because the GSR sensor provides instantaneous excitement and stress information. The driving videos are shown to an expert driver, who labels the drivers' faults and the moments at which they occur. In parallel, the GSR sensor data are used to determine when the drivers are excited (stressed), and the reasons for stress are identified by the expert driver. In this analysis, the data of driver-4 (male) and driver-7 (female) are examined for individual classification. Stress moments are labeled class-2 as dangerous situations and all other moments class-1, and classification methods are then applied. As a result, the fault moments are found to be a subset of the stressful moments for all drivers. For driver-4, all sensor data labeled with these stress moments are classified with the highest accuracy of 97.4% using the ANN method. 
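The per-sample stress labeling used for the individual analysis (stress moments as class-2, everything else as class-1) can be sketched like this; the sampling rate, the baseline threshold, and the injected arousal episodes are all illustrative assumptions.

```python
# Illustrative sketch: label GSR samples that rise well above baseline as
# class 2 (stress / dangerous situation), all others as class 1.
import numpy as np

rng = np.random.default_rng(1)
gsr = rng.normal(2.0, 0.1, 600)   # ~10 min of skin conductance at 1 Hz (synthetic)
gsr[120:140] += 1.5               # injected arousal episode
gsr[400:415] += 1.2               # second injected episode

baseline = np.median(gsr)
threshold = baseline + 0.5        # assumed excitement threshold
labels = np.where(gsr > threshold, 2, 1)
```

In the thesis, the resulting class-2 intervals are compared against the fault moments labeled by the expert driver from the dashcam videos.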
For driver-7, all sensor data are classified with the highest accuracy of 98.6% using the Bagged Tree method. Third, as an alternative method, we validate the driver status/behavior analysis above by detecting anomalies in the GSR sensor data using Local Outlier Factor (LOF) values. This analysis enables the detection of driver status from the LOF anomaly values of the GSR data, supported by other sensors (camera and GPS), without the need for machine learning. Lastly in this chapter, we analyze driving confidence in turns: increases in excitement measured by the GSR sensor during turns are taken to indicate a lack of driver confidence. The velocity and current data at the turns flagged by the GSR sensor are examined, and the drivers are analyzed individually. When the first junction-maneuvering data are analyzed on the basis of the GSR data, drivers 7, 9, 20, 23, 27, and 34 do not exhibit confident driving on the first turn; on the second turn, drivers 9, 20, 23, and 38 do not. Skin-conductivity information, which encodes abnormal, risky, and unconfident driving, can be used for torque control of an autonomous electric vehicle. We therefore transform the semi-autonomous electric vehicle into an electric vehicle with longitudinal autonomy. To improve the study, a distance sensor is also used simultaneously with the GSR sensor to detect imminent collisions and intervene. In other words, the GSR data are used for closed-loop control of a vehicle with longitudinal autonomy, depending on the driver's condition. The stress data of the 38 drivers along the route, obtained above, are averaged, and these averages are supplied to the system via radio frequency identification (RFID) cards. Thus, an autonomous vehicle with GSR sensor-based torque control is designed. 
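The LOF-based anomaly check described earlier in this passage can be sketched with scikit-learn's LocalOutlierFactor; the windowed GSR features and the single injected anomalous window are synthetic assumptions for illustration.

```python
# Illustrative sketch: flag anomalous GSR windows with Local Outlier Factor.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(2)
windows = rng.normal(2.0, 0.1, size=(200, 1))  # per-window mean GSR (synthetic)
windows[50] += 2.0                             # one anomalous (stressed) window

lof = LocalOutlierFactor(n_neighbors=20)
flags = lof.fit_predict(windows)               # -1 marks an outlier window
scores = -lof.negative_outlier_factor_         # larger score -> more anomalous
```

Because LOF needs no labeled training data, such a check can run alongside the camera and GPS streams without a trained classifier, which is the point made in the text.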
The purpose of this transformation is to integrate our work into autonomous as well as semi-autonomous vehicles, showing that biosensors can also serve as inputs to autonomous vehicles. In the takeover request (TOR) study, different TOR times are tested in five driving cases with 18 participating drivers. Three of these cases are used to determine the drivers' TOR times, while the other two are used sequentially to increase the influence of other vehicles, such as a vehicle approaching the intersection and a vehicle stopped in a lane along the test route, creating a possible hazard. According to the analysis, drivers do not prefer an authority transition very close to the critical situation (TOR 6 s): because the critical situation is imminent, the g (pulse deviation) and fxa (current deviation multiplied by the average of five consecutive acceleration or deceleration values during manual driving) values are higher, and a smooth transition does not occur. Most drivers achieve a comfortable and smooth authority transition at TOR 4 s and TOR 2 s, and experienced drivers prefer TOR 4 s. Since TOR 0 s is also a sudden transition, the driver is unprepared, which again yields higher g and fxa values; in other words, even when far from the critical situation, drivers do not prefer a sudden authority transition. The TOR time is evaluated for each driver and for three driver categories (experienced, semi-experienced, and inexperienced) and validated by a questionnaire. The TOR time is thus extracted and personalized for each driver, which may improve the penetration and acceptance of current conditionally automated driving technologies. As driver experience increases, more stable results are obtained, whereas the TOR time of inexperienced drivers varies from case to case. 
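One possible reading of the fxa metric quoted above (current deviation multiplied by the average of five consecutive acceleration or deceleration values during manual driving) is sketched below; the exact definition in the thesis may differ, so the function, its arguments, and the test values are assumptions.

```python
# Illustrative (assumed) form of the fxa takeover-smoothness metric.
import numpy as np

def fxa(current, manual_current, accel, i):
    """Current deviation at takeover instant i, scaled by the mean of five
    consecutive acceleration/deceleration values (assumed interpretation)."""
    f = current[i] - manual_current.mean()   # deviation from manual-driving current
    xa = accel[i:i + 5].mean()               # five consecutive accel/decel values
    return f * xa
```

Under this reading, a larger |fxa| at the takeover instant corresponds to a rougher authority transition, matching the pattern reported for TOR 6 s and TOR 0 s.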
As a result of the data analysis, the data of the wearable GSR biosensor can be used in different human factor technologies to support ADAS: in this study, we detected the stress and status of the driver using the GSR sensor, identified the drivers' fault cluster in traffic and trained classifiers on it with machine learning methods, transformed a semi-autonomous vehicle into a GSR-based torque-controlled vehicle with longitudinal autonomy, and finally evaluated takeover request performance using the GSR sensor.