Step length estimation using sensor fusion for indoor positioning
People use navigation applications to travel from one place to another, especially when visiting a place for the first time or when little information about it is available. Navigation applications determine the user's location from Global Positioning System (GPS) or base station signals; however, the quality of these signals is not sufficient for use in closed areas. In closed areas such as shopping centers, map information is usually provided through kiosk devices, and a person looking for a store finds it using the signs on the mall map. This is not possible for visually impaired people. Visually impaired individuals can reach their destination in open areas with navigation applications that use voice guidance, but these applications do not work properly indoors. This study aims to provide navigation in indoor locations by using wearable and mobile devices. Various studies have been carried out to provide indoor navigation, generally using wireless technologies such as WiFi, Bluetooth, and Radio Frequency Identification (RFID). The basic principle of these systems is calculating the distance to the network devices that emit signals: if the distances to three or more devices are known, the person's position can be obtained. However, these systems require technical installations inside the building to function properly, so indoor areas where signal-based navigation is available are not widespread. In robotics, different methods have been developed for indoor location tracking; robots generally find their positions relative to predefined objects or Quick Response (QR) codes in the field.
However, the robot needs to know the positions of these objects in the environment to find its own position. The method presented in this thesis aims to track a person's position without requiring any installation inside the building. Three components were used: 1) textile-based capacitive sensors, 2) a smartphone, and 3) the WeWALK smart cane developed for the visually impaired. The data required for training the machine learning models were collected from five subjects walking on an established course consisting of walking paths with different stride lengths. The sensor data recorded while the subjects walked these paths were stored by the system for further processing. Textile-based capacitive sensors are placed on both knees of the subject and measure the angle changes in the knee joints; in this way the system obtains information about the steps the subject takes while walking. Gait characteristics are obtained from the accelerometer, gyroscope, and compass sensors in the smart cane and the mobile phone, and the subject's walking direction is obtained from the compass sensor in the phone. In the first stage of the study, the sensor data were transmitted via Bluetooth. The system collects acceleration, gyroscope, and compass signals on the x, y, and z axes from both the smart cane and the mobile phone, together with the signal from the textile-based capacitive sensor, for a total of nineteen signals. First, the data collected from the sensors are cleaned of noise and outliers using signal processing methods. An algorithm was developed to detect the onset and offset points of the steps in the signal received from the textile-based capacitive sensors.
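The abstract mentions cleaning the sensor streams of noise and outliers before further processing but does not name the specific filters used. As an illustrative sketch only (the moving-average smoothing and sigma-based outlier rejection shown here are assumptions, not the thesis's actual methods):

```python
def moving_average(signal, window=5):
    """Simple low-pass smoothing to suppress high-frequency sensor noise.
    Uses a centered window, shrunk at the signal edges."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def clip_outliers(signal, n_sigma=3.0):
    """Crude outlier rejection: replace samples farther than n_sigma
    standard deviations from the mean with the mean value."""
    mean = sum(signal) / len(signal)
    var = sum((s - mean) ** 2 for s in signal) / len(signal)
    std = var ** 0.5
    return [s if abs(s - mean) <= n_sigma * std else mean for s in signal]
```

In a real pipeline each of the nineteen channels would be filtered independently before segmentation.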
This algorithm computes the local maximum and minimum points in the sensor signal and treats the interval between consecutive extrema as a step. The signals from all sensors are segmented according to these start and end points. To use the sensor signals in regression models, features are extracted from the segmented signals, and the extracted features are reduced with different feature selection methods to improve system performance. The performance of the selection methods was compared, and the regression models were trained with the best one. Four models were trained to find the best fit for the collected data set: 1) Linear Regression, 2) Support Vector Regression, 3) Random Forest, and 4) k-Nearest Neighbors. The results obtained with these trained models were compared, and the best-performing model was used for step detection. Sensor fusion was used to determine stride lengths more accurately. To assess the contributions of the three different sensors in the system, different fusion alternatives were tested individually and in pairs. These tests showed that fusing all three sensors together provided the highest step detection accuracy and the lowest localization error. An Android application using Google Maps was developed to perform localization. First, a plan drawn to the actual dimensions of the test environment was loaded onto the application's map. The person's initial location in the test environment is defined in the application and shown on the map with a marker. The application uses the trained regression models to determine step length; sensor data collected via Bluetooth are subjected to the same signal processing methods as in the training phase of the model.
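The segmentation idea described above, treating the intervals between consecutive local extrema of the knee-sensor signal as steps, can be sketched as follows. The exact extremum criteria and the per-segment feature set are assumptions for illustration; the thesis does not specify them in the abstract:

```python
def find_local_extrema(signal):
    """Return sorted indices of local maxima and minima in a 1-D signal."""
    extrema = []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]:
            extrema.append(i)  # local maximum
        elif signal[i] < signal[i - 1] and signal[i] < signal[i + 1]:
            extrema.append(i)  # local minimum
    return extrema

def segment_steps(signal):
    """Each (start, end) pair spans consecutive extrema,
    taken as the onset and offset of one step."""
    extrema = find_local_extrema(signal)
    return list(zip(extrema[:-1], extrema[1:]))

def extract_features(signal, start, end):
    """A minimal assumed feature set computed per step segment."""
    seg = signal[start:end + 1]
    mean = sum(seg) / len(seg)
    var = sum((s - mean) ** 2 for s in seg) / len(seg)
    return {"mean": mean, "std": var ** 0.5,
            "min": min(seg), "max": max(seg),
            "duration": end - start}
```

In the thesis pipeline, feature vectors like these would then pass through feature selection before training the regression models.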
Thus, when the person starts walking, the application determines their step length with the help of the model and updates the marker on the map according to the person's direction information. To evaluate system performance, a track with fixed origin and destination points was laid out in the test environment. When the person completed the entire track starting from the initial point, the distance between their actual final position and the estimated final position was calculated. The method developed in this thesis aims to enable the visually impaired to reach their desired destinations in indoor locations by using the smart cane, textile-based capacitive sensors, and a smartphone. With the data collected in walking tests, the regression models developed, and the Android application showing the position on the map, this study contributes to the literature on indoor localization and navigation.
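The marker update described above amounts to dead reckoning: each detected step advances the estimated position by the predicted step length along the compass heading. A minimal sketch, where the coordinate convention (x east, y north, heading in degrees with 0 = north) and the function name are assumptions:

```python
import math

def update_position(x, y, step_length, heading_deg):
    """Advance the map position by one step of the given length
    along the compass heading (0 deg = north, 90 deg = east)."""
    theta = math.radians(heading_deg)
    return (x + step_length * math.sin(theta),
            y + step_length * math.cos(theta))
```

Accumulating these updates from a known initial location yields the trajectory shown on the map; the final-position error reported in the tests is the distance between the last estimate and the person's true end point.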
Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2020
step detection, indoor navigation