LEE - Computer Engineering Graduate Program
Identification of object manipulation anomalies for service robots (Lisansüstü Eğitim Enstitüsü, 2021) ; Altan, Doğan ; Uzar Sarıel, Sanem ; 709912 ; Bilgisayar Mühendisliği

Recent advancements in artificial intelligence have led to an increased use of service robots in many domains, including households, schools and factories, where they facilitate everyday domestic tasks. Such domains require intense interaction between robots and humans, which in turn requires extending the abilities of service robots to deal with safety and ethical issues. Since service robots are usually assigned complex tasks, unexpected deviations of the task state are highly probable. These deviations are called anomalies, and they need to be continually monitored and handled for robust execution. After an anomaly is detected, it should also be identified for effective recovery. The identification task requires a time series analysis of onboard sensor readings, since some anomaly indicators are observed long before the anomaly is detected. As these readings are generally taken asynchronously, they need to be fused effectively for correct interpretation.

This thesis addresses the anomaly identification problem in everyday object manipulation scenarios. The problem is handled from two perspectives according to the feature types that are processed. Two frameworks are investigated: the first takes domain symbols as features, while the second considers convolutional features.

Chapter 5 presents the first framework, which analyzes symbols as features. It fuses auditory, visual and proprioceptive sensory modalities with an early fusion method. Before fusion, a visual modeling system generates visual predicates and provides them as inputs to the framework.
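The early fusion step described above can be illustrated with a minimal sketch. This is not the thesis implementation: the predicate vocabulary, sound classes and proprioceptive dimensions below are illustrative assumptions; the idea is only that symbolic visual predicates, a classified sound label and raw proprioceptive readings are aligned per timestep and concatenated into one feature vector.

```python
# Sketch of early fusion (illustrative; names and dimensions are assumed):
# one fused feature vector is built per timestep and a sequence of these
# vectors is what the downstream LSTM block would consume.

def early_fuse(visual_predicates, sound_class, gripper_state,
               predicate_vocab, sound_vocab):
    """Concatenate per-timestep features from three modalities."""
    # Binary indicator for each symbolic visual predicate in the vocabulary
    visual_vec = [1.0 if p in visual_predicates else 0.0 for p in predicate_vocab]
    # One-hot encoding of the classifier-predicted sound class
    sound_vec = [1.0 if s == sound_class else 0.0 for s in sound_vocab]
    # Proprioceptive readings appended as-is (e.g. gripper position, force)
    return visual_vec + sound_vec + list(gripper_state)

# Hypothetical vocabularies for the sketch
predicate_vocab = [("on", "cup", "table"), ("in_hand", "cup")]
sound_vocab = ["silence", "drop", "collision"]

fused = early_fuse({("in_hand", "cup")}, "drop", (0.42, 1.3),
                   predicate_vocab, sound_vocab)
# fused is a single flat vector combining all three modalities
```

The same alignment-then-concatenation pattern generalizes to any number of modalities, as long as each one can be expressed as a fixed-length vector per timestep.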
Auditory data are fed into a support vector machine (SVM) based classifier to obtain distinct sound classes. The fused data are then processed within a deep learning architecture consisting of an early fusion scheme, a long short-term memory (LSTM) block, a dense layer and a majority voting scheme. After the extracted features are fed into this architecture, the anomaly that has occurred is classified.

Chapter 6 presents CLUE-AI, a convolutional three-stream anomaly identification architecture that fuses visual, auditory and proprioceptive sensory modalities. Visual convolutional features are extracted with convolutional neural networks (CNNs) from raw 2D images gathered through an RGB-D camera and are then fed into an LSTM block with a self-attention mechanism. After attention values for each image in the gathered sequence are calculated, a dense layer outputs the attention-weighted result for the corresponding sequence. In the auditory stage, mel frequency cepstral coefficient (MFCC) features are extracted from the data gathered through a microphone and fed into a CNN block. The position of the gripper and the force applied by it are likewise fed into a designed CNN block. The resulting per-modality features are then concatenated with a late fusion mechanism, the fused feature vector is passed through fully connected layers, and finally the anomaly type is revealed.

The experiments are conducted on real-world everyday object manipulation scenarios performed by a Baxter robot equipped with an RGB-D head camera on top and a microphone placed on the torso. Various analyses, including comparative performance evaluations as well as parameter and multimodality studies, are conducted to show the validity of the frameworks. The results indicate that the presented frameworks identify anomalies with f-scores of 92% and 94%, respectively.
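The majority-voting stage mentioned for the first framework can be sketched as follows. The exact voting mechanism is an assumption here; the sketch only shows the common pattern in which each timestep of the sequence pipeline yields a per-step anomaly label and the most frequent label over the sequence is reported as the identified anomaly type.

```python
# Sketch of a majority-voting scheme over per-timestep predictions
# (the label names are hypothetical examples, not the thesis's classes).
from collections import Counter

def majority_vote(per_step_labels):
    """Return the label predicted most often across the sequence."""
    counts = Counter(per_step_labels)
    return counts.most_common(1)[0][0]

labels = ["no_anomaly", "object_drop", "object_drop", "collision", "object_drop"]
result = majority_vote(labels)  # -> "object_drop"
```

Voting over the whole sequence makes the final decision robust to a few misclassified timesteps, which matters here because anomaly indicators can appear long before the anomaly itself is detected.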
As these results indicate, the CLUE-AI framework outperforms the symbol-based one in classifying the anomaly types that occur. Moreover, unlike the symbol-based solution, it does not require additional external modules such as a scene interpreter or a sound classifier.