LEE - Computer Engineering Graduate Program
Konu "Autonomous robots" ile LEE- Bilgisayar Mühendisliği Lisansüstü Programı'a göz atma
-
Item: Enhancing smart environments through an AI-assisted IoRT agent (Graduate School, 2025-02-05) Kayataş, Yakup ; Kabadayı, Sanem ; 504201583 ; Computer Engineering

The development of Internet technologies has led to the Internet of Things (IoT), which goes beyond connecting traditional computers and smartphones to also connect the gadgets we use in our daily lives. Devices in areas such as homes, industry, health care, and transportation can become more integrated and collaborate to meet different needs. Artificial intelligence (AI) powers this transformation by enhancing IoT devices' decision-making and automation capabilities. The synergy between AI and IoT creates smart environments. Within the broader landscape of IoT devices and smart spaces, the Internet of Robotic Things (IoRT) is a relatively new concept that integrates robotics with IoT technology. This integration provides new opportunities and enhances the capabilities of both technologies. IoT can enable remote access to robotic systems and allow robots to communicate and interact with external sensor data. Robots deployed in smart spaces (such as cities, hospitals, warehouses, or industrial facilities) make up IoRT. Robotic systems can make real-time decisions by accessing data from external IoT devices, enhancing operational efficiency, interconnectivity, spatial awareness, and adaptability to changing environments and tasks.

This study presents a modular IoRT system that applies AI and IoT to enhance the autonomy, spatial awareness, and decision-making of mobile robots. The system is applicable to several areas, such as surveillance, manufacturing, health, warehousing, agriculture, industry, and transportation. We validated this research through a differential drive autonomous robot in a smart transportation context. IoT sensors integrated into a smart station monitor its conditions and provide intelligent information in real time to improve the reliability of the transportation infrastructure. The autonomous mobile agent receives this information and uses it for navigation.

The system uses a differential drive autonomous robot as the mobile agent, equipped with a Raspberry Pi 4 running ROS (Robot Operating System) for localization, navigation, and mapping, along with an orchestrator that provides interoperability with IoT devices and processes environmental sensor data to dynamically assist the robot's actions. The system integrates affordable IoT modules, such as the NodeMCU and ESP32-CAM. These IoT modules (along with several sensors connected to them) are integrated within the smart station, which serves as an external data acquisition hub for the overall system. The smart station includes an IoT ultrasonic sensor for detecting the presence of entities inside the station and an IoT camera (ESP32-CAM) for capturing images of the entity. When the system detects human presence in the captured images, the robot receives navigation instructions to autonomously move to the station. The robot also has a lidar sensor, an Arduino Nano, DC motors with quadrature encoders, and an onboard camera that provides live video during navigation. To enhance the robot's autonomy and decision-making, AI is used in two areas: (i) real-time human detection using AI computer vision to improve spatial awareness, and (ii) determining and following the shortest path for navigation.
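The abstract does not name the exact vision model or ROS interfaces, so the following is only a minimal sketch of the detection-triggered navigation described above. It assumes OpenCV's stock HOG person detector stands in for the AI vision module, and the camera stream URL, goal topic, and station coordinates are hypothetical placeholders.

```python
# Sketch: detect a person in a station camera frame and, if found, publish
# a navigation goal so the robot drives to the smart station. The detector,
# stream URL, and station pose are assumptions, not the thesis's implementation.
import cv2
import rospy
from geometry_msgs.msg import PoseStamped

# Stand-in for the AI vision module: OpenCV's built-in HOG person detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def human_present(frame) -> bool:
    """Return True when at least one person-shaped region is detected."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(boxes) > 0

def send_robot_to_station(goal_pub, x=2.0, y=1.5):
    """Publish a goal pose (hypothetical station coordinates) for the planner."""
    goal = PoseStamped()
    goal.header.frame_id = "map"
    goal.header.stamp = rospy.Time.now()
    goal.pose.position.x, goal.pose.position.y = x, y
    goal.pose.orientation.w = 1.0
    goal_pub.publish(goal)

if __name__ == "__main__":
    rospy.init_node("station_trigger")
    goal_pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1)
    cap = cv2.VideoCapture("http://esp32-cam.local/stream")  # assumed stream URL
    rate = rospy.Rate(2)  # poll the station camera twice per second
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok and human_present(frame):
            send_robot_to_station(goal_pub)
        rate.sleep()
```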
A web application (implemented using the WebSocket protocol) handles the real-time communication among system nodes and the live video feedback from the onboard robot camera. The novel contributions of this work are the integration of IoT devices, an autonomous robot, and AI-based methods in an IoRT system; modularity that supports adaptation to different applications; and real-time system monitoring through WebSockets.

We evaluated the system's performance based on specific criteria. First, we assessed object detection accuracy by testing the ability of the detection module at the smart station to correctly identify human figures under different lighting conditions; this was crucial for initiating the robot's navigation process. Second, we analyzed the autonomous navigation performance of the robot, including its ability to move safely to the target upon detecting a human figure (planning routes, avoiding obstacles, and reaching the target). Third, we analyzed the communication delays between the IoT modules, the orchestrator, and the robot by measuring the time that elapsed from the moment a component sent data to when it was received and processed by the respective system element. Last, we examined the efficiency of information flow and transfer on the web interface. We found that the WebSocket protocol was effective for real-time data transmission and ensured that messages and visual information from system components were delivered successfully.

Our IoRT system demonstrates potential for various smart domains, including cities, warehousing, health care, smart industry, agriculture, surveillance, and transportation. Key contributions include the modular design for easy adaptation to different applications, seamless integration of AI and IoT for improved autonomy, and real-time system monitoring using WebSockets. This research validates the effectiveness of the IoRT approach through a practical implementation, showcasing its impact on enhancing robotic operations in smart spaces.
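As an illustration of the third evaluation criterion, the sketch below measures WebSocket round-trip latency using Python's websockets package; the server URL, message schema, and echo behavior are assumptions, not the thesis's actual setup.

```python
# Minimal latency probe: timestamp a message at the sender, pass it through
# the WebSocket server, and measure elapsed wall-clock time on return.
# The URL and JSON schema are illustrative assumptions.
import asyncio
import json
import time
import websockets

async def measure_roundtrip(url="ws://192.168.1.10:8765", samples=20):
    async with websockets.connect(url) as ws:
        delays = []
        for i in range(samples):
            sent_at = time.monotonic()
            await ws.send(json.dumps({"seq": i, "sensor": "ultrasonic", "value": 42}))
            await ws.recv()  # server is assumed to echo or acknowledge
            delays.append((time.monotonic() - sent_at) * 1000.0)
        print(f"mean round-trip: {sum(delays) / len(delays):.1f} ms")

asyncio.run(measure_roundtrip())
```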
-
Item: A social navigation approach for mobile assistant robots (Graduate School, 2020) Kıvrak, Hasan ; Köse, Hatice ; 657666 ; Computer Engineering

Robots are becoming part of our lives, and we expect them to act in ways that do not interfere with our safety and social well-being. Robots employed in human-robot interactive social areas such as malls or hospitals should use a compliant navigation approach built upon human-aware and socially intelligent behavior. This compliance is more than mere avoidance: it requires legible robot motion, so that rational agents such as humans can understand and predict the robot's motion (eliminating uncertainty in robot behavior) and adapt their own motions accordingly. Furthermore, the robot must understand social etiquette and rules and anticipate social/ethical interactions as humans do. Otherwise, no matter how efficiently the robot navigates from one point to another, it will be perceived as an unsocial individual because it may violate people's social zones or block their way. Hence, the robot's behavior will be perceived as unnatural and will negatively affect the quality of interaction with humans. Mobile robots with enhanced social skills that interact with humans verbally or non-verbally (e.g., through sign language) need unified trajectory planning algorithms that not only compute the shortest obstacle-free path to the goal while navigating, but also maintain human awareness so as not to annoy anyone. A large number of researchers are currently proposing socially aware navigation approaches; it is an active research field combining navigation, perception, and social intelligence. The primary motivation of all these approaches is to increase psychological safety and comfort in human-robot interactive social environments as much as possible.

ROS is the de facto standard in research robotics, offering the ability to use multiple platforms and languages and to incorporate standard solutions to robot problems. Therefore, we first integrated the Mantaro TeleMe2 telepresence robot into the ROS ecosystem to drive the robot autonomously through the newly proposed hardware architecture. Then, the robot was made ready to provide all the nodes necessary for social navigation by developing TeleMe2 ROS packages from scratch. Robot navigation in an unknown dynamic environment calls for solving the localization and mapping problems concurrently. Accordingly, the robot uses a simultaneous localization and mapping (SLAM) technique to localize itself (pose estimation) and map the environment, alongside our socially aware motion planning algorithm.

For online motion planning, potential fields are a common approach in static environments. This approach was first adapted as the social force model (SFM) to describe the motion of pedestrians in very crowded escape scenarios. According to this model, human behavior is driven by forces (think of a vector field over the space) that produce acceleration, deceleration, and directional changes. The idea behind the model makes it a good candidate for local path planning, and it is expected to generate more human-like trajectories for the robot. This enables the robot to efficiently imitate the comprehensible inner dynamics of human motion, subject to its motion constraints. An SFM-based motion planner is also computationally light, which makes it suitable for frequent re-planning in an uncertain dynamic environment.
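For reference (not taken from the thesis), the classical social force model of Helbing and Molnár writes each agent's acceleration as a goal-driving term plus repulsive terms from other agents and boundaries; the thesis's exact force terms may differ:

```latex
% Classical social force model (Helbing & Molnar, 1995), shown for
% reference; the thesis's exact force terms may differ.
m_i \frac{d\mathbf{v}_i}{dt}
  = m_i \frac{v_i^{0}\,\mathbf{e}_i - \mathbf{v}_i}{\tau_i}  % drive toward the goal
  + \sum_{j \neq i} \mathbf{f}_{ij}                          % repulsion from other agents
  + \sum_{W} \mathbf{f}_{iW}                                 % repulsion from walls/boundaries
```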
The algorithm does not directly compute a collision-free path for the robot. Instead, at each time step it outputs a desired acceleration vector from the robot's dynamic interactions and integrates this acceleration into the robot's motion to obtain a collision-free path. At every point in time, the robot evaluates the resultant total force at its position and applies it as a control law to determine its direction of travel and speed. SFM can be a good choice because we do not need to enforce that the robot exactly follows a reference path; it only needs to stay within limits that guarantee people's safety and comfort.

In this thesis, we propose a social navigation system for unknown environments by integrating SFM and SLAM. Despite SFM's computational efficiency, applying the conventional SFM to social robot navigation presents shortcomings and limitations. One problem with integrating the two techniques is SLAM noise, which causes undesired social force model behavior; we introduce a multi-level mapping idea to filter this noise at reasonable computational cost. The other problem is that the robot may oscillate, since sudden changes in force magnitudes, discontinuities at some points, and sensor noise give it no incentive to keep a steady course. One solution is to smooth the motion by constraining the change in forces, thereby imposing continuity in the steering. In addition, the SFM-based local motion planner is combined with an A* global planner to avoid getting stuck in local minima. The whole plan is not assigned to the robot directly, since the global path contains too many grid nodes and following it exactly is infeasible in such a dynamic, uncertain human environment. Instead, key path points of the global path are extracted by the proposed subgoal selection algorithm and passed to the robot incrementally for smooth and legible navigation behavior.

Finally, we conduct simulation and user experiments and evaluate the effect of the proposed approach with quantitative and qualitative evaluations. Since simulation environments have limitations, we also verify the results in real environments. This study was developed as part of TUBITAK project 118E214. In the future, we will further develop this work for the social navigation of assistive robots in crowded environments such as hospitals and schools, in accordance with safety and social distancing rules.
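A minimal sketch of the control loop described above, combining the resultant-force control law, the force-change smoothing, and the incremental subgoal hand-off; all gains, limits, and helper names are illustrative assumptions rather than the thesis's tuned values.

```python
# Sketch of one SFM-style local planner step: resultant force -> smoothed
# acceleration -> velocity command, with incremental subgoal hand-off.
# Gains, limits, and names are illustrative assumptions.
import numpy as np

def sfm_step(pos, vel, subgoals, obstacles, prev_force,
             v_des=0.5, tau=0.5, dt=0.1, max_dforce=0.3, reach=0.4):
    """One planning step; returns (new_velocity, new_force, subgoals)."""
    # Feed the robot subgoals one at a time (key points extracted from the
    # A* global path), advancing when the current one is reached.
    if len(subgoals) > 1 and np.linalg.norm(subgoals[0] - pos) < reach:
        subgoals = subgoals[1:]
    goal = subgoals[0]

    # Attractive force: relax toward the desired speed along the goal direction.
    e = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    force = (v_des * e - vel) / tau

    # Repulsive forces: exponential decay with distance to each obstacle/person.
    for ob in obstacles:
        d = pos - ob
        dist = np.linalg.norm(d) + 1e-9
        force += 2.0 * np.exp(-dist / 0.8) * (d / dist)

    # Smoothing: bound the change in force between steps so the steering
    # stays continuous despite sensor noise and sudden force jumps.
    dforce = np.clip(force - prev_force, -max_dforce, max_dforce)
    force = prev_force + dforce

    # Integrate the acceleration into the velocity command (unit mass assumed).
    vel = vel + force * dt
    return vel, force, subgoals
```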