Deep reinforcement learning approach in control of Stewart platform - simulation and control

Date
2023-06-08
Authors
Yadavari, Hadi
Publisher
Graduate School
Abstract
As the title suggests, this work addresses the control of the Stewart platform with reinforcement learning methods and presents a new simulation environment. The Stewart platform is a fully parallel robot with a broad range of applications, spanning from flight and driving simulators to structural test platforms. Precise control of the Stewart platform is challenging, yet essential for delivering the desired performance in these applications. A fundamental aim of artificial intelligence is to address complex problems using high-dimensional sensory information. Reinforcement learning (RL) is an area of machine learning (ML) in which an agent interacts with its environment according to a policy in order to maximize the expected sum of future rewards. The agent learns through a reward-penalty scheme that reflects the quality of the actions selected from the policy space, and in this manner RL can tackle a wide variety of problems and tasks. The primary focus of this work is learning to control a sophisticated model of the Stewart platform using state-of-the-art deep reinforcement learning (DRL) and model-based reinforcement learning algorithms.

Why do we need a simulation environment? To learn an optimal policy, reinforcement learning requires a large number of interactions with the environment, and experiments with real robots are expensive, time-consuming, hard to replicate, and even dangerous. To apply RL algorithms safely in real-time applications, a reliable simulation environment that captures the nonlinearities and uncertainties of the agent's environment is indispensable. An agent can then be trained in simulation over sufficient trials without concern for hardware issues, and once accurate controller parameters have been learned in simulation, they can be transferred to a physical real-time system.

To improve the reliability of learning and to create a comprehensive test bed that replicates the system's behavior, we introduce a carefully designed simulation environment. We opted for the Gazebo simulator, an open-source platform that uses either the Open Dynamics Engine (ODE) or Bullet physics. Integrating Gazebo with the Robot Operating System (ROS) paves the way for efficient, complex robotic applications, since different environments involving multiple robots can be simulated. Although some computer-aided design (CAD)-based simulations of the Stewart platform exist, we chose ROS and Gazebo to benefit from the latest high-performance reinforcement learning algorithms and to remain compatible with recently developed RL frameworks. However, while ROS hosts many robot simulations, it offers little support for parallel mechanisms and closed-linkage structures such as the Stewart platform. Consequently, our first step is to create a parametric representation of the Stewart platform's kinematics within the Gazebo and ROS frameworks, integrated with a Python class that facilitates the generation of platform structures.
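For reference, the objective the abstract describes, maximizing the expected sum of future rewards, is commonly written in standard notation as the expected discounted return (a textbook formulation, not one quoted from the thesis):

J(\pi) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right], \qquad \pi^{*} = \arg\max_{\pi} J(\pi),

where s_t and a_t denote the state and action at time step t, r is the reward function, \gamma \in [0, 1) is a discount factor, and \tau is a trajectory generated by following policy \pi.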
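To make the final step concrete, the minimal Python sketch below shows how such a parametric generator class might look. All class, method, and parameter names here are hypothetical illustrations rather than the thesis's actual code, and the emitted URDF-style fragment covers only the six prismatic legs; closing the platform's kinematic loops requires additional handling (e.g., at the Gazebo/SDF level), since plain URDF cannot express closed chains.

import math

class StewartPlatformGenerator:
    """Hypothetical sketch of a parametric description generator for a
    Stewart platform (illustrative names, not the thesis code). It emits
    a URDF-style fragment with six prismatic legs whose base attachment
    points lie on a circle of configurable radius."""

    def __init__(self, base_radius=0.5, top_radius=0.3,
                 stroke_min=0.0, stroke_max=0.3):
        self.base_radius = base_radius   # base attachment circle (m)
        self.top_radius = top_radius     # top attachment circle (m)
        self.stroke_min = stroke_min     # prismatic joint lower limit (m)
        self.stroke_max = stroke_max     # prismatic joint upper limit (m)

    def anchor(self, radius, leg_index, offset_deg=15.0):
        """Attachment point of one leg: legs are grouped in pairs around
        120-degree spokes, offset by +/- offset_deg."""
        spoke = 120.0 * (leg_index // 2)
        sign = -1.0 if leg_index % 2 == 0 else 1.0
        angle = math.radians(spoke + sign * offset_deg)
        return radius * math.cos(angle), radius * math.sin(angle)

    def leg_fragment(self, i):
        # One actuated prismatic joint per leg, placed at its base anchor.
        bx, by = self.anchor(self.base_radius, i)
        return (
            f'  <link name="leg_{i}_lower"/>\n'
            f'  <link name="leg_{i}_upper"/>\n'
            f'  <joint name="leg_{i}_actuator" type="prismatic">\n'
            f'    <parent link="leg_{i}_lower"/>\n'
            f'    <child link="leg_{i}_upper"/>\n'
            f'    <origin xyz="{bx:.4f} {by:.4f} 0"/>\n'
            f'    <axis xyz="0 0 1"/>\n'
            f'    <limit lower="{self.stroke_min}" upper="{self.stroke_max}"'
            f' effort="2000" velocity="0.5"/>\n'
            f'  </joint>\n'
        )

    def generate(self):
        body = "".join(self.leg_fragment(i) for i in range(6))
        return f'<robot name="stewart_platform">\n{body}</robot>'

if __name__ == "__main__":
    # Different radii or stroke limits yield different platform instances.
    print(StewartPlatformGenerator(base_radius=0.6).generate())

Generating the description from a small set of geometric parameters is what makes such a representation parametric: changing the base and top radii or the stroke limits produces a new Stewart platform instance without hand-editing model files.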
Description
Thesis(Ph.D.) -- Istanbul Technical University, Graduate School, 2023
Keywords
artificial intelligence, yapay zeka, deep learning, derin öğrenme, simulation, simülasyon, Stewart platform, Stewart Platformu, reinforcement learning, pekiştirmeli öğrenme, machine learning, makine öğrenimi