Deep reinforcement learning approach in control of Stewart platform - simulation and control

dc.contributor.advisor İkizoğlu, Serhat
dc.contributor.advisor Aghaei Tavakol, Vahid
dc.contributor.author Yadavari, Hadi
dc.contributor.authorID 518162002
dc.contributor.department Mechatronics Engineering
dc.date.accessioned 2023-12-15T11:00:40Z
dc.date.available 2023-12-15T11:00:40Z
dc.date.issued 2023-06-08
dc.description Thesis (Ph.D.) -- Istanbul Technical University, Graduate School, 2023
dc.description.abstract As the title suggests, this work approaches the task of controlling the Stewart platform with reinforcement learning methods and presents a new simulation environment. The Stewart platform is a fully parallel robot with a broad range of applications, spanning from flight and driving simulators to structural test platforms. Exact control of the Stewart platform is challenging, yet essential for delivering the desired performance in these applications. A fundamental aim of artificial intelligence is to address complex problems by utilizing high-dimensional sensory information. Reinforcement learning (RL) is an area of machine learning (ML) in which an agent interacts with its surrounding environment according to a policy so as to maximize the sum of future rewards as an objective function. The agent's learning process follows a reward-penalty scheme based on the quality of the action selected from the policy space. In this manner, RL can be applied to a wide range of problems and tasks. The primary focus of this work is learning to control a sophisticated model of the Stewart platform using state-of-the-art deep reinforcement learning (DRL) and model-based reinforcement learning algorithms. Why do we need a simulation environment? To learn an optimal policy, reinforcement learning requires a multitude of interactions with the environment. Experiments with real robots are expensive, time-consuming, hard to replicate, and even dangerous. To safely implement RL algorithms in real-time applications, a reliable simulation environment that captures the nonlinearities and uncertainties of the agent's environment is indispensable. An agent can therefore be trained in simulation through sufficient trials without concerns about actual hardware issues.
Once accurate controller parameters have been learned in simulation, they can be transferred to a physical real-time system. With the objective of improving the reliability of learning performance and creating a comprehensive test bed that replicates the system's behavior, we introduce a precisely designed simulation environment. We opted for the Gazebo simulator, an open-source platform that uses either the Open Dynamics Engine (ODE) or the Bullet physics engine. Integrating Gazebo with the Robot Operating System (ROS) paves the way for efficient, complex robotic applications, thanks to the ability to simulate diverse environments involving multi-agent robots. Although some Computer-Aided Design (CAD)-based simulations of the Stewart platform exist, we chose ROS and Gazebo to benefit from the latest high-performance reinforcement learning algorithms and to remain compatible with recently developed RL frameworks. However, despite the many robotic simulations available in ROS, it lacks support for parallel mechanisms and closed-linkage structures such as the Stewart platform. Consequently, our initial step is to create a parametric representation of the Stewart platform's kinematics within the Gazebo and ROS frameworks. This representation is then seamlessly integrated with a Python class to facilitate the generation of structures.
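The agent-environment interaction described in the abstract (act according to a policy, receive a reward, accumulate the sum of future rewards) can be illustrated with a minimal, self-contained sketch. The toy environment and trivial policy below are illustrative only and are not part of the thesis:

```python
class ToyEnv:
    """A 1-D toy environment: the agent starts at position 5 and tries to reach 0."""
    def __init__(self):
        self.pos = 5

    def step(self, action):
        # action is -1 or +1; the penalty grows with the distance from the goal
        self.pos += action
        reward = -abs(self.pos)       # reward-penalty scheme
        done = self.pos == 0          # episode terminates at the goal
        return self.pos, reward, done

env = ToyEnv()
total_reward = 0
for _ in range(20):                   # interaction loop: act, observe, accumulate
    action = -1 if env.pos > 0 else 1 # a trivial "policy": step toward the goal
    state, reward, done = env.step(action)
    total_reward += reward
    if done:
        break

print(total_reward)  # sum of rewards along the trajectory: -10
```

A DRL algorithm replaces the hand-written policy with a neural network trained to maximize exactly this accumulated reward.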
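The parametric generation of the platform's geometry might look like the following sketch; the function name and parameters are hypothetical and do not reflect the thesis's actual Python class, which targets URDF/Gazebo model generation:

```python
import math

def stewart_attachment_points(radius, pair_angle_deg, phase_deg=0.0):
    """Return six (x, y) attachment points for one ring of a Stewart platform.

    Points come in three pairs spaced 120 degrees apart; pair_angle_deg is the
    angular separation within each pair. A model generator could place joint
    frames for the six legs at these coordinates.
    """
    points = []
    for k in range(3):
        center = phase_deg + 120.0 * k
        for sign in (-0.5, 0.5):
            a = math.radians(center + sign * pair_angle_deg)
            points.append((radius * math.cos(a), radius * math.sin(a)))
    return points

base = stewart_attachment_points(radius=0.5, pair_angle_deg=30.0)
top = stewart_attachment_points(radius=0.3, pair_angle_deg=30.0, phase_deg=60.0)
print(len(base), len(top))  # 6 6
```

Pairing each base point with the nearest top point yields the six leg axes whose lengths the inverse kinematics, and hence the controller, must track.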
dc.language.iso en_US
dc.publisher Graduate School
dc.sdg.type Goal 9: Industry, Innovation and Infrastructure
dc.subject artificial intelligence
dc.subject yapay zeka
dc.subject deep learning
dc.subject derin öğrenme
dc.subject simulation
dc.subject simülasyon
dc.subject Stewart platform
dc.subject Stewart Platformu
dc.subject reinforcement learning
dc.subject pekiştirmeli öğrenme
dc.subject machine learning
dc.subject makine öğrenimi
dc.title Deep reinforcement learning approach in control of Stewart platform - simulation and control
dc.title.alternative Stewart platformunun kontrolünde derin pekiştirmeli öğrenme yaklaşımı - simülasyon ve kontrol
dc.type Doctoral Thesis