
Distributed Learning of Decentralized Control Policies for Articulated Mobile Robots

  • Author: Sartoretti, Guillaume ; Paivine, William ; Shi, Yunfei ; Wu, Yue ; Choset, Howie
  • Subjects: Articulated robots ; Decentralized control ; distributed learning ; Educational robots ; reinforcement learning (RL) ; Robot kinematics ; robotic learning ; Shape ; Snake robots ; Training
  • Is part of: IEEE Transactions on Robotics, 2019-10, Vol. 35 (5), pp. 1109-1122 [peer-reviewed journal]
  • Description: State-of-the-art distributed algorithms for reinforcement learning rely on multiple independent agents, which simultaneously learn in parallel environments while asynchronously updating a common, shared policy. Moreover, decentralized control architectures (e.g., central pattern generators) can coordinate spatially distributed portions of an articulated robot to achieve system-level objectives. In this paper, we investigate the relationship between distributed learning and decentralized control by learning decentralized control policies for the locomotion of articulated robots in challenging environments. To this end, we present an approach that leverages the structure of the asynchronous advantage actor-critic (A3C) algorithm to provide a natural means of learning decentralized control policies on a single articulated robot. Our primary contribution shows that the individual agents in the A3C algorithm can be defined by independently controlled portions of the robot's body, thus enabling distributed learning on a single robot for efficient hardware implementation. We present results of closed-loop locomotion in unstructured terrains on a snake robot and a hexapod robot, using decentralized controllers learned offline and online, respectively, to cover the key applications of our approach. For the snake robot, we optimize forward progression in unstructured environments, whereas for the hexapod robot the goal is to maintain a stabilized body pose. Our results show that the proposed approach can be adapted to many different types of articulated robots by controlling some of their independent parts in a distributed manner, and that the decentralized policy can be trained with high sample efficiency. (See the sketch after this record.)
  • Publisher: IEEE
  • Language: English

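The description above centers on one structural idea: each independently controlled portion of a single robot acts as its own learning agent, while all portions asynchronously update one shared policy. The sketch below illustrates only that shared-parameter, per-segment structure; it is not the authors' implementation. The segment count, observation/action sizes, linear policy, toy reward, and gradient estimate are all assumptions made for the example.

```python
# Minimal illustrative sketch (NOT the paper's code): several "segment agents"
# of one robot asynchronously update a single shared policy, in the spirit of
# A3C where each worker is a body portion rather than a separate environment.
# Segment count, dimensions, the linear policy, and the toy reward/gradient
# estimate are assumptions made purely for illustration.

import threading
import numpy as np

N_SEGMENTS = 6            # hypothetical: e.g., hexapod legs or snake-robot modules
OBS_DIM, ACT_DIM = 4, 2   # hypothetical local observation/action sizes

shared_theta = np.zeros((OBS_DIM, ACT_DIM))  # the common, shared policy parameters
lock = threading.Lock()                      # keeps the toy asynchronous update race-free

def local_rollout(theta, rng):
    """Toy rollout for one segment: returns a crude policy-gradient estimate."""
    obs = rng.normal(size=OBS_DIM)                    # local proprioceptive reading
    action = obs @ theta + rng.normal(size=ACT_DIM)   # noisy linear policy
    reward = -np.sum(action ** 2)                     # hypothetical local reward signal
    return reward * np.outer(obs, action)             # score-function-style estimate

def segment_agent(segment_id, steps=200, lr=1e-3):
    """One independently controlled body portion acting as a learning agent."""
    global shared_theta
    rng = np.random.default_rng(segment_id)
    for _ in range(steps):
        theta_copy = shared_theta.copy()        # pull the current shared policy
        grad = local_rollout(theta_copy, rng)   # learn from local experience only
        with lock:
            shared_theta += lr * grad           # asynchronous update of the shared policy

threads = [threading.Thread(target=segment_agent, args=(i,)) for i in range(N_SEGMENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Shared policy after distributed per-segment updates:\n", shared_theta)
```

In the paper, this role is played by the A3C actor-critic update with robot-specific rewards (forward progression for the snake robot, pose stabilization for the hexapod); the sketch only mirrors the per-segment, shared-policy structure that the abstract describes.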