Reinforcement Learning of Contact Preferability in Multi-Contact Locomotion Planning for Humanoids
Abstract
In this paper, we propose a multi-contact locomotion planning framework for humanoid robots that selects target contacts by reinforcement learning (RL), accounting for their long-term preferability, within optimization-based motion generation under feasibility constraints. In multi-contact locomotion, where humanoid robots must perform complex motions subject to kinematic constraints and static equilibrium, it is difficult to predict how the next target contact will affect the robot's future motion. To address this problem, we use as the RL reward the preferability of the motion planned by the optimization-based motion planner to reach the target contact, which we define as contact preferability. This enables us to train a policy that provides contacts with high future preferability without hand-designing a measure of their future promise. We also propose an RL action space design based on the robot's reachability. We construct sets of feasible joint angles for each limb of the robot, called successors, and use them as the action space instead of handling contact poses directly. By defining a deterministic mapping from a successor to a target contact, the proposed framework can handle acyclic multi-contact motion in which the number of contacts varies. We evaluate the proposed framework in three scenarios and show that it plans preferable contact sequences for multi-contact locomotion with a high success rate and short computation time.
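To illustrate the structure described above, the following is a minimal Python sketch (not the authors' code) of one RL environment step: the action indexes a precomputed successor (a feasible joint configuration for one limb), a deterministic mapping turns that successor into a target contact, and the reward is the preferability of the motion the optimization-based planner finds to reach it. All names and bodies (Successor, successor_to_contact, plan_motion, preferability) are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List, Optional
import random

@dataclass
class Successor:
    limb: str                  # which limb moves (or detaches)
    joint_angles: List[float]  # a feasible joint configuration for that limb

@dataclass
class Contact:
    limb: str
    pose: List[float]          # target contact pose in the world frame

def successor_to_contact(s: Successor) -> Contact:
    """Deterministic mapping from a successor to a target contact
    (stand-in for forward kinematics of the chosen limb)."""
    pose = [sum(s.joint_angles), 0.0, 0.0]  # placeholder FK
    return Contact(limb=s.limb, pose=pose)

def plan_motion(contact: Contact) -> Optional[float]:
    """Stand-in for the optimization-based motion planner.
    Returns a cost if a feasible motion to `contact` exists, else None."""
    cost = abs(contact.pose[0])          # placeholder objective value
    return cost if cost < 2.0 else None  # pretend infeasibility beyond a bound

def preferability(cost: Optional[float]) -> float:
    """Contact preferability used as the RL reward: low planner cost
    maps to high reward; an infeasible plan is penalized."""
    return -1.0 if cost is None else 1.0 / (1.0 + cost)

# One environment step: the policy picks among precomputed successors.
successors = [Successor("left_hand", [0.3, -0.2]),
              Successor("right_foot", [0.8, 0.4])]
action = random.randrange(len(successors))  # stand-in for the learned policy
reward = preferability(plan_motion(successor_to_contact(successors[action])))
print(f"action={action}, reward={reward:.3f}")
```

Because the policy only ever chooses among precomputed feasible successors, the action space stays discrete and reachable by construction, while the planner-derived reward carries the long-term preferability signal during training.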