
Multi-expert learning of adaptive legged locomotion

Abstract

Achieving versatile robot locomotion requires motor skills that can adapt to previously unseen situations. We propose a multi-expert learning architecture (MELA) that learns to generate adaptive skills from a group of representative expert skills. During training, MELA is first initialized by a distinct set of pretrained experts, each in a separate deep neural network (DNN). Then, by learning the combination of these DNNs using a gating neural network (GNN), MELA can acquire more specialized experts and transitional skills across various locomotion modes. During runtime, MELA constantly blends multiple DNNs and dynamically synthesizes a new DNN to produce adaptive behaviors in response to changing situations. This approach combines the advantages of trained expert skills with fast online synthesis of adaptive policies, yielding responsive motor skills as tasks change. Using one unified MELA framework, we demonstrated successful multi-skill locomotion on a real quadruped robot that performed coherent trotting, steering, and fall recovery autonomously, and showed the merit of multi-expert learning in generating behaviors that adapt to unseen scenarios.
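The abstract describes a mixture-of-experts style architecture in which a gating network blends several expert DNNs into a single synthesized policy at runtime. The sketch below is not the authors' implementation; it is a minimal illustration of that general idea, assuming a simple feedforward setup with hypothetical state/action dimensions, hidden sizes, and expert count, where the gating weights blend expert parameters and the blended network is evaluated on the current state.

# Minimal sketch of a gating network blending expert policies in parameter
# space. All sizes and the number of experts are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, HIDDEN, NUM_EXPERTS = 48, 12, 256, 8  # hypothetical

def make_expert() -> nn.Sequential:
    # One expert policy network (e.g., trotting or fall recovery).
    return nn.Sequential(
        nn.Linear(STATE_DIM, HIDDEN), nn.Tanh(),
        nn.Linear(HIDDEN, ACTION_DIM),
    )

class MultiExpertPolicy(nn.Module):
    # Gating network produces blending weights; expert parameters are mixed
    # into one synthesized network, which is then run on the state.
    def __init__(self, num_experts: int = NUM_EXPERTS):
        super().__init__()
        self.experts = nn.ModuleList(make_expert() for _ in range(num_experts))
        self.gate = nn.Sequential(
            nn.Linear(STATE_DIM, HIDDEN), nn.Tanh(),
            nn.Linear(HIDDEN, num_experts),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        alpha = F.softmax(self.gate(state), dim=-1)  # (batch, num_experts)
        a = alpha[0]  # for simplicity, assume batch size 1
        # Blend each expert's parameters with the gating weights.
        blended = {
            name: sum(a[i] * dict(e.named_parameters())[name]
                      for i, e in enumerate(self.experts))
            for name, _ in self.experts[0].named_parameters()
        }
        # Evaluate the synthesized network on the current state.
        return torch.func.functional_call(self.experts[0], blended, (state,))

policy = MultiExpertPolicy()
action = policy(torch.randn(1, STATE_DIM))
print(action.shape)  # torch.Size([1, 12])

Blending in parameter space (rather than averaging expert outputs) matches the abstract's description of dynamically synthesizing a new DNN from the experts at each moment; output-space blending would be a simpler but different design choice.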

This paper was published in Edinburgh Research Explorer.
