Generalization and Transferability in Reinforcement Learning

Abstract

Reinforcement learning has proven capable of extending the applicability of machine learning to domains in which knowledge cannot be acquired from labeled examples but only via trial and error. Being able to solve problems with such characteristics is a crucial requirement for autonomous agents that can accomplish tasks without human intervention. However, most reinforcement learning algorithms are designed to solve exactly one task and offer no means of systematically reusing knowledge previously acquired on other problems. Motivated by insights from homotopic continuation methods, in this work we investigate approaches based on optimization theory and concurrent systems theory to gain an understanding of the conceptual and technical challenges of knowledge transfer in reinforcement learning domains. Building upon these findings, we present an algorithm based on contextual relative entropy policy search that allows an agent to generate a structured sequence of learning tasks guiding its learning towards a target distribution of tasks, by giving it control over an otherwise hidden context distribution. The presented algorithm is evaluated on a number of robotic tasks in which a desired system state needs to be reached, demonstrating that the proposed learning scheme helps to increase and stabilize learning performance.
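To illustrate the core idea described above, the following is a minimal, hypothetical sketch (not the thesis's actual contextual relative entropy policy search algorithm) of an agent-controlled context distribution that is gradually guided toward a target distribution of tasks. It assumes a one-dimensional Gaussian context distribution; all names, parameters, and the simulated returns are illustrative placeholders for the real RL inner loop.

```python
import numpy as np

def kl_gaussians(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between two univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def update_context_distribution(mu, var, mu_target, var_target,
                                contexts, returns, alpha=0.1, lr=0.5):
    """One curriculum step: re-estimate the context distribution from
    reward-weighted samples, then blend it toward the target distribution."""
    w = np.exp(returns - returns.max())            # soft weights from episodic returns
    w /= w.sum()
    mu_perf = np.sum(w * contexts)                 # reward-weighted context mean
    var_perf = np.sum(w * (contexts - mu_perf) ** 2) + 1e-6
    mu_new = (1 - alpha) * ((1 - lr) * mu + lr * mu_perf) + alpha * mu_target
    var_new = (1 - alpha) * ((1 - lr) * var + lr * var_perf) + alpha * var_target
    return mu_new, var_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu, var = 0.0, 0.25          # start with easy tasks (hypothetical scale)
    mu_t, var_t = 3.0, 1.0       # target task distribution
    for step in range(50):
        contexts = rng.normal(mu, np.sqrt(var), size=32)
        # Placeholder for the RL inner loop: the agent performs best near
        # the tasks it currently trains on.
        returns = -np.abs(contexts - mu) + rng.normal(0.0, 0.1, size=32)
        mu, var = update_context_distribution(mu, var, mu_t, var_t, contexts, returns)
    print(f"final context distribution: mean={mu:.2f}, var={var:.2f}, "
          f"KL to target={kl_gaussians(mu, var, mu_t, var_t):.3f}")
```

In this sketch the blending weight alpha plays the role of the pull toward the target task distribution, while the reward-weighted re-estimation keeps the sampled tasks within the agent's current competence; the actual algorithm in the thesis enforces this trade-off through the contextual relative entropy policy search formulation.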

This paper was published in TUbiblio.
