RoboCup soccer competitions are considered among the most
challenging multi-robot adversarial environments, due to their high dynamism
and the partial observability of the environment. In this paper
we introduce a method based on a combination of Monte Carlo search
and data aggregation (MCSDA) to adapt discrete-action soccer policies
for a defender robot to the strategy of the opponent team. Using
a simple representation of the domain, a supervised learning algorithm is
first trained on an initial collection of data gathered from several simulations
of human expert policies. Monte Carlo policy rollouts are then generated
and aggregated to previous data to improve the learned policy over
multiple epochs and games. The proposed approach has been extensively
tested both on a soccer-dedicated simulator and on real robots. Using
this method, our learning robot soccer team intercepts the ball more
often and concedes fewer goals by the opponents. Alongside this improved
performance, the method also yields an overall more efficient positioning
of the whole team within the field.
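The learn-then-aggregate loop outlined above (train on expert demonstrations, generate Monte Carlo rollouts under the current policy, relabel visited states with the best-scoring action, aggregate into the dataset, and retrain) can be sketched as follows. This is a toy illustration under assumed simplifications, not the paper's actual simulator or learner: a 1-D "chase the ball" domain with a static ball, a tabular majority-vote classifier in place of a real supervised learner, and hypothetical helper names (`mc_value`, `mcsda`).

```python
import random

ACTIONS = [-1, 0, +1]           # discrete moves for the defender robot

def expert_action(state):
    """Expert demonstrator: move toward the ball (toy stand-in)."""
    robot, ball = state
    return +1 if robot < ball else (-1 if robot > ball else 0)

def step(state, action):
    robot, ball = state
    return (robot + action, ball)   # ball is static in this toy domain

def reward(state):
    robot, ball = state
    return -abs(robot - ball)       # closer to the ball is better

class TabularPolicy:
    """Majority-vote classifier over the aggregated (state, action) data."""
    def __init__(self):
        self.data = []              # aggregated dataset D
    def fit(self, pairs):
        self.data.extend(pairs)     # aggregation: keep all past labels
    def act(self, state):
        votes = [a for s, a in self.data if s == state]
        return max(set(votes), key=votes.count) if votes else 0

def mc_value(policy, state, action, depth=3):
    """Monte Carlo rollout: apply `action`, then follow the current policy."""
    s = step(state, action)
    total = reward(s)
    for _ in range(depth):
        s = step(s, policy.act(s))
        total += reward(s)
    return total

def mcsda(epochs=3, episodes=5, horizon=6, seed=0):
    rng = random.Random(seed)
    policy = TabularPolicy()
    # Initial dataset: supervised learning over expert demonstrations.
    demo = []
    for _ in range(episodes):
        s = (rng.randint(0, 5), rng.randint(0, 5))
        for _ in range(horizon):
            a = expert_action(s)
            demo.append((s, a))
            s = step(s, a)
    policy.fit(demo)
    # Aggregation epochs: label visited states with the best rollout action.
    for _ in range(epochs):
        new = []
        for _ in range(episodes):
            s = (rng.randint(0, 5), rng.randint(0, 5))
            for _ in range(horizon):
                best = max(ACTIONS, key=lambda a: mc_value(policy, s, a))
                new.append((s, best))
                s = step(s, policy.act(s))
        policy.fit(new)             # aggregate new labels with old data
    return policy
```

The aggregation step mirrors DAgger-style imitation learning: rather than discarding old data, each epoch's Monte Carlo labels are appended to the dataset, so the classifier is always retrained on the union of expert and rollout-derived examples.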