
Designing and Implementing a Platform for Collecting Multi-Modal Data of Human-Robot Interaction

Abstract

This paper details a method of collecting video and audio recordings of people interacting with a simple robot interlocutor. The interaction is recorded via a number of cameras and microphones mounted on and around the robot. The system utilised a number of technologies to engage with interlocutors, including OpenCV, Python, and Max MSP. Interactions were collected over a three-month period at The Science Gallery in Trinity College Dublin. Visitors to the gallery freely engaged with the robot, and their interactions were spontaneous and non-scripted. The robot's dialogue followed a set pattern of utterances to engage interlocutors in a simple conversation. A large number of audio and video recordings were collected over this period.
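
As a rough illustration of the kind of camera capture that OpenCV and Python make possible for a platform like this, the sketch below records frames from a single camera to a video file. The camera index, codec, frame rate, and output filename are assumptions made for illustration; the authors' actual multi-camera, multi-microphone pipeline and the Max MSP dialogue component are not reproduced here.

# Minimal sketch: record video from one camera with OpenCV.
# Camera index, codec, fps, and output path are illustrative assumptions,
# not details taken from the paper.
import cv2

def record_video(camera_index=0, out_path="interaction.avi", fps=30.0):
    cap = cv2.VideoCapture(camera_index)           # open the camera
    if not cap.isOpened():
        raise RuntimeError("Could not open camera %d" % camera_index)

    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"XVID")       # codec for the .avi container
    writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))

    try:
        while True:
            ok, frame = cap.read()                 # grab one frame
            if not ok:
                break
            writer.write(frame)                    # append the frame to the file
            cv2.imshow("recording", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # stop on 'q'
                break
    finally:
        cap.release()
        writer.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    record_video()

In a deployment with several cameras, one such capture loop would typically run per device (or per thread/process), with timestamps used to align the streams afterwards; audio capture and the scripted dialogue would be handled separately, for example in Max MSP as the abstract indicates.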

This paper was published in Arrow@TUDublin.
