Users Activity Gesture Recognition on Kinect Sensor Using Convolutional Neural Networks and FastDTW for Controlling Movements of a Mobile Robot
IBERAMIA: Sociedad Iberoamericana de Inteligencia Artificial
Abstract
In this paper, we use data from the Microsoft Kinect sensor, which processes the captured image of a person and extracts the joint information in every frame. We then propose the creation of an image derived from all the sequential frames of a gesture movement, which facilitates training a convolutional neural network (CNN). We trained the CNN using two strategies: combined training and individual training. Both strategies were evaluated on the MSRC-12 dataset, obtaining accuracy rates of 86.67% with combined training and 90.78% with individual training. The trained network was then used to classify data captured from the Kinect with a person, obtaining accuracy rates of 72.08% with combined training and 81.25% with individual training. Finally, we use the system to send commands to a mobile robot in order to control it.
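The abstract does not specify the exact encoding used to turn a gesture's frame sequence into an image. A minimal sketch of one plausible scheme, assuming each frame contributes one image row and the flattened (x, y, z) joint coordinates form the columns, normalized to 8-bit grayscale (the function name, frame count, and 20-joint layout are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def gesture_to_image(frames):
    """Encode a gesture (n_frames x n_joints x 3 joint coordinates) as a
    2-D grayscale image: rows are frames, columns are flattened coordinates,
    values normalized to the 0-255 range."""
    data = np.asarray(frames, dtype=np.float64)   # (n_frames, n_joints, 3)
    flat = data.reshape(data.shape[0], -1)        # (n_frames, n_joints * 3)
    lo, hi = flat.min(), flat.max()
    norm = (flat - lo) / (hi - lo) if hi > lo else np.zeros_like(flat)
    return (norm * 255).astype(np.uint8)

# Example: a 30-frame gesture with 20 Kinect skeleton joints
rng = np.random.default_rng(0)
gesture = rng.uniform(-1.0, 1.0, size=(30, 20, 3))
img = gesture_to_image(gesture)
print(img.shape)  # (30, 60)
```

An encoding like this gives every gesture a fixed-width image whose height varies with gesture length, which a CNN can consume after resizing or padding to a common input size.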