
Real-Time Collaborative Robot Control Using Hand Gestures Recognised By Deep Learning

Abstract

In an ever-changing and demanding world, new technologies that make industrial processes easier and more efficient are needed. Until now, traditional vision technologies and algorithms have dominated the industrial field. Although these techniques achieve good results on simple vision tasks, they are severely limited, since any change in the processed image affects their performance. For example, in code-reading tasks, if the code is marked or not completely visible, the piece carrying it is discarded, which causes losses for the company. These kinds of problems can be solved by applying machine learning techniques to vision tasks. Such techniques learn from example images, and although perfect performance is difficult to achieve, they are far more flexible than traditional methods. Even though the term machine learning was first coined in 1959, these techniques have barely been implemented in the industrial field until now; they have mostly been used for research purposes. Beyond the new vision techniques, new types of robots, such as collaborative and social robots, are being introduced into industrial environments. On the one hand, collaborative robots allow workers to operate next to, or together with, the robot without any physical barrier between them. On the other hand, social robots enable easier communication between the robot and the user, which can be applied in different parts of industry, such as introducing the company to new visitors. The present project gathers information on the analysis, training and implementation of Cognex ViDi, a vision software based on artificial neural networks. Using this software, three different vision tasks were trained.
The most important is the hand gesture recognition task, since the recognised hand gesture controls the action performed by the YuMi robot, which is programmed in the RAPID language. The development of these artificial neural networks for industrial purposes is expected to demonstrate the applicability of machine learning techniques in an industrial environment. In addition, hand gesture recognition offers an easy way to control the movements of a robot, one that could be used by a person with no knowledge of robots or programming. Finally, the use of a two-arm collaborative robot could show the potential and versatility of collaborative robots for industrial purposes.
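The control loop described above, in which a recognised hand gesture selects the action the robot performs, could be sketched as follows. This is an illustrative sketch only: the gesture labels, action names, and the confidence threshold are hypothetical examples, not the actual classes or values used in the project.

```python
# Illustrative sketch: map a recognised hand gesture to a robot action.
# Gesture labels and action names below are hypothetical, not taken
# from the project's trained network.

GESTURE_ACTIONS = {
    "open_palm": "stop",
    "fist": "close_gripper",
    "point_left": "move_left",
    "point_right": "move_right",
}

def gesture_to_action(gesture: str, confidence: float,
                      threshold: float = 0.8) -> str:
    """Return the robot action for a recognised gesture.

    Predictions below the confidence threshold are ignored, so the
    robot only reacts to confident classifications; unknown gestures
    also map to "idle".
    """
    if confidence < threshold:
        return "idle"
    return GESTURE_ACTIONS.get(gesture, "idle")

if __name__ == "__main__":
    print(gesture_to_action("fist", 0.95))      # close_gripper
    print(gesture_to_action("open_palm", 0.5))  # idle (low confidence)
```

In a real setup, the returned action string would then be sent (for example, over a socket) to the RAPID program running on the robot controller, which dispatches to the corresponding motion routine.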

UPCommons. Portal del coneixement obert de la UPC

Last time updated on 19/11/2020
