
Antipodal Robotic Grasping using Deep Learning

Abstract

In this work, we discuss two implementations that predict antipodal grasps for novel objects: a deep Q-learning approach and a Generative Residual Convolutional Neural Network approach. We present a deep reinforcement learning-based method to solve the problem of robotic grasping using visuomotor feedback. The use of a deep learning-based approach reduces the complexity introduced by hand-designed features. Our method uses an off-policy reinforcement learning framework to learn the grasping policy. We use the double deep Q-learning framework along with a novel Grasp-Q-Network to output grasp probabilities used to learn grasps that maximize pick success. We propose a visual servoing mechanism that uses a multi-view camera setup to observe the scene containing the objects of interest. We performed experiments in a simulated Baxter Gazebo environment as well as on the real robot. The results show that our proposed method outperforms the baseline Q-learning framework and that adopting a multi-view model increases grasping accuracy over a single-view model. The second method tackles the problem of generating antipodal robotic grasps for unknown objects from an n-channel image of the scene. We propose a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel input at real-time speeds (20 ms). We evaluate the proposed model architecture on a standard dataset and on previously unseen household objects. We achieve state-of-the-art accuracy of 97.7% on the Cornell grasp dataset and demonstrate a 93.5% grasp success rate on previously unseen real-world objects.
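To make the first approach concrete, the sketch below shows a double deep Q-learning update of the kind the abstract describes. The `GraspQNetwork` module, its layer sizes, the discretized grasp-action space, and the toy tensors are illustrative assumptions, not the thesis's actual Grasp-Q-Network or training pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspQNetwork(nn.Module):
    """Illustrative stand-in for a Grasp-Q-Network: maps a camera
    observation to Q-values over a discretized set of grasp actions."""
    def __init__(self, num_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(obs))

def double_dqn_loss(online, target, obs, actions, rewards, next_obs, dones, gamma=0.99):
    """Double DQN update: the online network selects the next action,
    the frozen target network evaluates it, reducing Q-value overestimation."""
    q_pred = online(obs).gather(1, actions).squeeze(1)
    with torch.no_grad():
        next_actions = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, next_actions).squeeze(1)
        q_target = rewards + gamma * (1.0 - dones) * next_q
    return F.smooth_l1_loss(q_pred, q_target)

# Toy usage with random tensors standing in for camera observations.
online_net, target_net = GraspQNetwork(16), GraspQNetwork(16)
obs = torch.randn(8, 3, 84, 84)
actions = torch.randint(0, 16, (8, 1))
rewards = torch.rand(8)          # e.g. 1.0 for a successful pick, 0.0 otherwise
dones = torch.ones(8)            # treating each grasp attempt as a single-step episode
loss = double_dqn_loss(online_net, target_net, obs, actions, rewards, obs, dones)
loss.backward()
```

The key design point is that the action used to bootstrap is chosen by the online network but valued by the target network, which is what distinguishes double DQN from the standard Q-learning baseline mentioned in the abstract.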

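For the second approach, the following is a minimal sketch of a GR-ConvNet-style generative residual model: a convolutional encoder, a stack of residual blocks, and a transposed-convolution decoder that emits per-pixel grasp maps from an n-channel (here RGB-D) input. The exact layer counts, channel widths, and the quality/angle/width output heads are assumptions made for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Simple residual block used between the encoder and decoder."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class GenerativeResidualGraspNet(nn.Module):
    """Illustrative GR-ConvNet-style model: an n-channel image is encoded,
    passed through residual blocks, and decoded into per-pixel grasp
    quality, angle (encoded as cos/sin of 2*theta), and gripper width maps."""
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 9, stride=1, padding=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.res_blocks = nn.Sequential(*[ResidualBlock(128) for _ in range(5)])
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.quality = nn.Conv2d(32, 1, 1)
        self.cos2 = nn.Conv2d(32, 1, 1)
        self.sin2 = nn.Conv2d(32, 1, 1)
        self.width = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        f = self.decoder(self.res_blocks(self.encoder(x)))
        return self.quality(f), self.cos2(f), self.sin2(f), self.width(f)

# Toy usage: pick the pixel with the highest predicted grasp quality and
# decode the grasp angle from the cos/sin maps.
model = GenerativeResidualGraspNet(in_channels=4)
rgbd = torch.randn(1, 4, 224, 224)                      # stand-in for a normalized RGB-D image
q, cos2t, sin2t, width = model(rgbd)
center = torch.nonzero(q[0, 0] == q[0, 0].max())[0]     # grasp center pixel
angle = 0.5 * torch.atan2(sin2t[0, 0, center[0], center[1]],
                          cos2t[0, 0, center[0], center[1]])
```

Encoding the angle as cos/sin of twice the grasp angle is a common trick for antipodal grasps, since a rotation by 180 degrees yields the same grasp; whether the thesis uses this exact parameterization is an assumption here.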
This paper was published in RIT Scholar Works.
