Authors
Sergey Levine, Peter Pastor, Alex Krizhevsky, Deirdre Quillen
Publication date
2016/3/7
Journal
arXiv preprint arXiv:1603.02199
Description
Abstract: We describe a learning-based approach to hand-eye coordination for robotic
grasping from monocular images. To learn hand-eye coordination for grasping, we trained a
large convolutional neural network to predict the probability that task-space motion of the
gripper will result in successful grasps, using only monocular camera images and
independently of camera calibration or the current robot pose. This requires the network to
observe the spatial relationship between the gripper and objects in the scene, thus ...
Total citations
20167
Scholar articles
Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
S Levine, P Pastor, A Krizhevsky, D Quillen - arXiv preprint arXiv:1603.02199, 2016