Google X’s Deep Reinforcement Learning in Robotics using Vision

Written by sagarsharma4244 | Published 2018/06/12
Tech Story Tags: machine-learning | ai | google | robotics | research


#3 Research Paper Explained

Google is famous for its cutting-edge technology and projects, including the Self-Driving Car, Project Loon (internet balloons), Project Ara, and the list goes on. But a lot of research happens behind the scenes, yielding interesting research papers that give us access to and insight into these fun experiments, and encourage us to replicate them ourselves and build further to push the boundaries.

Project Ara | Source

The Learning Robots Project by Google X has published QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation, a paper that tries to master the seemingly simple task of picking up and grasping objects of different shapes, aiming to replicate a common human activity.

Source (Look at robot no. 6 learning stuff)

And the success rate is fascinating.

This experiment uses 7 robotic arms that ran for 800 hours over the course of 4 months, grasping objects placed in front of them. Each arm uses an RGB camera (image above) with a resolution of 472x472. The closed-loop vision-based control system is based on a general formulation of robotic manipulation as a Markov Decision Process (MDP): at every time step the robot observes the camera image, picks an action, and is rewarded for a successful grasp.
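To make the MDP concrete, here is a minimal sketch of what the state, action, and reward could look like for this grasping task. The field names are my own illustrative assumptions, not the paper's exact definitions; the paper's action does include a 3D gripper displacement, a yaw rotation, open/close commands, and a termination flag.

```python
from dataclasses import dataclass
import numpy as np

# Illustrative state/action for the grasping MDP (field names are assumptions).
@dataclass
class State:
    image: np.ndarray        # 472x472 RGB camera observation
    gripper_open: bool       # current gripper status

@dataclass
class Action:
    translation: np.ndarray  # desired gripper displacement in 3D (x, y, z)
    yaw_rotation: float      # rotation of the gripper about the vertical axis
    close_gripper: bool      # command to open/close the gripper
    terminate: bool          # end the episode after a grasp attempt

# The reward is sparse: 1 for a successful grasp, 0 otherwise.
def reward(grasp_succeeded: bool) -> float:
    return 1.0 if grasp_succeeded else 0.0
```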

Same objects with Different Colours | Source

To be efficient, off-policy reinforcement learning is used, which has the ability to learn from data collected hours, days, or even weeks ago.
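In practice, off-policy learning means training from a stored buffer of past transitions instead of only the data the current policy generates. Here is a minimal sketch of such a replay buffer; this is a common pattern, not the paper's exact implementation.

```python
import random
from collections import deque

# A simple replay buffer: off-policy methods can sample transitions
# collected by older policies, hours or weeks in the past.
class ReplayBuffer:
    def __init__(self, capacity=1_000_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)
```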

The QT-Opt algorithm is designed by combining two methods:

1. Large-scale distributed optimization (using multiple robots to train the model faster, making it a large-scale distributed system)

2. Deep Q-learning (an RL technique used to learn a policy, which tells an agent which action to take under which circumstances; a sketch of the core update follows below)
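The heart of Q-learning is the Bellman update: the Q-function is trained to predict the reward plus the discounted value of the best next action. A minimal sketch, where target_q is a hypothetical Q-function and candidate_actions is a set of sampled actions (both names are assumptions):

```python
GAMMA = 0.9  # discount factor; the paper's exact value may differ

def bellman_target(reward, next_state, done, target_q, candidate_actions):
    """Q-learning target: r + gamma * max_a' Q(s', a')."""
    if done:
        return reward
    # In a continuous action space, the max is approximated by scoring
    # sampled candidate actions (QT-Opt does this with CEM, shown below).
    best_next_q = max(target_q(next_state, a) for a in candidate_actions)
    return reward + GAMMA * best_next_q
```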

What is QT-Opt?

QT-Opt combines large-scale distributed optimization with Q-learning, resulting in a distributed Q-learning algorithm that supports continuous action spaces, making it well suited to robotics problems. Ordinary Q-learning picks the best action by taking a maximum over a finite set of actions, which does not exist for continuous arm motions; QT-Opt instead finds the maximizing action with a stochastic optimizer, the cross-entropy method (CEM).
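Here is a minimal sketch of that CEM maximization, assuming a hypothetical q_network(state, action) that returns a scalar score. The sample counts are illustrative (the paper runs a small number of CEM iterations over a few dozen sampled actions).

```python
import numpy as np

def cem_argmax_q(q_network, state, action_dim,
                 n_samples=64, n_elites=6, n_iters=2):
    """Approximate argmax_a Q(state, a) with the cross-entropy method."""
    mean, std = np.zeros(action_dim), np.ones(action_dim)
    for _ in range(n_iters):
        # Sample candidate actions from the current Gaussian.
        actions = np.random.randn(n_samples, action_dim) * std + mean
        scores = np.array([q_network(state, a) for a in actions])
        # Refit the Gaussian to the top-scoring "elite" actions.
        elites = actions[np.argsort(scores)[-n_elites:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # the optimized action
```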

To keep the robots from going crazy on their initial attempts, the model is initially trained with offline data, which doesn't require real robots and improves scalability.

In this case, the policy takes an image and returns how the arm and gripper should move in 3D space.
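Because the system is closed-loop, this happens at every step: the policy re-reads the camera and can correct itself mid-grasp. A minimal sketch of that loop, where camera, robot, and policy are hypothetical placeholders:

```python
def run_grasp_episode(camera, robot, policy, max_steps=20):
    """Closed-loop control: re-observe and re-plan at every step."""
    for _ in range(max_steps):
        state = camera.capture()   # fresh 472x472 RGB observation
        action = policy(state)     # best action according to the Q-function
        robot.execute(action)      # move the arm, open/close the gripper
        if action.terminate:       # the policy decides when the grasp is done
            break
    return robot.grasp_succeeded()
```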

RESULTS

The results give an unbelievable 96% grasp success rate.

Source

The model learned many new behaviors that are sophisticated and borderline human.

1. When blocks are too close to each other and there is no space for the gripper, the policy separates a block from the rest before picking it up.

Source

2. Having an object swatted out of the gripper was not part of the dataset, but the policy automatically repositions the gripper for another attempt.

Source

I encourage you to read the research paper for more insight.

Follow me on Medium and Twitter for more #ResearchPaperExplained Notifications.

If you have any query about the paper, or want me to explain your favourite paper, comment below.

Clap it… Share it…. and Clap it again.

Previous Stories you will Love:

DeepMind’s Amazing Mix & Match RL Technique | #1 Research Paper Explained | hackernoon.com

What the Hell is “Tensor” in “TensorFlow”? | I didn’t know it… | hackernoon.com

