RGB-stacking πŸ›‘ 🟩 πŸ”· for robotic manipulation

Overview

BLOG | PAPER | VIDEO

Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes,
Alex X. Lee*, Coline Devin*, Yuxiang Zhou*, Thomas Lampe*, Konstantinos Bousmalis*, Jost Tobias Springenberg*, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, Jose Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin Riedmiller, Raia Hadsell, Francesco Nori.
In Conference on Robot Learning (CoRL), 2021.

The RGB environment

This repository contains an implementation of the simulation environment described in the paper "Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes". Note that this is a re-implementation of the environment, done to remove dependencies on internal libraries; as a result, not all of the features described in the paper are available at this point. Notably, domain randomization is not included in this release. We also aim to provide reference performance metrics of trained policies on this environment in the near future.

In this environment, the agent controls a robot arm with a parallel gripper above a basket that contains three objects: one red, one green, and one blue (hence the name RGB). The agent's task is to stack the red object on top of the blue object within 20 seconds, while the green object serves as an obstacle and distractor. The agent controls the robot with a 4D Cartesian controller; the controlled DOFs are x, y, z, and rotation around the z axis. The simulation is a MuJoCo environment built using the Modular Manipulation (MoMa) framework.
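
For a quick feel for the interface, the loop below drives the environment with random actions. This is a minimal sketch that assumes the environment follows the standard dm_env API (reset/step/action_spec), as MoMa-based environments do; it is not part of this repository.

import numpy as np
from rgb_stacking import environment

# Build the default environment (loads Triplet 4; see below for options).
env = environment.rgb_stacking()

# Assumption: action_spec() returns a single bounded array spec, so a
# random action can be sampled uniformly within the controller limits.
spec = env.action_spec()
timestep = env.reset()
while not timestep.last():
    action = np.random.uniform(
        spec.minimum, spec.maximum, size=spec.shape).astype(spec.dtype)
    timestep = env.step(action)
env.close()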

Corresponding method

The RGB-stacking paper "Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes" also contains a description and thorough evaluation of our initial solution to both 'Skill Mastery' (training on the 5 designated test triplets and evaluating on them) and 'Skill Generalization' (training on triplets of training objects and evaluating on the 5 test triplets). Our approach was to first train a state-based policy in simulation via a standard RL algorithm (we used MPO), followed by interactive distillation of the state-based policy into a vision-based policy (using a domain-randomized version of the environment), which we then deployed to the robot via zero-shot sim-to-real transfer. We finally improved the policy further via offline RL, based on data collected with the sim-to-real policy (we used CRR). For details on our method and results, please consult the paper.

Installing and visualizing the environment

Please ensure that you have a working MuJoCo 2.0 (mujoco200) installation and a valid MuJoCo licence.

  1. Clone this repository:

    git clone https://github.com/deepmind/rgb_stacking.git
    cd rgb_stacking
  2. Prepare a Python 3 environment - venv is recommended.

    python3 -m venv rgb_stacking_venv
    source rgb_stacking_venv/bin/activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Run the environment viewer:

    python -m rgb_stacking.main

Steps 2-4 can also be done by running the run.sh script:

./run.sh

Specifying the object triplet

The default environment will load with Triplet 4 (see Sect. 3.2.1 in the paper). If you wish to use a different triplet, you can specify it when constructing the environment:

from rgb_stacking import environment

env = environment.rgb_stacking(object_triplet=NAME_OF_SET)

The possible values of NAME_OF_SET are:

  • rgb_test_triplet{i} where i is one of 1, 2, 3, 4, 5: Loads test triplet i.
  • rgb_test_random: Randomly loads one of the 5 test triplets.
  • rgb_train_random: Triplet comprised of blocks from the training set.
  • rgb_heldout_random: Triplet comprised of blocks from the held-out set.

For more information on the blocks and the possible options, please refer to the rgb_objects repository.

Specifying the observation space

By default, the environment exposes only the observations we used for training our state-based agents. To use another set of observations, please use the following code snippet:

from rgb_stacking import environment

env = environment.rgb_stacking(
    observations=environment.ObservationSet.CHOSEN_SET)

The possible values of CHOSEN_SET are:

  • STATE_ONLY: Only the state observations, used for training expert policies from state in simulation (stage 1).
  • VISION_ONLY: Only image observations.
  • ALL: All observations.
  • INTERACTIVE_IMITATION_LEARNING: Pair of image observations and a subset of proprioception observations, used for interactive imitation learning (stage 2).
  • OFFLINE_POLICY_IMPROVEMENT: Pair of image observations and a subset of proprioception observations, used for the one-step offline policy improvement (stage 3).

Real RGB-Stacking Environment: CAD models and assembly instructions

The CAD model of the setup is available in Onshape.

We also provide the following documents for the assembly of the real cell:

  • Assembly instructions for the basket.
  • Assembly instructions for the robot.
  • Assembly instructions for the cell.
  • The bill of materials of all the necessary parts.
  • A diagram with the wiring of cell.

The RGB-objects themselves can be 3D-printed using the STLs available in the rgb_objects repository.

Citing

If you use rgb_stacking in your work, please cite the accompanying paper:

@inproceedings{lee2021rgbstacking,
    title={Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes},
    author={Alex X. Lee and
            Coline Devin and
            Yuxiang Zhou and
            Thomas Lampe and
            Konstantinos Bousmalis and
            Jost Tobias Springenberg and
            Arunkumar Byravan and
            Abbas Abdolmaleki and
            Nimrod Gileadi and
            David Khosid and
            Claudio Fantacci and
            Jose Enrique Chen and
            Akhil Raju and
            Rae Jeong and
            Michael Neunert and
            Antoine Laurens and
            Stefano Saliceti and
            Federico Casarini and
            Martin Riedmiller and
            Raia Hadsell and
            Francesco Nori},
    booktitle={Conference on Robot Learning (CoRL)},
    year={2021},
    url={https://openreview.net/forum?id=U0Q8CrtBJxJ}
}