Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration


CoGAIL

Table of Contents

  • Overview
  • Installation
  • Dataset
  • Training
  • Evaluation
  • Trained Checkpoints
  • Acknowledgement
  • Citations
  • License

Overview

This repository contains the implementation code for the paper "Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration" (arXiv, Project, Video) by Wang et al. at the Stanford Vision and Learning Lab. In this repo, we provide the full implementation of training and evaluation.

Installation

  • Python 3.6+
conda create -n cogail python=3.6
conda activate cogail
  • The iGibson 1.0 variant for Co-GAIL. For more details on installing iGibson, please refer to Link
git clone https://github.com/j96w/iGibson.git --recursive
cd iGibson
git checkout cogail
python -m pip install -e .

Please also download the iGibson assets (object models, 3D scenes, etc.) following the instructions. The data should be located at your_installation_path/igibson/data/. After downloading the assets, copy the modified robot and humanoid mesh files to this location as follows:

cd urdfs
cp fetch.urdf your_installation_path/igibson/data/assets/models/fetch/.
cp camera.urdf your_installation_path/igibson/data/assets/models/grippers/basic_gripper/.
cp -r humanoid_hri your_installation_path/igibson/data/assets/models/.
  • other requirements
cd cogail
python -m pip install -r requirements.txt
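
After finishing the steps above, an optional sanity check can confirm that the iGibson fork is importable from your environment. A minimal sketch, assuming the fork installs under the package name igibson (as in the standard iGibson 1.0 release); adjust the name if the cogail branch differs:

# Optional check: the printed path should point inside your iGibson clone (editable install).
# Assumption: the package name is "igibson"; it may differ in the cogail branch.
import igibson
print(igibson.__file__)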

Dataset

You can download the collected human-human collaboration demonstrations from Link. The demos for cogail_exp1_2dfq were collected with a pair of joysticks on an Xbox controller. The demos for cogail_exp2_handover and cogail_exp3_seqmanip were collected with two phones on the teleoperation system RoboTurk. After downloading the file, simply unzip it at cogail/ as follows:

unzip dataset.zip
mv dataset your_installation_path/cogail/dataset
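
To confirm the demos ended up in the right place, you can list the unpacked contents. This is a minimal sketch; the folder and file names inside dataset/ depend on the released archive and are not asserted here.

import os
# Run from the cogail/ directory; print whatever the archive unpacked into dataset/.
for entry in sorted(os.listdir("dataset")):
    print(entry)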

Training

There are three environments (cogail_exp1_2dfq, cogail_exp2_handover, cogail_exp3_seqmanip) implemented in this work. Please specify your choice of environment with --env-name:

python scripts/train.py --env-name [cogail_exp1_2dfq / cogail_exp2_handover / cogail_exp3_seqmanip]
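
For readers unfamiliar with GAIL-style training, the toy sketch below illustrates the core idea behind one update: a discriminator learns to separate expert (state, action) pairs from policy-generated ones, and its output serves as a surrogate reward for a policy that is additionally conditioned on a latent strategy code. This is an illustration under stated assumptions only; the dimensions, network classes, and update rules are placeholders and do not mirror the repository's actual modules or hyperparameters.

import torch
import torch.nn as nn

obs_dim, act_dim, code_dim = 8, 2, 2  # toy sizes, not the real environments'

# Discriminator scores (state, action) pairs: expert-like vs. policy-generated.
disc = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))
# Toy policy conditioned on a latent "strategy" code, as described in the paper.
policy = nn.Sequential(nn.Linear(obs_dim + code_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
d_opt = torch.optim.Adam(disc.parameters(), lr=3e-4)

def gail_reward(obs, act):
    # Surrogate reward -log(1 - D(s, a)): large when the pair looks expert-like.
    with torch.no_grad():
        logits = disc(torch.cat([obs, act], dim=-1))
        return -torch.nn.functional.logsigmoid(-logits)

# One discriminator update on random placeholder batches.
expert_obs, expert_act = torch.randn(32, obs_dim), torch.randn(32, act_dim)
policy_obs, code = torch.randn(32, obs_dim), torch.randn(32, code_dim)
policy_act = policy(torch.cat([policy_obs, code], dim=-1)).detach()

bce = nn.BCEWithLogitsLoss()
d_loss = bce(disc(torch.cat([expert_obs, expert_act], dim=-1)), torch.ones(32, 1)) \
       + bce(disc(torch.cat([policy_obs, policy_act], dim=-1)), torch.zeros(32, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

print("discriminator loss:", d_loss.item())
print("mean surrogate reward:", gail_reward(policy_obs, policy_act).mean().item())

In practice the surrogate reward would then drive a policy-gradient update (PPO in the codebase this repository builds on), which is omitted here for brevity.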

Evaluation

Evaluation on unseen human demos (replay evaluation):

python scripts/eval_replay.py --env-name [cogail_exp1_2dfq / cogail_exp2_handover / cogail_exp3_seqmanip]
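
Conceptually, replay evaluation steps a held-out human demonstration through the environment while the learned robot policy responds at every step. The hypothetical function below only illustrates that loop; env, robot_policy, human_demo, strategy_code, and the "success" key are placeholders, not the repository's API.

def replay_eval(env, robot_policy, human_demo, strategy_code):
    """Replay one recorded human trajectory and let the learned robot policy respond."""
    obs = env.reset()
    success = False
    for human_action in human_demo:
        robot_action = robot_policy(obs, strategy_code)
        obs, _, done, info = env.step((human_action, robot_action))
        success = success or info.get("success", False)
        if done:
            break
    return success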

Trained Checkpoints

You can download the trained checkpoints for all three environments from Link.
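
If you want to inspect a downloaded checkpoint before wiring it into the evaluation script, a plain PyTorch load is usually enough. The file name below is a placeholder; use whatever the release archive actually contains.

import torch
# Hypothetical file name; replace with the checkpoint you downloaded.
ckpt = torch.load("checkpoint.pt", map_location="cpu")
print(type(ckpt))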

Acknowledgement

The cogail_exp1_2dfq environment is implemented with Pygame. The cogail_exp2_handover and cogail_exp3_seqmanip environments are implemented in iGibson v1.0.

The demos for robot manipulation in iGibson were collected with RoboTurk.

Code is based on the PyTorch GAIL implementation by ikostrikov (https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail.git).

Citations

Please cite Co-GAIL if you use this repository in your publications:

@article{wang2021co,
  title={Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration},
  author={Wang, Chen and P{\'e}rez-D'Arpino, Claudia and Xu, Danfei and Fei-Fei, Li and Liu, C Karen and Savarese, Silvio},
  journal={arXiv preprint arXiv:2108.06038},
  year={2021}
}

License

Licensed under the MIT License
