ProMP: Proximal Meta-Policy Search

Implementations corresponding to ProMP (Rothfuss et al., 2018). Overall, this repository consists of two branches:

  1. master: lightweight branch that provides the necessary code to run Meta-RL algorithms such as ProMP, E-MAML, MAML. This branch is meant to provide an easy start with Meta-RL and can be integrated into other projects and setups.
  2. full-code: branch that provides the comprehensive code that was used to produce the experimental results in Rothfuss et al. (2018). This includes experiment scripts and plotting scripts that can be used to reproduce the experimental results in the paper.

The code is written in Python 3 and builds on Tensorflow. Many of the provided reinforcement learning environments require the Mujoco physics engine. The code was developed with modularity and computational efficiency in mind. Many components of the Meta-RL algorithm are parallelized using either MPI or Tensorflow in order to ensure efficient use of all CPU cores.

Documentation

An API specification and explanation of the code components can be found here. The documentation can also be built locally by running the following commands:

# ensure that you are in the root folder of the project
cd docs
# install the sphinx documentation tool dependencies
pip install -r requirements.txt
# build the documentation
make clean && make html
# now the html documentation can be found under docs/build/html/index.html

Installation / Dependencies

The provided code can either be run in A) a Docker container provided by us or B) with Python on your local machine. The latter requires multiple installation steps in order to set up the dependencies.

A. Docker

If not installed yet, set up Docker on your machine. Then pull our Docker container jonasrothfuss/promp from Docker Hub:

docker pull jonasrothfuss/promp

All the necessary dependencies are already installed inside the docker container.

B. Anaconda or Virtualenv

B.1. Installing MPI

Ensure that you have a working MPI implementation (see here for more instructions).

For Ubuntu you can install MPI through the package manager:

sudo apt-get install libopenmpi-dev
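
If you want to double-check that the MPI installation is visible from Python, the following minimal mpi4py script can serve as a generic sanity check (it is not part of this repository and assumes mpi4py is installed, e.g. together with the requirements below):

# mpi_check.py -- generic sanity check for the MPI installation (illustrative, not part of this repository)
from mpi4py import MPI

comm = MPI.COMM_WORLD
# every process reports its rank; launch e.g. with: mpirun -n 2 python mpi_check.py
print("Hello from rank %d of %d" % (comm.Get_rank(), comm.Get_size()))
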
B.2. Create either a venv or a conda environment and activate it

Virtualenv

pip install --upgrade virtualenv
virtualenv <venv-name>
source <venv-name>/bin/activate

Anaconda

If not done yet, install Anaconda by following the instructions here. Then create an anaconda environment, activate it and install the requirements in requirements.txt.

conda create -n <env-name> python=3.6
source activate <env-name>

B.3. Install the required python dependencies
pip install -r requirements.txt
B.4. Set up the Mujoco physics engine and mujoco-py

For running the majority of the provided Meta-RL environments, the Mujoco physics engine as well as a corresponding Python wrapper (mujoco-py) are required. For setting up Mujoco and mujoco-py, please follow the instructions here.
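
Before running the Mujoco-based environments, it can be useful to verify that mujoco-py is working. The following snippet is a generic sanity check (not taken from this repository) that builds and steps a minimal simulation; it assumes mujoco-py and a valid Mujoco license key are already installed:

# mujoco_check.py -- generic sanity check that mujoco-py can build and step a simulation
# (illustrative snippet, not part of this repository)
import mujoco_py

MINIMAL_MODEL_XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body pos="0 0 1">
      <joint type="free"/>
      <geom type="sphere" size="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

# build the model and simulation, advance one timestep and print the free body's position
model = mujoco_py.load_model_from_xml(MINIMAL_MODEL_XML)
sim = mujoco_py.MjSim(model)
sim.step()
print("mujoco-py works, free body position:", sim.data.qpos[:3])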

Running ProMP

In order to run the ProMP algorithm on the point environment (no Mujoco needed) with the default configuration, execute:

python run_scripts/pro-mp_run_point_mass.py 

To run the ProMP algorithm in a Mujoco environment with default configurations:

python run_scripts/pro-mp_run_mujoco.py 

The run configuration can be changed either directly in the run script or by providing a JSON configuration file with all the necessary hyperparameters. A JSON configuration file can be provided through the --config_file flag. Additionally, the dump path can be specified through the --dump_path flag:

python run_scripts/pro-mp_run.py --config_file <config_file_path> --dump_path <dump_path>
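
For illustration, the configuration file is a plain JSON dictionary of hyperparameters. The snippet below is a hypothetical sketch of how such a file could be generated from Python; the key names shown are placeholders, and the actual hyperparameters expected are defined in the respective run script (e.g. run_scripts/pro-mp_run.py):

# make_config.py -- hypothetical sketch of writing a JSON run configuration
# NOTE: the hyperparameter names below are illustrative placeholders; consult the
# run script for the actual keys it expects.
import json

config = {
    "meta_batch_size": 40,         # hypothetical: number of tasks sampled per meta-update
    "rollouts_per_meta_task": 20,  # hypothetical: trajectories collected per task
    "num_inner_grad_steps": 1,     # hypothetical: inner adaptation steps
    "inner_lr": 0.1,               # hypothetical: adaptation step size
}

with open("my_config.json", "w") as f:
    json.dump(config, f, indent=2)

The resulting file can then be passed to the run script via the --config_file flag as shown above.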

Additionally, in order to run the gradient-based meta-learning methods MAML and E-MAML (Finn et al., 2017 and Stadie et al., 2018) in a Mujoco environment with the default configuration, execute, respectively:

python run_scripts/maml_run_mujoco.py 
python run_scripts/e-maml_run_mujoco.py 

Cite

To cite ProMP, please use:

@article{rothfuss2018promp,
  title={ProMP: Proximal Meta-Policy Search},
  author={Rothfuss, Jonas and Lee, Dennis and Clavera, Ignasi and Asfour, Tamim and Abbeel, Pieter},
  journal={arXiv preprint arXiv:1810.06784},
  year={2018}
}

Acknowledgements

This repository includes environments introduced in Duan et al. (2016) and Finn et al. (2017).
