Isaac Gym Environments for Legged Robots

Overview

This repository provides the environment used to train ANYmal (and other robots) to walk on rough terrain using NVIDIA's Isaac Gym. It includes all components needed for sim-to-real transfer: actuator network, friction & mass randomization, noisy observations and random pushes during training.
Maintainer: Nikita Rudin
Affiliation: Robotic Systems Lab, ETH Zurich
Contact: [email protected]

Useful Links

Project website: https://leggedrobotics.github.io/legged_gym/
Paper: https://arxiv.org/abs/2109.11978

Installation

  1. Create a new Python virtual environment with Python 3.6, 3.7 or 3.8 (3.8 recommended)
  2. Install pytorch 1.10 with cuda-11.3:
    • pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
  3. Install Isaac Gym
    • Download and install Isaac Gym Preview 3 (Preview 2 will not work!) from https://developer.nvidia.com/isaac-gym
    • cd isaacgym_lib/python && pip install -e .
    • Try running an example: python examples/1080_balls_of_solitude.py
    • For troubleshooting, check the docs at isaacgym/docs/index.html
  4. Install rsl_rl (PPO implementation)
  5. Install legged_gym
    • Clone this repository
    • cd legged_gym && git checkout develop && pip install -e .
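
Once the steps above are done, a quick way to check the installation is a short import smoke test. This is a minimal sketch, not part of the repository; note that Isaac Gym generally has to be imported before torch, otherwise it complains at import time:

    # minimal import smoke test (hedged sketch)
    import isaacgym   # import before torch; Isaac Gym raises an error if torch is already loaded
    import torch
    import legged_gym
    import rsl_rl

    print(torch.__version__, torch.cuda.is_available())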

Code Structure

  1. Each environment is defined by an env file (legged_robot.py) and a config file (legged_robot_config.py). The config file contains two classes: one containing all the environment parameters (LeggedRobotCfg) and one for the training parameters (LeggedRobotCfgPPO).
  2. Both env and config classes use inheritance.
  3. Each non-zero reward scale specified in cfg adds a reward function with the corresponding name to the list of terms that are summed to obtain the total reward.
  4. Tasks must be registered using task_registry.register(name, EnvClass, EnvConfig, TrainConfig). This is done in envs/__init__.py, but can also be done from outside of this repository (see the sketch after this list).
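
For example, a hypothetical new task could be registered as follows. This is a hedged sketch: the MyRobot* names and the my_robot module are placeholders, not part of the repository, and the import path of the task registry may differ between versions:

    # envs/__init__.py -- hedged sketch; MyRobot* names are hypothetical
    from legged_gym.utils.task_registry import task_registry

    from .my_robot.my_robot import MyRobot
    from .my_robot.my_robot_config import MyRobotCfg, MyRobotCfgPPO

    # arguments: task name, environment class, env config instance, train config instance
    task_registry.register("my_robot_rough", MyRobot, MyRobotCfg(), MyRobotCfgPPO())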

Usage

  1. Train:
    python issacgym_anymal/scripts/train.py --task=anymal_c_flat
    • To run on CPU add the following arguments: --sim_device=cpu, --rl_device=cpu (sim on CPU and rl on GPU is possible).
    • To run headless (no rendering) add --headless.
    • Important: To improve performance, once the training starts press v to stop the rendering. You can then enable it later to check the progress.
    • The trained policy is saved in issacgym_anymal/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt, where <experiment_name> and <run_name> are defined in the train config.
    • The following command line arguments override the values set in the config files:
    • --task TASK: Task name.
    • --resume: Resume training from a checkpoint
    • --experiment_name EXPERIMENT_NAME: Name of the experiment to run or load.
    • --run_name RUN_NAME: Name of the run.
    • --load_run LOAD_RUN: Name of the run to load when resume=True. If -1: will load the last run.
    • --checkpoint CHECKPOINT: Saved model checkpoint number. If -1: will load the last checkpoint.
    • --num_envs NUM_ENVS: Number of environments to create.
    • --seed SEED: Random seed.
    • --max_iterations MAX_ITERATIONS: Maximum number of training iterations.
  2. Play a trained policy:
    python issacgym_anymal/scripts/play.py --task=anymal_c_flat
    • By default the loaded policy is the last model of the last run of the experiment folder.
    • Other runs/model iterations can be selected by setting load_run and checkpoint in the train config. A rough sketch of what these scripts do internally is given below.
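
Both scripts are thin wrappers around the task registry. The following hedged sketch shows the approximate flow of train.py, assuming the make_env / make_alg_runner API; exact function signatures may differ between versions:

    # hedged sketch of what train.py roughly does; not a verbatim copy of the script
    from legged_gym.envs import *              # importing the envs package registers the bundled tasks
    from legged_gym.utils import get_args, task_registry

    args = get_args()                          # parses --task, --num_envs, --headless, ...
    env, env_cfg = task_registry.make_env(name=args.task, args=args)
    ppo_runner, train_cfg = task_registry.make_alg_runner(env=env, name=args.task, args=args)
    ppo_runner.learn(num_learning_iterations=train_cfg.runner.max_iterations)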

Adding a new environment

The base environment legged_robot implements a rough-terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and defines no reward scales.

  1. Add a new folder to envs/ with <your_env>_config.py, which inherits from an existing environment cfg.
  2. If adding a new robot:
    • Add the corresponding assets to resources/.
    • In cfg set the asset path, define body names, default_joint_positions and PD gains. Specify the desired train_cfg and the name of the environment (python class).
    • In train_cfg set experiment_name and run_name
  3. (If needed) implement your environment in <your_env>.py, inherit from an existing environment, overwrite the desired functions and/or add your reward functions (see the sketch after this list).
  4. Register your env in isaacgym_anymal/envs/__init__.py.
  5. Modify/Tune other parameters in your cfg, cfg_train as needed. To remove a reward set its scale to zero. Do not modify parameters of other envs!
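
As a rough illustration of steps 1-3, a hedged sketch for a hypothetical robot follows. The class layout mirrors the LeggedRobotCfg pattern, but the joint names, asset path and robot name are placeholders and the available fields may differ between versions:

    # my_robot_config.py -- hypothetical example, not part of the repository
    from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO

    class MyRobotCfg(LeggedRobotCfg):
        class asset(LeggedRobotCfg.asset):
            file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/my_robot/urdf/my_robot.urdf'  # placeholder path
            foot_name = "FOOT"
        class init_state(LeggedRobotCfg.init_state):
            default_joint_angles = {"HAA": 0.0, "HFE": 0.4, "KFE": -0.8}  # placeholder joints
        class control(LeggedRobotCfg.control):
            stiffness = {"HAA": 80.0, "HFE": 80.0, "KFE": 80.0}  # P gains
            damping = {"HAA": 2.0, "HFE": 2.0, "KFE": 2.0}       # D gains
        class rewards(LeggedRobotCfg.rewards):
            class scales(LeggedRobotCfg.rewards.scales):
                my_term = 0.5   # non-zero scale -> _reward_my_term() is added to the total reward
                torques = 0.0   # zero scale -> term removed

    class MyRobotCfgPPO(LeggedRobotCfgPPO):
        class runner(LeggedRobotCfgPPO.runner):
            experiment_name = 'my_robot_rough'
            run_name = ''

    # my_robot.py -- custom reward function picked up via the scale defined above
    from legged_gym.envs.base.legged_robot import LeggedRobot
    import torch

    class MyRobot(LeggedRobot):
        def _reward_my_term(self):
            # penalize joint velocities (returns one value per environment)
            return torch.exp(-torch.sum(torch.square(self.dof_vel), dim=1))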

Troubleshooting

  1. If you get the following error: ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory, do: sudo apt install libpython3.8

Known Issues

  1. The contact forces reported by net_contact_force_tensor are unreliable when simulating on GPU with a triangle mesh terrain. A workaround is to use force sensors, but the forces are propagated through the sensors of consecutive bodies, resulting in undesirable behaviour. However, for a legged robot it is possible to add sensors to the feet/end effectors only and get the expected results. When using the force sensors make sure to exclude gravity from the reported forces by setting sensor_options.enable_forward_dynamics_forces to False. Example:
    sensor_pose = gymapi.Transform()
    for name in feet_names:
        sensor_options = gymapi.ForceSensorProperties()
        sensor_options.enable_forward_dynamics_forces = False # for example gravity
        sensor_options.enable_constraint_solver_forces = True # for example contacts
        sensor_options.use_world_frame = True # report forces in world frame (easier to get vertical components)
        index = self.gym.find_asset_rigid_body_index(robot_asset, name)
        self.gym.create_asset_force_sensor(robot_asset, index, sensor_pose, sensor_options)
    (...)

    sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
    self.gym.refresh_force_sensor_tensor(self.sim)
    force_sensor_readings = gymtorch.wrap_tensor(sensor_tensor)
    # one 6D wrench (force + torque) per sensor: reshape to (num_envs, 4 feet, 6) and keep the force part
    self.sensor_forces = force_sensor_readings.view(self.num_envs, 4, 6)[..., :3]
    (...)

    self.gym.refresh_force_sensor_tensor(self.sim)
    # a foot is considered in contact when its vertical (world z) force exceeds 1 N
    contact = self.sensor_forces[:, :, 2] > 1.