Simple renderer for use with MuJoCo (>=2.1.2) Python Bindings.

Overview

Viewer for MuJoCo in Python

Interactive renderer to use with the official Python bindings for MuJoCo.

Starting with version 2.1.2, MuJoCo comes with native Python bindings officially supported by the MuJoCo devs.

If you have been a user of mujoco-py, you might be looking to migrate.
Some pointers on migration are available here.

Install

$ git clone https://github.com/rohanpsingh/mujoco-python-viewer
$ cd mujoco-python-viewer
$ pip install -e .

Or, install via pip:

$ pip install mujoco-python-viewer

Usage

import mujoco
import mujoco_viewer

model = mujoco.MjModel.from_xml_path('humanoid.xml')
data = mujoco.MjData(model)

# create the viewer object
viewer = mujoco_viewer.MujocoViewer(model, data)

# simulate and render
for _ in range(100000):
    mujoco.mj_step(model, data)
    viewer.render()

# close
viewer.close()

The render window should pop up and the simulation should be running.
Double-click on a geom, then hold Ctrl and drag with the right mouse button to apply forces and with the left mouse button to apply torques.

Press ESC to quit.
Other key bindings are shown in the overlay menu (similar to mujoco-py).
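
Instead of a fixed number of steps, the loop can also poll the viewer's is_alive flag (the same attribute used in the comments below) so the script exits cleanly when the window is closed. A minimal sketch of that variant:

import mujoco
import mujoco_viewer

model = mujoco.MjModel.from_xml_path('humanoid.xml')
data = mujoco.MjData(model)
viewer = mujoco_viewer.MujocoViewer(model, data)

# step and render until the window is closed (e.g. with ESC)
while viewer.is_alive:
    mujoco.mj_step(model, data)
    viewer.render()
viewer.close()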

Comments
  • Not able to get view to render in real time.

    I am running a simulation with a time step of 0.001 and gravity of -9.8. My model isn't very tall, just 0.4 m. When I use your viewer, it puts everything into slow motion. If I turn off the help overlay it gets faster, but it is still in slow motion. Pressing D seems to make it go faster than real life. Why is it not moving at the same rate it would in real life? How do I get it to render in real time? (A pacing sketch is included below this item.)

    opened by Robokan 5
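
    For the question above, one common fix is generic loop pacing (not a feature of this viewer): take several physics steps per rendered frame and sleep off the remaining wall-clock time, so simulated time tracks real time. A minimal sketch, assuming the timestep is set in the model XML:

    import time
    import mujoco
    import mujoco_viewer

    model = mujoco.MjModel.from_xml_path('humanoid.xml')  # e.g. timestep 0.001 in the XML
    data = mujoco.MjData(model)
    viewer = mujoco_viewer.MujocoViewer(model, data)

    frame_time = 1.0 / 60.0  # target wall-clock time per rendered frame
    steps_per_frame = max(1, int(round(frame_time / model.opt.timestep)))

    while viewer.is_alive:
        t0 = time.time()
        # advance the physics by roughly 1/60 s of simulated time
        for _ in range(steps_per_frame):
            mujoco.mj_step(model, data)
        viewer.render()
        # sleep off whatever real time is left in this frame
        leftover = frame_time - (time.time() - t0)
        if leftover > 0:
            time.sleep(leftover)
    viewer.close()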
  • Can I use mujoco-python-viewer using `dm_control` API?

    I'm new to mujoco and I'm trying to play with interactive visualization. mujoco-python-viewer seems really useful!

    I noticed though that I cannot use it with the dm_control.mujoco.Physics API (which is more convenient for named indexing, etc.).

    To clarify my intention, below is an example of the way I would like to use it:

    from dm_control import mujoco
    import mujoco_viewer
    
    physics = mujoco.Physics.from_xml_path('my_model.xml')
    model = physics.model
    data = physics.data
    
    viewer = mujoco_viewer.MujocoViewer(model, data)
    
    for _ in range(10000):
        if viewer.is_alive:
            physics.step()
            viewer.render()
        else:
            break
    
    viewer.close()
    

    Is there a way to do that? (A possible workaround is sketched below this item.)

    opened by omershalev 5
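
    For the dm_control question above, one possible workaround is to pass the viewer the raw bindings objects that the Physics wrapper holds. This is an assumption, not a documented feature of either library: it relies on recent dm_control versions exposing the native mujoco.MjModel / mujoco.MjData as physics.model.ptr and physics.data.ptr.

    from dm_control import mujoco as dm_mujoco
    import mujoco_viewer

    physics = dm_mujoco.Physics.from_xml_path('my_model.xml')

    # .ptr is assumed to expose the underlying native mujoco.MjModel / mujoco.MjData
    viewer = mujoco_viewer.MujocoViewer(physics.model.ptr, physics.data.ptr)

    while viewer.is_alive:
        physics.step()
        viewer.render()
    viewer.close()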
  • Quitting does not release ctx

    When ESC is pressed to terminate the viewer, the code will just:

    print("Pressed ESC")
    print("Quitting.")
    glfw.terminate()
    sys.exit(0)
    

    Is there a reason why this code is not simply calling self.close(), which does partly the same thing and additionally releases the ctx? (A sketch of that approach is included below this item.)

    opened by rpapallas 5
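
    As a rough sketch of the change suggested above (assuming, as the snippet implies, that the viewer keeps its rendering context in self.ctx; the handler name below is only illustrative), the ESC branch could delegate to close(), which frees the context before shutting down GLFW:

    import glfw

    class MujocoViewer:  # only the relevant parts of the (assumed) class are sketched
        ...

        def close(self):
            self.is_alive = False
            self.ctx.free()   # release the MjrContext while the GL context is still valid
            glfw.terminate()

        def _key_callback(self, window, key, scancode, action, mods):
            # handler registered via glfw.set_key_callback; name is an assumption
            if action == glfw.RELEASE and key == glfw.KEY_ESCAPE:
                print("Pressed ESC")
                print("Quitting.")
                self.close()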
  • Code simplification, kinematic loop example

    • Added a kinematic loop example.
    • Simplified mujoco_viewer.py by moving the callbacks into another file.
    • Automatically creates root/tmp (unless root/tmp already exists) to save screen captures in it.

    TODO: No.2 in #4

    opened by rohit-kumar-j 4
  • Added ability to toggle on/off the small bottom-left menu

    Sometimes, especially for experiments, it's good to have a clean window without any menus for taking screenshots. I added a small piece of code that provides a toggle to turn the bottom-left stats menu on or off. I also added an optional parameter to turn it off when initializing the viewer. By default nothing changes; the stats menu will be visible as before.

    I had to introduce two different names for these menus: help_menu for the previous menu and statistics_menu for the bottom-left one.

    opened by rpapallas 3
  • Feature: Extra examples (beyond the simple viewer)?

    I'm currently working with masses, torques, etc. (following this series) and was hoping to take the examples directory a bit further, although I'm not sure how much of this is practical: a sort of tutorial/example with a simple pendulum for obtaining units of torque, tuning values of kp, kd, ki, etc., and graphing the PID error like the profiler/sensor section of the simulate viewer, which generates live graphs.

    Perhaps a wiki with these:

    Existing graphing: [screenshot of the MuJoCo profiler]

    opened by rohit-kumar-j 3
  • How to display arrow when dragging?

    This is a nice repository. This code will help me a lot.

    But I have a question about displaying the arrow when dragging.

    In the example in readme.md, the arrow indicating the force is displayed like this:

    https://user-images.githubusercontent.com/53563180/185560247-a8f1c8f9-95a5-450d-bd3f-c6554323b6c6.mp4

    However, in my trial it shows a box, which prevents me from understanding the direction of the force. I also tried the left/right Ctrl keys.

    Do you have any idea to fix this?

    Thanks!

    This is my environment: Python 3.7.12, glfw 2.5.4, mujoco-python-viewer 0.1.1.

    opened by gyuta 2
  • Testing

    I tested the code with Python 3 on macOS (Intel) and I had to make 3 changes to get it to work:

    1. Remove import imageio (the package is not needed, and I was not able to install it anyway).
    2. On lines 500 and 533, I had to change is to ==.
    opened by pab47 2
  • [Issue] Multi-instances for multiple view

    Thanks for the great work. I am trying to migrate my script from mujoco-py to this library, but I realized that this library seems to be incapable of creating multiple instances: for example, a -1 observer view, and cameras 0 and 1 for stereo vision.

    I am wondering if there is any workaround in mind for this?

    Best, Jack

    opened by jaku-jaku 2
  • Bugs occur when using 'double click' and 'Ctrl + left click or right click' on mac

    Hello, I am using a MacBook Pro (M1) to test this viewer with the MuJoCo Python bindings.

    I found that when I run the basic example with this viewer, the mouse actions are wrong on macOS.

    The bug is that double-click cannot select an object but instead toggles the contact-force option (the C key on the keyboard still works). So I cannot use Ctrl + left/right click to apply a torque or force to an object.

    I tested this by importing an XML file into MuJoCo directly (also on macOS), and double-click worked and could select the object, so it's not a MuJoCo issue. Besides, I also tested the same version of the viewer on Ubuntu, and it works very well.

    I suspect something is different on macOS. I checked the code but found nothing.

    Please have a look, many thanks!

    opened by KJaebye 1
  • Converted class to a context manager

    This allows a client to use the class in the following way:

    with MujocoViewer(model, data) as viewer:
        viewer.render()
    

    and it will call viewer.close() when the with block exits, so the client doesn't have to. (A minimal sketch of the added methods is included below this item.)

    opened by rpapallas 1
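
    A minimal sketch of the methods such a change adds (their placement inside MujocoViewer is assumed from the PR description):

    class MujocoViewer:  # only the context-manager methods are sketched
        ...

        def __enter__(self):
            # the returned object is what the `with` statement binds
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # release the window and rendering context even if an error occurred
            self.close()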
  • How to record simulation movies?

    Hello,

    I am new to MuJoCo, but I was able to take a screenshot by referring to your program! Thank you very much.

    However, I do not know how to record a video and would like to know how to do so. (A sketch is included below this item.)

    I am sorry to trouble you with this, but thank you in advance for your time.

    opened by miyukin73 1
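
    One way to answer the question above is to collect frames while simulating and write them out with imageio. The sketch below makes assumptions about the API: the 'offscreen' mode and read_pixels() call are modeled on the offscreen support added in v0.1.0, so check examples/offscreen_demo.py for the actual usage.

    import imageio
    import mujoco
    import mujoco_viewer

    model = mujoco.MjModel.from_xml_path('humanoid.xml')
    data = mujoco.MjData(model)

    # 'offscreen' mode and read_pixels() are assumptions; see examples/offscreen_demo.py
    viewer = mujoco_viewer.MujocoViewer(model, data, 'offscreen')

    frames = []
    for _ in range(600):
        mujoco.mj_step(model, data)
        frames.append(viewer.read_pixels())

    # writing .mp4 requires the imageio-ffmpeg backend
    imageio.mimsave('simulation.mp4', frames, fps=60)
    viewer.close()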
  • Full Reload of Sim without closing window?

    This might be a breaking change:

    # pass in the xml path to the viewer directly and, upon KEY_BACKSPACE, reload the sim
    viewer = mujoco_viewer.MujocoViewer(xml_path="Projects/rjax_python/robots/humanoid/scene.xml")
    while True:
        mujoco.mj_step(viewer.model, viewer.data)
        viewer.render()
    

    Each time the model and data are accessed, it has to be done via viewer.model and viewer.data, so the examples, etc., need to change. Would this PR be okay? (Of course, the changes will be reflected in the README and examples.)

    Need for this/Use case:

    No relaunching of the Python script for .xml tweaking, and no need to use simulate.cc for the same purpose.

    Implementation Example:

    https://user-images.githubusercontent.com/37873142/192229058-3711d7ab-b69c-46c0-b6b3-998364ce704f.mp4

    (If the video stops in the middle, kindly scrub manually to the end. The video may be corrupted)

    opened by rohit-kumar-j 2
  • Large/Small Font options with MjrContext?

    There are too many changes in #23, so I want to ask this here (perhaps there are too many config options): add font options within MjrContext?

    viewer.__init__(font="small")  # or "large"

    if font == "large":
        self.ctx = mujoco.MjrContext(
            self.model, mujoco.mjtFontScale.mjFONTSCALE_150.value)
    elif font == "small":
        self.ctx = mujoco.MjrContext(
            self.model, mujoco.mjtFontScale.mjFONTSCALE_100.value)
    

    (Small vs. large font comparison screenshots)

    opened by rohit-kumar-j 0
  • Added graph rendering, Actuator force visualization[no sites], Sim reset method(backspace) and window positioning

    Graph preview (KEY: G)

    Unfortunately, the time axis at the bottom of the graph was not captured in the video. It shows a time-based graph; the red line is a random signal (a sine wave in this case).

    https://user-images.githubusercontent.com/37873142/190720262-22f09c46-363b-4dc3-8c37-b340ed66a69b.mp4

    Actuator Force visualization via graphs

    The sites are used to get the location and orientation of the body at the actuator location only.

    https://user-images.githubusercontent.com/37873142/190720353-2cf6e5d8-a1d5-4c67-8757-a5c2f8d6cbd6.mp4

    ... and added examples

    opened by rohit-kumar-j 5
  • User options

    Hello,

    I needed some way to get some "user options". I have different MuJoCo data that I would like to visualize, so I wanted a way for the user to press "1" and have the client code switch the viewer data/model to the first dataset, then press "2" to switch to the second dataset, and so on.

    I have written this here: https://github.com/rpapallas/mujoco-python-viewer/commit/dc8679ee39623cd7d93b7576ed1d089d938beee7

    If you think something like this would be useful and could be implemented like this or differently, please let me know. This could be a generic "user options" feature allowing the client code to do something when a certain user-option key is pressed; it is currently limited to numeric keys, but it could be any key pressed while Shift is held. I understand that this might not be useful to everyone, though.

    opened by rpapallas 0
Releases(v0.1.2)
  • v0.1.2(Aug 23, 2022)

    New feature

    • Ctrl+S will save the current camera configuration to config.yaml
    • The saved camera configuration will automatically be loaded and applied on startup (if possible)

    NOTE

    Not tested on Windows or macOS.

    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Aug 7, 2022)

  • v0.1.0(Jul 26, 2022)

    Added

    • Support for offscreen rendering!
    • Sample program for offscreen: examples/offscreen_demo.py

    Changes

    • examples/markers_demo.py will now loop forever until the window is closed.

    Fixes

    • Fix thread crash behavior on ESC key.
    Source code(tar.gz)
    Source code(zip)
  • v0.0.5(Jul 22, 2022)

Owner
Rohan P. Singh
PhD student at JRL, Japan.