robomimic: A Modular Framework for Robot Learning from Demonstration

Overview

robomimic

[Homepage][Documentation][Study Paper][Study Website][ARISE Initiative]


Latest Updates

[08/09/2021] v0.1.0: Initial code and paper release


robomimic is a framework for robot learning from demonstration. It offers a broad set of demonstration datasets collected in robot manipulation domains, along with learning algorithms to learn from these datasets. This project is part of the broader Advancing Robot Intelligence through Simulated Environments (ARISE) Initiative, with the aim of lowering the barriers to entry for cutting-edge research at the intersection of AI and Robotics.

Imitating human demonstrations is a promising approach to endow robots with various manipulation capabilities. While recent advances have been made in imitation learning and batch (offline) reinforcement learning, a lack of open-source human datasets and reproducible learning methods makes assessing the state of the field difficult. The overarching goal of robomimic is to provide researchers and practitioners with:

  • a standardized set of large demonstration datasets across several benchmarking tasks to facilitate fair comparisons, with a focus on learning from human-provided demonstrations
  • high-quality implementations of several learning algorithms for training closed-loop policies from offline datasets to make reproducing results easy and lower the barrier to entry
  • a modular design that offers great flexibility in extending algorithms and designing new algorithms

This release of robomimic contains seven offline learning algorithms and standardized datasets collected across five simulated and three real-world multi-stage manipulation tasks of varying complexity. We highlight some features below:

  • standardized datasets: a set of datasets collected from different sources (single proficient human, multiple humans, and machine-generated) across several simulated and real-world tasks, along with a plug-and-play Dataset class to easily use the datasets outside of this project
  • algorithm implementations: several high-quality implementations of offline learning algorithms, including BC, BC-RNN, HBC, IRIS, BCQ, CQL, and TD3-BC
  • multiple observation spaces: support for learning both low-dimensional and visuomotor policies, with support for observation tensor dictionaries throughout the codebase, making it easy to specify different subsets of observations to train a policy. This includes a set of useful tensor utilities to work with nested dictionaries of torch Tensors and numpy arrays (a minimal sketch follows this list).
  • visualization utilities: utilities for visualizing demonstration data, playing back actions, visualizing trained policies, and collecting new datasets using trained policies
  • train launching utilities: utilities for easily running hyperparameter sweeps, enabled by a flexible Config management system
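
To illustrate the nested observation-dictionary workflow mentioned in the list above, here is a minimal sketch in plain PyTorch/NumPy (key names are hypothetical, and this is not robomimic's exact API):

    import numpy as np
    import torch

    def map_nested(fn, x):
        """Recursively apply fn to every array/tensor leaf of a nested dict."""
        if isinstance(x, dict):
            return {k: map_nested(fn, v) for k, v in x.items()}
        return fn(x)

    # Hypothetical observation dict mixing low-dimensional and image observations.
    obs = {
        "robot0_eef_pos": np.zeros(3, dtype=np.float32),
        "agentview_image": np.zeros((84, 84, 3), dtype=np.uint8),
    }

    # Convert every leaf to a float torch.Tensor, e.g. before feeding a policy.
    obs_t = map_nested(lambda a: torch.as_tensor(a, dtype=torch.float32), obs)
    print({k: tuple(v.shape) for k, v in obs_t.items()})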

Contributing to robomimic

This framework originally began development in late 2018. Researchers in the Stanford Vision and Learning Lab (SVL) used it as an internal tool for training policies from offline human demonstration datasets. Now it is actively maintained and used for robotics research projects across multiple labs. We welcome community contributions to this project. For details please check our contributing guidelines.

Troubleshooting

Please see the troubleshooting section for common fixes, or submit an issue on our GitHub page.

Reproducing study results

The robomimic framework also makes reproducing the results from this study easy. See the results documentation for more information.

Citations

Please cite this paper if you use this framework in your work:

@inproceedings{robomimic2021,
  title={What Matters in Learning from Offline Human Demonstrations for Robot Manipulation},
  author={Ajay Mandlekar and Danfei Xu and Josiah Wong and Soroush Nasiriany and Chen Wang and Rohun Kulkarni and Li Fei-Fei and Silvio Savarese and Yuke Zhu and Roberto Mart\'{i}n-Mart\'{i}n},
  booktitle={arXiv preprint arXiv:2108.03298},
  year={2021}
}
Comments
  • Demo collection script

    Hi, is the demonstration collection script available somewhere? The one in robosuite repo does not seem to output demos with the right format.

    Thanks!

    opened by yuchen93 11
  • Segfault of some algorithms on cluster

    Hi,

    I am trying to run all of the algorithms on the TwoArmTransport environment, and I ran into a segmentation fault when trying td3_bc, bcq, and cql on our school's cluster (a GeForce GTX 1080 with 8120 MB of memory). Below is an example of the segmentation fault when running the td3_bc algorithm on the low_dim dataset. I tried to investigate a little, but it is not clear to me what is causing the segfault (I have attached the terminal output below). There is no such issue when I run these algorithms on my own laptop. It would be great if there is a solution so that I can run my experiments on the cluster. Thanks a lot in advance.

    SequenceDataset (
    	path=robomimic_data/low_dim.hdf5
    	obs_keys=('object', 'robot0_eef_pos', 'robot0_eef_quat', 'robot0_gripper_qpos')
    	seq_length=1
    	filter_key=none
    	frame_stack=1
    	pad_seq_length=True
    	pad_frame_stack=True
    	goal_mode=none
    	cache_mode=all
    	num_demos=200
    	num_sequences=93752
    )
    
     10%|#         | 519/5000 [00:28<04:03, 18.43it/s]Segmentation fault (core dumped)
    
    opened by vivianchen98 9
  • robosuite env.reset_to "hack" present in run_trained_agent.py but not in train_utils.py

    I'm trying to use code from run_trained_agent.py to collect some rollout statistics and it seems like this particular script uses a .reset_to() call that resets an environment to its current state. To my knowledge, this trick isn't present in other robomimic evaluation scripts, like run_rollout() in train_utils.py.

    When collecting human demonstrations on the robosuite task, was the .reset_to() trick used? I'm seeing some performance differences between the two versions of eval scripts, and I'm trying to pinpoint the issue.

    https://github.com/ARISE-Initiative/robomimic/blob/b5d2aa9902825c6c652e3b08b19446d199b49590/robomimic/scripts/run_trained_agent.py#L103

    opened by MaxDu17 5
  • Goal-conditioned observations

    Hello,

    I noticed that there is functionality to use goal-conditioned images in robomimic. I wanted to figure out how this works and found that we can use the get_goal() function in the env_robosuite.py file. However, this function is only used during rollouts, and I could not find where it is used during training. Is it possible to train using goal-conditioned observations in robomimic? I was thinking of settings such as goal-conditioned imitation learning, where the image from the last time step of a demonstration is used as the goal observation for the policy.

    There was also a documentation comment in the get_goal() function noting that not all environments support this. I checked all of the environments in robosuite and saw that none of them has a _get_goal() function. If I were to write my own get_goal() function in a robosuite environment, is it possible to return an agentview image from that function as the goal observation once the task has succeeded? Would appreciate any help on this, thank you!

    opened by PraveenElango 5
  • How to use the transport environment?

    I get the following error when I pass in the path to the transport data: Environment TwoArmTransport not found. Make sure it is a registered environment among: Lift, Stack, NutAssembly, .....

    Does this error arise because robosuite does not have the TwoArmTransport environment? If yes, how do I reproduce the paper results on Transport? Any suggestions would be helpful.

    opened by prajjwal1 3
  • ImportError: cannot import name 'postprocess_model_xml' from 'robosuite.utils.mjcf_utils'

    I get this error when I run a couple of scripts, such as python examples/train_bc_rnn.py --debug or even the playback_dataset.py script:

    from robosuite.utils.mjcf_utils import postprocess_model_xml ImportError: cannot import name 'postprocess_model_xml' from 'robosuite.utils.mjcf_utils' (/home/xyz/anaconda3/envs/robomimic_venv/lib/python3.7/site-packages/robosuite/utils/mjcf_utils.py)

    Has 'postprocess_model_xml' been removed, moved, or renamed?

    opened by supriyasathya 3
  • Project Roadmap

    Hi there, amazing project.

    I'm considering building on top of this framework, but I would like to understand your plans for the future. I see that many dependencies are outdated (mujoco-py is no longer active, PyTorch 1.6, Python 3.7).

    Which of the options below would you say is most accurate?

    • The project will be maintained with basic fixes and small updates.
    • The project will be updated and will keep following the state of the art.
    • The project will not be maintained anymore.
    • Other

    opened by lorepieri8 3
  • ERROR: GLEW initalization error: Missing GL version

    Hi, I followed the tutorials according to this documentation, and I exported the following lines in my .bashrc and .zshrc (I am using zsh):

    export LD_LIBRARY_PATH=/home/dato/.mujoco/mujoco210/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia
    export PATH="$LD_LIBRARY_PATH:$PATH"
    export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so
    

    When I run python examples/train_bc_rnn.py --debug to test my installation, it gives me the error ERROR: GLEW initalization error: Missing GL version. When I check my environment variables in the zsh/bash terminal, the LD_PRELOAD variable is present. I don't know how to proceed.

    opened by datonefaridze 3
  • run_policy.ipynb error

    When trying to play the trajectory:

    Playing back demo key: demo_0
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-14-b0a68c6d2406> in <module>
          2 for ep in demos[:5]:
          3     print("Playing back demo key: {}".format(ep))
    ----> 4     playback_trajectory(ep)
          5 
          6 # done writing video
    
    <ipython-input-13-455a7b8775a3> in playback_trajectory(demo_key)
         13 
         14     # reset to initial state
    ---> 15     env.reset_to(initial_state_dict)
         16 
         17     # playback actions one by one, and render frames
    
    ~/.local/lib/python3.8/site-packages/robomimic/envs/env_robosuite.py in reset_to(self, state)
        133             self.reset()
        134             xml = postprocess_model_xml(state["model"])
    --> 135             self.env.reset_from_xml_string(xml)
        136             self.env.sim.reset()
        137             if not self._is_v1:
    
    ~/.local/lib/python3.8/site-packages/robosuite/environments/base.py in reset_from_xml_string(self, xml_string)
        537 
        538         # Now reset as normal
    --> 539         self.reset()
        540 
        541         # Turn off deterministic reset
    
    ~/.local/lib/python3.8/site-packages/robosuite/environments/base.py in reset(self)
        263 
        264         # Reset necessary robosuite-centric variables
    --> 265         self._reset_internal()
        266         self.sim.forward()
        267         # Setup observables, reloading if
    
    ~/.local/lib/python3.8/site-packages/robosuite/environments/manipulation/lift.py in _reset_internal(self)
        387         Resets simulation internal configurations.
        388         """
    --> 389         super()._reset_internal()
        390 
        391         # Reset all object positions using initializer sampler if we're not directly loading from an xml
    
    ~/.local/lib/python3.8/site-packages/robosuite/environments/robot_env.py in _reset_internal(self)
        508         """
        509         # Run superclass reset functionality
    --> 510         super()._reset_internal()
        511 
        512         # Reset controllers
    
    ~/.local/lib/python3.8/site-packages/robosuite/environments/base.py in _reset_internal(self)
        316         # additional housekeeping
        317         self.sim_state_initial = self.sim.get_state()
    --> 318         self._setup_references()
        319         self.cur_time = 0
        320         self.timestep = 0
    
    ~/.local/lib/python3.8/site-packages/robosuite/environments/manipulation/lift.py in _setup_references(self)
        333         in a flatten array, which is how MuJoCo stores physical simulation data.
        334         """
    --> 335         super()._setup_references()
        336 
        337         # Additional object references from this env
    
    ~/.local/lib/python3.8/site-packages/robosuite/environments/robot_env.py in _setup_references(self)
        311         in a flatten array, which is how MuJoCo stores physical simulation data.
        312         """
    --> 313         super()._setup_references()
        314 
        315         # Setup robot-specific references as well (note: requires resetting of sim for robot first)
    
    ~/.local/lib/python3.8/site-packages/robosuite/environments/base.py in _setup_references(self)
        216         """
        217         # Setup mappings from model to IDs
    --> 218         self.model.generate_id_mappings(sim=self.sim)
        219 
        220     def _setup_observables(self):
    
    ~/.local/lib/python3.8/site-packages/robosuite/models/tasks/task.py in generate_id_mappings(self, sim)
        114             id_groups = [
        115                 get_ids(sim=sim, elements=model.visual_geoms + model.contact_geoms, element_type="geom"),
    --> 116                 get_ids(sim=sim, elements=model.sites, element_type="site"),
        117             ]
        118             group_types = ("geom", "site")
    
    ~/.local/lib/python3.8/site-packages/robosuite/utils/mjcf_utils.py in get_ids(sim, elements, element_type, inplace)
        887     else:  # We assume this is an iterable array
        888         assert isinstance(elements, Iterable), "Elements must be iterable for get_id!"
    --> 889         elements = [get_ids(sim=sim, elements=ele, element_type=element_type, inplace=True) for ele in elements]
        890 
        891     return elements
    
    ~/.local/lib/python3.8/site-packages/robosuite/utils/mjcf_utils.py in <listcomp>(.0)
        887     else:  # We assume this is an iterable array
        888         assert isinstance(elements, Iterable), "Elements must be iterable for get_id!"
    --> 889         elements = [get_ids(sim=sim, elements=ele, element_type=element_type, inplace=True) for ele in elements]
        890 
        891     return elements
    
    ~/.local/lib/python3.8/site-packages/robosuite/utils/mjcf_utils.py in get_ids(sim, elements, element_type, inplace)
        880             elements = sim.model.body_name2id(elements)
        881         else:  # site
    --> 882             elements = sim.model.site_name2id(elements)
        883     elif isinstance(elements, dict):
        884         # Iterate over each element in dict and recursively repeat
    
    wrappers.pxi in mujoco_py.cymj.PyMjModel.site_name2id()
    
    ValueError: No "site" with name gripper0_ee_x exists. Available "site" names = ('table_top', 'robot0_ee', 'robot0_ee_x', 'robot0_ee_z', 'robot0_ee_y', 'gripper0_ft_frame', 'gripper0_grip_site', 'gripper0_grip_site_cylinder', 'cube_default_site').
    
    
    opened by seann999 2
  • Using image datasets from demonstrations for  - memory issues

    Hello,

    I read the study paper for robomimic and saw that around 200-300 demonstrations were collected for various tasks. I collected 200 demonstrations for the Wipe task in robosuite, converted them, extracted image observations from the MuJoCo states as described here, and created a new hdf5 file.

    I then used this new converted and extracted hdf5 file to train in robomimic using the train_bc_rnn.py script, while including agentview_image and robot0_eye_in_hand_image in config.observation.modalities.obs.rgb. When I started training, the process kept getting killed while the dataset was being loaded into memory. I ran htop and noticed that the Mem bar was full (125G/126G) right before the process was killed. My hdf5 file is around 7 GB.

    When I reduced batch_size all the way to 1 and tried again, the process was killed again due to memory after about 50 epochs. Does this mean that I have to reduce the size of my demonstration dataset, or is there something I may have missed? Would appreciate any help, thank you!

    opened by PraveenElango 2
  • using SequenceDataset as standalone module

    Hi! I want to use SequenceDataset in my project so I don't have to write it myself, but the other modules are not relevant to me.

    When I try to create a dataset, I get an error:

    AssertionError: error: must call ObsUtils.initialize_obs_utils_with_obs_config first
    

    Can I somehow use the dataset without creating full configs (or using only the minimal relevant args)? For example, as in https://arise-initiative.github.io/robomimic-web/docs/introduction/examples.html, but without the model creation and training loop.

    Thanks!

    opened by Howuhh 2
  • ObsUtils.unprocess_obs_dict() modifies obs dict in-place

    ObsUtils.unprocess_obs_dict() seems to modify the observation dictionary that is passed in, in addition to returning it. For example, I observed that in the lines referenced below, next_obs images are between 0 and 1, while after line 147 they are between 0 and 255. This leads to a problem: obs is derived from next_obs, which means that on the next loop iteration we pass already-unprocessed images into unprocess_obs_dict(). This has caused some significant issues, as the saved images are corrupted relative to what is observed. As a simple fix, I wrapped next_obs in deepcopy(next_obs) on line 147.

    https://github.com/ARISE-Initiative/robomimic/blob/b5d2aa9902825c6c652e3b08b19446d199b49590/robomimic/scripts/run_trained_agent.py#L142-L147
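
    A minimal, self-contained sketch of the hazard and the deepcopy workaround described above (a pure NumPy stand-in for the real unprocess step; all names are illustrative):

    from copy import deepcopy

    import numpy as np

    def unprocess_obs_dict(obs):
        # Stand-in for ObsUtils.unprocess_obs_dict(): scales images back to
        # [0, 255] and, like the behavior described above, mutates its input dict.
        for k in obs:
            obs[k] = obs[k] * 255.0
        return obs

    next_obs = {"agentview_image": np.full((84, 84, 3), 0.5, dtype=np.float32)}

    # Workaround from the issue: unprocess a deep copy, so next_obs keeps its
    # processed [0, 1] values and can safely become obs on the next iteration.
    saved = unprocess_obs_dict(deepcopy(next_obs))
    assert next_obs["agentview_image"].max() <= 1.0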

    opened by MaxDu17 4
  • wrong rendering

    Hi! I want to visualize my agent's rollouts. However, the resulting videos turn out to be strange; I see some color artifacts. Should it be like this?

    What I do:

    # on each step
    render_frames.append(env.render(mode="rgb_array", width=256, height=256))
    
    # at the end
    imageio.mimsave(render_path, render_frames, fps=32)
    

    Result: (screenshot attached showing the color artifacts)

    opened by Howuhh 4
  • basic support for logging warnings

    • Adds log_warning and flush_warnings functions to utils/log_utils.py, allowing us to log warnings (in yellow text by default) at the start of training and cache them so that they are displayed once more right before training starts, all together in a convenient location that is easy to check while debugging.
    • To use them, call log_warning with the warning message, optionally specifying the text color (default is "yellow") and whether to print the warning immediately (in addition to printing it later when flush_warnings is called). A usage sketch follows below.
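
    A minimal usage sketch of the functions this PR describes (assuming they are importable from robomimic.utils.log_utils once merged; the warning text is hypothetical):

    import robomimic.utils.log_utils as LogUtils

    # Cache a warning at the start of training; by default it is printed in yellow.
    LogUtils.log_warning("dataset has no filter_key; training on all demos")

    # ... later, right before the training loop begins, re-print all cached
    # warnings together so they are easy to spot while debugging.
    LogUtils.flush_warnings()
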
    opened by amandlek 0
  • Problem with train_bc_rnn.py in python 3.8

    When running examples/train_bc_rnn.py on Python 3.8, training fails if you set config.train.hdf5_cache_mode = "low_dim" and config.train.num_data_workers = 2. This appears to be related to global variables in utils/obs_utils.py not being set properly in the torch dataloader worker processes.
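
    For reference, a minimal sketch of the configuration described above (a fragment from the train_bc_rnn.py config setup; the config object is assumed to already exist):

    # Settings quoted above that trigger the failure on Python 3.8: caching only
    # low-dimensional keys while using multiple dataloader worker processes.
    config.train.hdf5_cache_mode = "low_dim"
    config.train.num_data_workers = 2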

    opened by amandlek 0
  • Switch from urllib to requests

    Fixes issues where urllib would return "503: Service Temporary Unavailable" despite the following URL being valid: http://downloads.cs.stanford.edu/downloads/rt_benchmark/lift/ph/low_dim.hdf5

    For more information, see: https://stackoverflow.com/a/25936312

    And for the progress bar implementation: https://stackoverflow.com/a/37573701

    opened by ellislm 0
Releases

  • v0.2.0 (Dec 17, 2021)

    robomimic 0.2.0 Release Notes

    Highlights

    This release of robomimic brings integrated support for mobile manipulation datasets from the recent MOMART paper, and adds modular features for easily modifying and adding custom observation modalities and corresponding encoding networks.

    MOMART Datasets

    We have added integrated support for MOMART datasets, a large-scale set of multi-stage, long-horizon mobile manipulation task demonstrations in a simulated kitchen environment collected in iGibson.

    Using MOMART Datasets

    Datasets can be easily downloaded using download_momart_datasets.py.

    For step-by-step instructions for setting up your machine environment to visualize and train with the MOMART datasets, please visit the Getting Started page.

    Modular Observation Modalities

    We also introduce modular features for easily modifying and adding custom observation modalities and corresponding encoding networks. A modality corresponds to a group of specific observations that should be encoded the same way.

    Default Modalities

    robomimic natively supports the following modalities (expected size from a raw dataset shown, excluding the optional leading batch dimension); an example observation dict follows the list:

    • rgb (H, W, 3): Standard 3-channel color frames with values in range [0, 255]
    • depth (H, W, 1): 1-channel frame with normalized values in range [0, 1]
    • low_dim (N): low dimensional observations, e.g.: proprioception or object states
    • scan (1, N): 1-channel, single-dimension data from a laser range scanner
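
    For example, a raw observation dict matching these expected shapes might look like the following (key names are hypothetical dataset keys, with illustrative sizes):

    import numpy as np

    obs = {
        "agentview_image": np.zeros((84, 84, 3), dtype=np.uint8),    # rgb, values in [0, 255]
        "agentview_depth": np.zeros((84, 84, 1), dtype=np.float32),  # depth, normalized to [0, 1]
        "robot0_proprio":  np.zeros(9, dtype=np.float32),            # low_dim
        "robot0_scan":     np.zeros((1, 30), dtype=np.float32),      # scan (single-channel range scan)
    }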

    We have default encoder networks which can be configured / modified by setting relevant parameters in your config, e.g.:

    # These keys should exist in your dataset
    config.observation.modalities.obs.rgb = ["cam1", "cam2", "cam3"]    # Add camera observations to the RGB modality
    config.observation.modalities.obs.low_dim = ["proprio", "object"]   # Add proprioception and object states to low dim modality
    ...
    
    # Now let's modify the default RGB encoder network and set the feature dimension size
    config.observation.encoder.rgb.core_kwargs.feature_dimension = 128
    ...
    

    To see the structure of the observation modalities and encoder parameters, please see the base config module.

    Custom Modalities

    You can also easily add your own modality and corresponding custom encoding network! Please see our example add_new_modality.py.

    Refactored Config Structure

    With the introduction of modular modalities, our Config class structure has been modified slightly, which will likely cause breaking changes to any configs you created using version 0.1.0. Below, we describe the exact changes in the config that need to be updated to match the current structure:

    Observation Modalities

    The image modality has been renamed to rgb. Thus, you will need to update your config anywhere it references the image modality, e.g.:

    # Old format
    config.observation.modalities.image.<etc>
    
    # New format
    config.observation.modalities.rgb.<etc>
    

    The low_dim modality remains unchanged. Note, however, that we have additionally added integrated support for both depth and scan modalities, which can be referenced in the same way, e.g.:

    config.observation.modalities.depth.<etc>
    config.observation.modalities.scan.<etc>
    

    Observation Encoders / Randomizer Networks

    We have modularized the encoder / randomizer arguments so that they are general, and are unique to each type of observation modality. All of the original arguments in v0.1.0 have been preserved, but are now re-formatted as follows:

    ############# OLD ##############
    
    # Previously, a single set of arguments was specified, and it was hardcoded to process image (rgb) observations
    
    # Assumes that you're using the VisualCore class, not general!
    config.observation.encoder.visual_feature_dimension = 64
    config.observation.encoder.visual_core = 'ResNet18Conv'
    config.observation.encoder.visual_core_kwargs.pretrained = False
    config.observation.encoder.visual_core_kwargs.input_coord_conv = False
    
    # For pooling, it was hardcoded to either use a spatial softmax layer or not; not general!
    config.observation.encoder.use_spatial_softmax = True
    # kwargs for spatial softmax layer
    config.observation.encoder.spatial_softmax_kwargs.num_kp = 32
    config.observation.encoder.spatial_softmax_kwargs.learnable_temperature = False
    config.observation.encoder.spatial_softmax_kwargs.temperature = 1.0
    config.observation.encoder.spatial_softmax_kwargs.noise_std = 0.0
    
    
    ############# NEW ##############
    
    # Now, argument names are general (network-agnostic), and are specified per modality!
    
    # Example for RGB, to reproduce the above configuration
    
    # The core encoder network can be arbitrarily specified!
    config.observation.encoder.rgb.core_class = "VisualCore"
    
    # Corresponding kwargs that should be passed to the core class are specified below
    config.observation.encoder.rgb.core_kwargs.feature_dimension = 64
    config.observation.encoder.rgb.core_kwargs.backbone_class = "ResNet18Conv"
    config.observation.encoder.rgb.core_kwargs.backbone_kwargs.pretrained = False
    config.observation.encoder.rgb.core_kwargs.backbone_kwargs.input_coord_conv = False
    
    # The pooling class can also arbitrarily be specified!
    config.observation.encoder.rgb.core_kwargs.pool_class = "SpatialSoftmax"
    
    # Corresponding kwargs that should be passed to the pooling class are specified below
    config.observation.encoder.rgb.core_kwargs.pool_kwargs.num_kp = 32
    config.observation.encoder.rgb.core_kwargs.pool_kwargs.learnable_temperature = False
    config.observation.encoder.rgb.core_kwargs.pool_kwargs.temperature = 1.0
    config.observation.encoder.rgb.core_kwargs.pool_kwargs.noise_std = 0.0
    

    Thankfully, the observation randomization network specifications were already modularized, but they were hardcoded to process the image (rgb) modality only. Thus, the only change we made is to allow the randomizer class and kwargs to be specified per modality:

    ############# OLD ##############
    # Previously, observation randomization was hardcoded for image / rgb modality
    config.observation.encoder.obs_randomizer_class = None
    config.observation.encoder.obs_randomizer_kwargs.crop_height = 76
    config.observation.encoder.obs_randomizer_kwargs.crop_width = 76
    config.observation.encoder.obs_randomizer_kwargs.num_crops = 1
    config.observation.encoder.obs_randomizer_kwargs.pos_enc = False
    
    ############# NEW ##############
    
    # Now, the randomization arguments are specified per modality. An example for RGB is shown below
    config.observation.encoder.rgb.obs_randomizer_class = None
    config.observation.encoder.rgb.obs_randomizer_kwargs.crop_height = 76
    config.observation.encoder.rgb.obs_randomizer_kwargs.crop_width = 76
    config.observation.encoder.rgb.obs_randomizer_kwargs.num_crops = 1
    config.observation.encoder.rgb.obs_randomizer_kwargs.pos_enc = False
    

    You can also view the default configs and compare your config to these templates to view exact diffs in structure.

  • v0.1.0 (Nov 16, 2021)

Owner: ARISE Initiative (Advancing Robot Intelligence through Simulated Environments)