
Overview

Godot RL Agents

Godot RL Agents is a fully open source package that gives video game creators, AI researchers and hobbyists the opportunity to learn complex behaviors for their non-player characters or agents. This repository provides:

  • An interface between games created in Godot and Machine Learning algorithms running in Python (see the minimal usage sketch below)
  • Access to 21 state-of-the-art Machine Learning algorithms, provided by the Ray RLlib framework.
  • Support for memory-based agents, with LSTM or attention-based interfaces
  • Support for 2D and 3D games
  • A suite of AI sensors to augment your agent's capacity to observe the game world
  • Godot and Godot RL agents are completely free and open source under the very permissive MIT license. No strings attached, no royalties, nothing.
(Trailer video: godot_rl_agents_trailer_v01_20211008.mp4)
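The sketch below illustrates the Python side of that interface. It is a minimal, hedged example: it assumes the gym-style GodotEnv wrapper that appears in the issue tracebacks further down this page (godot_rl.core.godot_env.GodotEnv, constructed with port and seed arguments); exact method names, the default port and return values may differ between versions, and the real wrapper may be vectorized over several agents.

# Hedged sketch: assumes the gym-style GodotEnv wrapper seen in the tracebacks
# below; method names, default port and return shapes may vary by version.
from godot_rl.core.godot_env import GodotEnv

# Connect over a local socket to a running Godot game (editor or exported
# binary) that has the godot_rl_agents sync node in its scene.
env = GodotEnv(port=11008, seed=0)  # the port value here is a placeholder

obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # random actions, for illustration
    obs, reward, done, info = env.step(action)  # gym-style transition
    # note: the real wrapper may return lists of per-agent values instead
env.close()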

Contents

  1. Motivation
  2. Citing Godot RL Agents
  3. Installation
  4. Examples
  5. Documentation
  6. Roadmap
  7. FAQ
  8. Licence
  9. Acknowledgments
  10. References

Motivation

Over the next decade, advances in AI algorithms, notably in the fields of Machine Learning and Deep Reinforcement Learning, are primed to revolutionize the video game industry. Customizable enemies, worlds and storytelling will lead to diverse gameplay experiences and new genres of games. Currently the field is dominated by large organizations and pay-to-use engines that have the budget to create such AI-enhanced agents. The objective of the Godot RL Agents package is to lower the barrier of accessibility so that game developers can take their idea from creation to publication end-to-end with a free and open source package.

Citing Godot RL Agents

@misc{beeching2021godotrlagents,
  author = {Edward Beeching},
  title = {Godot RL agents},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/edbeeching/godot_rl_agents}},
}

Installation

Please follow the installation instructions to install Godot RL agents.

Examples

We provide several reference implementations and instructions for implementing your own environment; please refer to the Examples documentation.

Creating custom environments

Once you have studied the example environments, you can follow the instructions in Custom environments in order to make your own.

Roadmap

We have a number of features that will soon be available in versions 0.2.0 and 0.3.0. Refer to the Roadmap for more information.

FAQ

  1. Why have we developed Godot RL Agents? The objectives of the framework are to:
  • Provide a free and open source tool for Deep RL research and game development.
  • Enable game creators to imbue their non-player characters with unique behaviors.
  • Allow for automated gameplay testing through interaction with an RL agent.
  2. How can I contribute to Godot RL Agents? Please try it out, find bugs and either raise an issue or, if you fix them yourself, submit a pull request.
  3. When will you be providing Mac support? I would like to provide this ASAP, but I do not own a Mac so I cannot perform any manual testing of the codebase.
  4. Can you help with my game project? If the game examples do not provide enough information, reach out to us on GitHub and we may be able to provide some advice.
  5. How similar is this tool to Unity ML Agents? We are inspired by the Unity ML Agents Toolkit and make no effort to hide it.

Licence

Godot RL Agents is MIT licensed. See the LICENSE file for details.

"Cartoon Plane" (https://skfb.ly/UOLT) by antonmoek is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

Acknowledgments

We thank the authors of the Godot Engine for providing such a powerful and flexible game engine for AI agent development. We thank the developers of Ray and Stable Baselines for creating easy-to-use and powerful RL training frameworks. We thank the creators of the Unity ML Agents Toolkit, which inspired us to create this work.

References

Comments
  • How do I use rllib for the examples provided?

    How do I use rllib for the examples provided?

    So, I found out that sample-factory is not supported on Windows, and rllib is the only backend that installed successfully on my PC. How can I use rllib to run the provided examples and make my own RL environments with it?

    opened by ryash072007 13
  • Unable to install RL agents.

    Unable to install RL agents.

    It says package not found:

    (base) PS C:\Users\Jetpackjules\Downloads\godot_rl_agents-0.2.2> conda env create
    Collecting package metadata (repodata.json): done
    Solving environment: failed

    ResolvePackageNotFound:

    • libffi=3.3
    • libunistring=0.9.10
    • libopus=1.3.1
    • libtasn1=4.16.0
    • openh264=2.1.1
    • x264=1!157.20191217
    • libidn2=2.3.2
    • libvpx=1.7.0
    • _openmp_mutex=4.5
    • lame=3.100
    • ncurses=6.3
    • gmp=6.2.1
    • freetype=2.11.0
    • gnutls=3.6.15
    • readline=8.1.2
    • nettle=3.7.3
    • libgcc-ng=9.3.0
    • libgomp=9.3.0
    • libstdcxx-ng=9.3.0
    • ld_impl_linux-64=2.35.1
    opened by Jetpackjules11 7
  • Installation Help

    Installation Help

    I am a complete novice with GitHub and conda and I am having trouble installing (likely user error). I am looking for specific help or general guidance on where to go for help. I am on Windows. It seems that solving the environment fails; maybe it has to do with the linux-64 lines or the prefix at the bottom of the .yml file pointing to an unknown directory. Thanks in advance for any advice.

    Installed the full Anaconda so I could use the Navigator. Opened a PowerShell prompt, cd'd to the directory with the godot_rl_agents folder and environment.yml, and ran "conda env create". The output was:

    Collecting package metadata (repodata.json): done
    Solving environment: failed

    ResolvePackageNotFound:

    • ld_impl_linux-64=2.35.1
    opened by Quantemplation 4
  • Solving environment: failed  ResolvePackageNotFound when creating environment in Windows

    Solving environment: failed ResolvePackageNotFound when creating environment in Windows

    Hello Ed!

    I've tried following the install instructions for Windows but I get the following error:

    (base) PS F:\Repos\godot_rl_agents> conda env create
    Collecting package metadata (repodata.json): done
    Solving environment: failed
    
    ResolvePackageNotFound:
      - zstd==1.4.9=haebb681_0
      - openssl==1.1.1m=h7f8727e_0
      - cudatoolkit==11.3.1=h2bc3f7f_2
      - _openmp_mutex==4.5=1_gnu
      - jpeg==9d=h7f8727e_0
      - freetype==2.11.0=h70c0345_0
      - libstdcxx-ng==9.3.0=hd4cf53a_17
      - ca-certificates==2022.2.1=h06a4308_0
      - lz4-c==1.9.3=h295c915_1
      - nettle==3.7.3=hbbd107a_1
      - mkl_fft==1.3.1=py38hd3c417c_0
      - lame==3.100=h7b6447c_0
      - bzip2==1.0.8=h7b6447c_0
      - gnutls==3.6.15=he1e5248_0
      - ld_impl_linux-64==2.35.1=h7274673_9
      - libgomp==9.3.0=h5101ec6_17
      - openh264==2.1.1=h4ff587b_0
      - pytorch==1.11.0=py3.8_cuda11.3_cudnn8.2.0_0
      - certifi==2021.10.8=py38h06a4308_2
      - x264==1!157.20191217=h7b6447c_0
      - libwebp-base==1.2.2=h7f8727e_0
      - ncurses==6.3=h7f8727e_2
      - pillow==9.0.1=py38h22f2fdc_0
      - cryptography==36.0.0=py38h9ce1e76_0
      - mkl-service==2.4.0=py38h7f8727e_0
      - lcms2==2.12=h3be6417_0
      - libuv==1.40.0=h7b6447c_0
      - gmp==6.2.1=h2531618_2
      - tk==8.6.11=h1ccaba5_0
      - python==3.8.12=h12debd9_0
      - libvpx==1.7.0=h439df22_0
      - numpy==1.21.2=py38h20f2e39_0
      - mkl_random==1.2.2=py38h51133e4_0
      - libunistring==0.9.10=h27cfd23_0
      - pip==21.2.4=py38h06a4308_0
      - mkl==2021.4.0=h06a4308_640
      - xz==5.2.5=h7b6447c_0
      - intel-openmp==2021.4.0=h06a4308_3561
      - ffmpeg==4.2.2=h20bf706_0
      - libtasn1==4.16.0=h27cfd23_0
      - numpy-base==1.21.2=py38h79a1101_0
      - brotlipy==0.7.0=py38h27cfd23_1003
      - libopus==1.3.1=h7b6447c_0
      - libtiff==4.2.0=h85742a9_0
      - libwebp==1.2.2=h55f646e_0
      - libffi==3.3=he6710b0_2
      - libgcc-ng==9.3.0=h5101ec6_17
      - libidn2==2.3.2=h7f8727e_0
      - setuptools==58.0.4=py38h06a4308_0
      - pysocks==1.7.1=py38h06a4308_0
      - zlib==1.2.11=h7f8727e_4
      - sqlite==3.38.0=hc218d9a_0
      - giflib==5.2.1=h7b6447c_0
      - readline==8.1.2=h7f8727e_1
      - libpng==1.6.37=hbc83047_0
      - cffi==1.15.0=py38hd667e15_1
    

    It seems like conda is unable to find those packages on Windows. I think it's due to the build numbers (e.g. zstd==1.4.9=haebb681_0) referencing a build for a different platform. I've created a new environment specification where I removed them with conda env export -n gdrl_conda -f .\environment.yml --no-builds and was able to create the environment with the original command conda env create.

    opened by PhilippeMarcotte 4
  • People who want to use SF in windows, read this:

    People who want to use SF in windows, read this:

    For people who want to use SF on Windows because of its features, I recommend WSL. I'll update this issue with my progress and possible problems you may face trying to get WSL and/or SF running in it.

    opened by ryash072007 3
  • Training stuck in "PENDING" status and editor not connecting

    Training stuck in "PENDING" status and editor not connecting

    I followed the installation instructions provided and everything goes well, but I couldn't train or use the pretrained models from any of the example envs. First of all, when I use the following command:

    gdrl --env_path envs/builds/JumperHard/jumper_hard.x86_64 --config_path envs/configs/ppo_config_jumper_hard.yaml

    It says

    usage: gdrl [-h] [--env_path ENV_PATH] [-f CONFIG_FILE] [-c RESTORE] [-e]
    gdrl: error: unrecognized arguments: --config_path envs/configs/ppo_config_jumper_hard.yaml

    So I just changed the argument --config_path to -f and now it works, but...

    == Status ==
    Memory usage on this node: 6.1/15.5 GiB
    Using FIFO scheduling algorithm.
    Resources requested: 0/4 CPUs, 0/0 GPUs, 0.0/7.38 GiB heap, 0.0/3.69 GiB objects
    Result logdir: /home/hibiscus-tea/ray_results/PPO/jumper_hard
    Number of trials: 1/1 (1 PENDING)
    +-----------------------+----------+-------+
    | Trial name            | status   | loc   |
    |-----------------------+----------+-------|
    | PPO_godot_0479d_00000 | PENDING  |       |
    +-----------------------+----------+-------+

    It stays like that forever. Neither running jumper_hard.x86_64 nor running the game from the editor changes anything. If I use the pretrained model command it stays the same. I tried the same process on Windows 10 and I get the same results. I think I am missing something. The editor outputs this:

    getting command line arguments
    Waiting for one second to allow server to start
    trying to connect to server 03

    If I change the const DEFAULT_PORT to 6007 (the default godot port) it outputs this:

    getting command line arguments
    Waiting for one second to allow server to start
    trying to connect to server 02
    performing handshake
    server disconnected, closing

    I hope you can help me with this issue. This project looks amazing and I am looking forward to the multi-agents update. :)

    opened by AleryBerry 3
  • TypeError: '>=' not supported between instances of 'list' and 'int'

    TypeError: '>=' not supported between instances of 'list' and 'int'

    Traceback (most recent call last):
      File "C:\Users\ryash\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "C:\Users\ryash\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\Scripts\gdrl.exe\__main__.py", line 7, in <module>
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\main.py", line 108, in main
        training_function(args, extras)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\wrappers\stable_baselines_wrapper.py", line 78, in stable_baselines_training
        env = StableBaselinesGodotEnv()
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\wrappers\stable_baselines_wrapper.py", line 12, in __init__
        self.env = GodotEnv(port=port, seed=seed)
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\core\godot_env.py", line 44, in __init__
        self._get_env_info()
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\godot_rl\core\godot_env.py", line 235, in _get_env_info
        observation_spaces[k] = spaces.Discrete(v["size"])
      File "C:\Users\ryash\Documents\Godot RL\try1\RL1\lib\site-packages\gym\spaces\discrete.py", line 15, in __init__
        assert n >= 0
    TypeError: '>=' not supported between instances of 'list' and 'int'

    opened by ryash072007 2
  • Installation Problems

    Installation Problems

    Hi there,

    I am currently looking into your project and it looks super interesting.

    Unfortunately I am having trouble installing the environment on Windows. The first errors occur when running the instruction conda env create from the installation guide (see the attached screenshot).

    Could it be that you are using packages for Linux only? _openmp_mutex=4.5 seems to be one of them. Is there a way to get this project running on Windows? It would be cool, because I am considering using it for my master's thesis.

    Cheers!

    opened by visuallization 2
  • Reward always displayed as nan

    Reward always displayed as nan

    Hello,

    I am having another issue: the rewards are always displayed as nan in the console, like this:

    == Status ==
    Current time: 2022-06-21 15:40:17 (running for 00:04:32.32)
    Memory usage on this node: 14.3/31.3 GiB
    Using FIFO scheduling algorithm.
    Resources requested: 2.0/16 CPUs, 1.0/1 GPUs, 0.0/13.01 GiB heap, 0.0/6.5 GiB objects (0.0/1.0 accelerator_type:G)
    Result logdir: /home/ls11det/ray_results/PPO/editor
    Number of trials: 1/1 (1 RUNNING)
    +-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------+
    | Trial name            | status   | loc                   |   iter |   total time (s) |   ts |   reward |   episode_reward_max |   episode_reward_min |   episode_len_mean |
    |-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------|
    | PPO_godot_0dbb4_00000 | RUNNING  | 129.217.38.190:865027 |      3 |          208.046 | 3072 |      nan |                  nan |                  nan |                nan |
    +-----------------------+----------+-----------------------+--------+------------------+------+----------+----------------------+----------------------+--------------------+
    

    I even tried just giving back a number as reward to see if any of my code was causing the issue, but it is still displayed as nan:

    func get_reward():
    	# What behavior do you want to reward, kills? penalties for death, key waypoints
    	return 0.5
    

    I also added a print in the sync.gd script where it collects and sends the reward, and it picks up the 0.5 correctly. Is there anything I am missing here?

    opened by themars2011 2
  • BallChase example: Does best_fruit_distance need a reset after collection?

    BallChase example: Does best_fruit_distance need a reset after collection?

    I am not sure if I understand the examples correctly. In the BallChase example best_fruit_distance is initialized and reset in the reset() method. But shouldn't it also be reset after every fruit collection? Only the distance reduction to the first fruit gets rewarded at the moment.
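
    To make the shaping logic concrete, here is a rough Python-style illustration of the behaviour being described (the actual example is written in GDScript, and the names and values here are paraphrased rather than taken from the repository):

    # Illustrative sketch only: the real BallChase example is GDScript; names
    # and values are paraphrased, not from the repository.
    INITIAL_DISTANCE = 10_000.0  # hypothetical "very far away" starting value
    best_fruit_distance = INITIAL_DISTANCE  # currently reset only in reset()

    def shaping_reward(distance_to_fruit, fruit_collected):
        global best_fruit_distance
        reward = 0.0
        if distance_to_fruit < best_fruit_distance:
            # reward any improvement over the closest approach so far
            reward += best_fruit_distance - distance_to_fruit
            best_fruit_distance = distance_to_fruit
        if fruit_collected:
            # without this reset, only progress toward the first fruit is
            # ever rewarded, which is the behaviour the issue describes
            best_fruit_distance = INITIAL_DISTANCE
        return reward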

    bug 
    opened by mischkadb 2
  • Errors with default config: KeyError "observation_space"

    Errors with default config: KeyError "observation_space"

    Hi, I just installed godot_rl_agents as described in the installation instructions. I have been trying to train an agent for one of the default envs but I get the following error

    (pid=38965) KeyError: 'observation_space'
    (pid=38965) SCRIPT ERROR: handle_message: Invalid get index 'type' (on base: 'Nil').
    (pid=38965)    At: res://addons/godot_rl_agents/sync.gdc:172.
    Traceback (most recent call last):
      File "/home/ashutosh/HDD/anaconda3/envs/godot_rl/bin/gdrl", line 33, in <module>
        sys.exit(load_entry_point('godot-rl-agents', 'console_scripts', 'gdrl')())
      File "/home/ashutosh/HDD/MachineLearning/godot_rl_agents/godot_rl_agents/core/main.py", line 91, in main
        results = tune.run(
      File "/home/ashutosh/HDD/anaconda3/envs/godot_rl/lib/python3.8/site-packages/ray/tune/tune.py", line 555, in run
        raise TuneError("Trials did not complete", incomplete_trials)
    

    I also manually tried printing json_dict and here are the contents:

    {'algorithm': 'PPO', 'stop': {'episode_reward_mean': 5000, 'training_iteration': 1000, 'timesteps_total': 200000000}, 'config': {'env': 'godot', 'env_config': {'framerate': None, 'action_repeat': None, 'show_window': False, 'seed': 0, 'env_path': 'envs/builds/BallChase/ball_chase.x86_64'}, 'framework': 'torch', 'lambda': 0.95, 'gamma': 0.95, 'vf_clip_param': 100.0, 'clip_param': 0.2, 'entropy_coeff': 0.001, 'entropy_coeff_schedule': None, 'train_batch_size': 1024, 'sgd_minibatch_size': 128, 'num_sgd_iter': 16, 'num_workers': 4, 'lr': 0.0003, 'num_envs_per_worker': 16, 'batch_mode': 'truncate_episodes', 'rollout_fragment_length': 32, 'num_gpus': 1, 'model': {'fcnet_hiddens': [256, 256], 'num_framestacks': 4}, 'no_done_at_end': True, 'soft_horizon': True}}
    

    Here's the full log : https://www.toptal.com/developers/hastebin/epovenonow.yaml

    Do I absolutely need to keep the Godot editor open? I'm currently using the ball_chase.x86_64 from the repo.

    Lastly, opening an environment in Godot spawns 16 agents together. Is there a way to change this?

    opened by ashutoshbsathe 2
  • Unable to open any example in the godot editor

    Unable to open any example in the godot editor

    I just get a message that says "the following file does not specify the version of Godot with which it was created. If you proceed with opening it, it will be configured for Godot's file format", and when I force open it, the project immediately closes. (This means I can't run "gdrl.interactive".)

    I also noticed that ryash072007 managed to get sb3 working to some extent, and would greatly appreciate any advice on how to accomplish that.

    (I am using Anaconda Powershell prompt and Godot 3.5.1)

    opened by Jetpackjules11 4
  • What may be happening if Godot freezes when performing handshake?

    What may be happening if Godot freezes when performing handshake?

    I'm using a Linux VM to run the SF part of the training and am using port forwarding to allow it to communicate with my host computer. However, while performing the handshake, the game just gets stuck. I have tried debugging this but nothing has worked. Do you know what may be happening?

    opened by ryash072007 4
  • Export model to ONNX

    Export model to ONNX

    This is a suggestion/request that I want to contribute to. I have started work on this feature (which I have committed to my fork), but I am not well versed in Torch code. I have gotten to the point where the model gets loaded from the checkpoint, but I get an error saying I need to pass a Tensor of shape [..., 8] to the torch.onnx.export function.
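
    For context, torch.onnx.export traces the model with an example input, and the "[..., 8]" in the error refers to that example tensor's shape. Below is a minimal, hedged sketch of the call; the policy network is a hypothetical stand-in for the model loaded from the checkpoint, with only the observation size (8) taken from the error message.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for the policy network loaded from the checkpoint;
    # only the observation size (8, from the error above) comes from the issue.
    model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))

    # torch.onnx.export needs an example input to trace the graph with.
    dummy_obs = torch.randn(1, 8)

    torch.onnx.export(
        model,            # the loaded torch.nn.Module
        dummy_obs,        # example input of shape [batch, 8]
        "policy.onnx",    # output file
        input_names=["obs"],
        output_names=["action"],
        opset_version=11,
    )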

    opened by yaelatletl 6
  • Using TorchSharp in Godot

    Using TorchSharp in Godot

    Hi, Ed! I have a problem using the TorchSharp NuGet library in the Godot C# version. Every time I try to use it in Godot I get an error like:

    System.DllNotFoundException: LibTorchSharp assembly: unknown assembly type: unknown type member:

    But the same code works in a regular console project without Godot involved.

    I see you mentioned in another issue (https://github.com/virtualmlnet/hackathon-2021/issues/6#issuecomment-968059783) that you have tried TorchSharp, and it seems that it can work but just does not support the ONNX format. If so, can you share how you configured the Godot project to make it work with TorchSharp? Or maybe you can share a demo project?

    opened by HangedDream 1
  • Questions on performance and headless

    Questions on performance and headless

    Hi @edbeeching

    thanks for your API!

    I've got two questions: In your paper you state that 12k interactions per second are recorded. How many environments ran in parallel for this result? And do you need X for running environments featuring visual observations? Your roadmap says that headless mode is not supported yet.

    I'm basically looking for alternatives to ml-agents that run significantly faster. A single Unity build with only one environment is capable of generating only around 200-300 interactions per second.

    opened by MarcoMeter 1
Releases(v0.2.2)
  • v0.2.2(Apr 21, 2022)

  • v0.2.1(Mar 28, 2022)

  • v0.2.0(Mar 24, 2022)

    Implemented a number of features, bug fixes and improvements to the documentation.

    • Including an updated sensor suite.
    • New checkpoints for the updated sensors.
    • The conda environment should now work out of the box and support GPUs. #8 #9
    • Fixed a bug with the reward function in the BallChase env #11
    • Improved documentation #7
  • v0.1.0(Oct 17, 2021)

Owner
Edward Beeching
PhD Student in Deep Reinforcement Learning at INRIA, Chroma research group, INSA Lyon, France.