Deep Learning to Create StepMania SM Files

Overview

StepCOVNet

Running Audio to SM File Generator

Currently this only produces .txt files; use SMDataTools to convert the .txt output to .sm files. An example invocation is shown after the options below.

python stepmania_note_generator.py -i --input <string> -o --output <string> -m --model <string> -v --verbose <int>
  • -i --input input directory path to audio files
  • -o --output output directory path to the generated .txt files
  • -m --model input directory path to the StepCOVNet model
  • OPTIONAL: -v --verbose 1 enables full verbose output, 0 disables it; default is 0
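
For reference, a typical invocation might look like the following; the directory paths are placeholders and should be replaced with your own audio, output, and model locations:

python stepmania_note_generator.py -i ./audio_input -o ./txt_output -m ./stepcovnet_model -v 1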

Creating Training Dataset

Link to training data: https://drive.google.com/open?id=1eCRYSf2qnbsSOzC-KmxPWcSbMzi1fLHi

To create a training dataset, you need to parse the .sm files and convert sound files into .wav files:

  • SMDataTools should be used to parse the .sm files into .txt files.
  • wav_converter.py can be used to convert the audio files into .wav files; the default sample rate is 16,000 Hz. A minimal sketch of this step is shown below.
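
The sketch below illustrates what the conversion step does: resample each audio file to 16,000 Hz mono and write it out as a .wav file. It is not the repository's wav_converter.py, and it assumes the librosa and soundfile packages are installed.

# Illustrative sketch only -- not the repository's wav_converter.py.
# Resamples every file in input_dir to 16,000 Hz mono and writes .wav files to output_dir.
import os
import librosa
import soundfile as sf

def convert_to_wav(input_dir, output_dir, sample_rate=16000):
    os.makedirs(output_dir, exist_ok=True)
    for name in os.listdir(input_dir):
        path = os.path.join(input_dir, name)
        # librosa decodes most common audio formats and resamples while loading
        audio, _ = librosa.load(path, sr=sample_rate, mono=True)
        out_path = os.path.join(output_dir, os.path.splitext(name)[0] + ".wav")
        sf.write(out_path, audio, sample_rate)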

Once the parsed .txt files and .wav files are generated, place the .txt and .wav files into separate directories and run training_data_collection.py; an example invocation is shown after the options below.

python training_data_collection.py -w --wav <string> -t --timing <string> -o --output <string> --multi <int> --limit <int> --cores <int> --name <string> --distributed <int>
  • -w --wav input directory path to .wav files
  • -t --timing input directory path to timing files
  • -o --output output directory path to output dataset
  • OPTIONAL: --multi 1 collects STFTs using frame_size of [2048, 1024, 4096], 0 collects STFTs using frame_size of [2048]; default is 0
  • OPTIONAL: --limit > 0 stops data collection at limit, -1 means unlimited; default is -1
  • OPTIONAL: --cores > 0 sets the number of cores to use when collecting data; -1 uses the number of physical cores; default is 1
  • OPTIONAL: --name name to give the dataset; default names dataset based on the configuration parameters
  • OPTIONAL: --distributed 0 creates a single dataset, 1 creates a distributed dataset; default is 0
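
For reference, a typical invocation might look like the following; the directory paths and dataset name are placeholders:

python training_data_collection.py -w ./wavs -t ./timings -o ./dataset --multi 1 --cores -1 --name example_dataset

The --multi option controls how many STFT resolutions are collected per song. Below is a minimal sketch of the idea, assuming librosa and not necessarily matching the script's internal feature pipeline:

# Illustrative sketch only: multi-resolution STFT collection, i.e. computing
# magnitude spectrograms of the same signal at several frame sizes.
import librosa

def multi_resolution_stfts(audio, frame_sizes=(2048, 1024, 4096), hop_length=512):
    # One magnitude spectrogram per frame size, all sharing the same hop length
    return [abs(librosa.stft(audio, n_fft=size, hop_length=hop_length)) for size in frame_sizes]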

Training Model

Once the training dataset has been created, run train.py; an example invocation is shown after the options below.

python train.py -i --input <string> -o --output <string> -d --difficulty <int> --lookback <int> --limit <int> --name <string> --log <string>
  • -i --input input directory path to training dataset
  • -o --output output directory path to save model
  • OPTIONAL: -d --difficulty [0, 1, 2, 3, 4] sets the song difficulty to use when training to ["challenge", "hard", "medium", "easy", "beginner"], respectively; default is 0 or "challenge"
  • OPTIONAL: --lookback > 2 builds timeseries inputs of length lookback when modeling; default is 3
  • OPTIONAL: --limit > 0 limits the amount of training samples used during training, -1 uses all the samples; default is -1
  • OPTIONAL: --name name to give the finished model; default names the model based on the dataset used
  • OPTIONAL: --log output directory path to store tensorboard data
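
For reference, a typical invocation might look like the following; the directory paths and model name are placeholders:

python train.py -i ./dataset -o ./models -d 0 --lookback 3 --name example_model --log ./tensorboard_logs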

TODO

  • End-to-end unit tests for all modules

Credits

Owner: Chimezie Iwuanyanwu, Software Engineer