Multivariate Time Series Forecasting with Efficient Transformers. Code for the paper "Long-Range Transformers for Dynamic Spatiotemporal Forecasting."

Overview

Spacetimeformer Multivariate Forecasting

This repository contains the code for the paper "Long-Range Transformers for Dynamic Spatiotemporal Forecasting", Grigsby, Wang, and Qi, 2021 (arXiv: 2109.12218).

[Figure: spatiotemporal embedding diagram]

Transformers are a high-performance approach to sequence-to-sequence time series forecasting. However, stacking the values of multiple variables into each token only allows the model to learn relationships across time; this can ignore important spatial relationships between variables. Our model (nicknamed "Spacetimeformer") flattens multivariate time series into extended sequences where each token represents the value of one variable at a given timestep. Long-Range Transformers can then learn relationships over both time and space. For much more information, please refer to our paper.
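As a rough illustration of the flattening idea, here is a minimal numpy sketch (illustrative only; this is not the model's actual embedding code):

import numpy as np

# A (timesteps x variables) series becomes one long token sequence where each
# token carries (value, time index, variable index), so attention can relate
# any variable at any timestep to any other variable at any other timestep.
T, N = 4, 3
y = np.arange(T * N, dtype=float).reshape(T, N)  # toy multivariate series

values = y.flatten()                   # token values, length T*N
time_idx = np.repeat(np.arange(T), N)  # the timestep each token came from
var_idx = np.tile(np.arange(N), T)     # the variable each token represents

tokens = np.stack([values, time_idx, var_idx], axis=1)  # shape (T*N, 3)
# the flattened sequence is N times longer than a one-token-per-timestep encoding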

We will be adding additional instructions, example commands and dataset links in the coming days.

Installation

This repository was written and tested for Python 3.8 and PyTorch 1.9.0.

git clone https://github.com/UVA-MachineLearningBioinformatics/spacetimeformer.git
cd spacetimeformer
conda create -n spacetimeformer python==3.8
source activate spacetimeformer
pip install -r requirements.txt
pip install -e .

This installs a Python package called spacetimeformer.

Dataset Setup

CSV datasets like AL Solar, NY-TX Weather, Exchange Rates, and the Toy example are included with the source code of this repo.

Larger datasets should be downloaded and their folders placed in the data/ directory. You can find them with this Google Drive link. Note that the metr-la and pems-bay data comes directly from this repo - all we've done is skip a step for you and provide the raw train, val, test *.npz files our dataset code expects.

Recreating Experiments with Our Training Script

The main training functionality for spacetimeformer and most baselines considered in the paper can be found in the train.py script. The training loop is based on the pytorch_lightning framework.

Commandline instructions for each experiment can be found using the format: python train.py *model* *dataset* -h.
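For example, to see every commandline option for the Spacetimeformer model on the metr-la dataset:

python train.py spacetimeformer metr-la -h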

Model Names:

  • linear: a basic autoregressive linear model.
  • lstnet: a more typical RNN/Conv1D model for multivariate forecasting. Based on the attention-free implementation of LSTNet.
  • lstm: a typical encoder-decoder LSTM without attention. We use scheduled sampling to anneal teacher forcing throughout training.
  • mtgnn: a hybrid GNN that learns its graph structure from data. For more information, refer to the original paper. We use the implementation from pytorch_geometric_temporal.
  • spacetimeformer: the multivariate long-range transformer architecture discussed in our paper.
    • note that the "Temporal" ablation discussed in the paper is a special case of the spacetimeformer model: set --embed_method temporal (see the example command at the end of this README). Spacetimeformer has many configurable options, and we try to provide a thorough explanation with the commandline -h instructions.

Dataset Names:

  • metr-la and pems-bay: traffic forecasting datasets. We use a very similar setup to DCRNN.
  • toy2: the toy dataset mentioned at the beginning of our experiments section. It is heavily based on the toy dataset in TPA-LSTM.
  • asos: the codebase's name for what the paper calls "NY-TX Weather."
  • solar_energy: the codebase's name for what is more commonly called "AL Solar."
  • exchange: a dataset of exchange rates. Spacetimeformer performs relatively well here, but this is a tiny dataset of highly non-stationary data where linear is already a SOTA model.
  • precip: a challenging spatial message-passing task that we have not yet been able to solve. We collected daily precipitation data from a latitude-longitude grid over the Continental United States. The multivariate sequences are sampled from a ringed "radar" configuration as shown below in green. We expand the size of the dataset by randomly moving this radar around the country.

Logging with Weights and Biases

We used wandb to track all of our results during development, and you can do the same by hardcoding the correct organization/username and project name in the train.py file. Comments indicate the location near the top of the main method. wandb logging can then be enabled with the --wandb flag.
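A minimal sketch of that hardcoding (the variable names here are assumptions; follow the comments in train.py for the exact spot to edit):

# near the top of main() in train.py (illustrative names only)
project = "spacetimeformer"        # your wandb project name
entity = "your-username-or-org"    # your wandb username or organization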

There are two automated figures that can be saved to wandb between epochs: attention diagrams (e.g., Figure 4 of our paper) and prediction plots (e.g., Figure 6 of our paper). Enable attention diagrams with --attn_plot and prediction curves with --plot (both appear in the asos example below).

Example Spacetimeformer Training Commands

Toy Dataset

python train.py spacetimeformer toy2 --run_name spatiotemporal_toy2 \
--d_model 100 --d_ff 400 --enc_layers 4 --dec_layers 4 \
--gpus 0 1 2 3 --batch_size 32 --start_token_len 4 --n_heads 4 \
--grad_clip_norm 1 --early_stopping --trials 1

Metr-LA

python train.py spacetimeformer metr-la --start_token_len 3 --batch_size 32 \
--gpus 0 1 2 3 --grad_clip_norm 1 --d_model 128 --d_ff 512 --enc_layers 5 \
--dec_layers 4 --dropout_emb .3 --dropout_ff .3 --dropout_qkv 0 \
--run_name spatiotemporal_metr-la --base_lr 1e-3 --l2_coeff 1e-2

Temporal Attention Ablation with Negative Log Likelihood Loss on NY-TX Weather ("asos") with WandB Logging and Figures

python train.py spacetimeformer asos --context_points 160 --target_points 40 \
--start_token_len 8 --grad_clip_norm 1 --gpus 0 --batch_size 128 \
--d_model 200 --d_ff 800 --enc_layers 3 --dec_layers 3 \
--local_self_attn none --local_cross_attn none --l2_coeff .01 \
--dropout_emb .1 --run_name temporal_asos_160-40-nll --loss nll \
--time_resolution 1 --dropout_ff .2 --n_heads 8 --trials 3 \
--embed_method temporal --early_stopping --wandb --attn_plot --plot

Using Spacetimeformer in Other Applications

If you want to use our model in the context of other datasets or training loops, you will probably want to go a step lower than the spacetimeformer_model.Spacetimeformer_Forecaster pytorch-lightning wrapper. Please see spacetimeformer_model.nn.Spacetimeformer.

[Figure: model architecture diagram]
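Based on the interface discussed in the issues below, a forward pass looks roughly like this (a sketch only; tensor shapes and constructor arguments are assumptions, so check the class definition for the real signature):

# `model` is an instance of spacetimeformer_model.nn.Spacetimeformer.
# x_c/x_t carry the time features of the context/target windows and
# y_c/y_t carry the variable values; the issues below suggest passing
# zeros of the right shape for y_t at inference time.
output, (logits, labels), attn = model(x_c, y_c, x_t, y_t)

# With a distributional loss (e.g. --loss nll), `output` exposes .loc and
# .scale; .loc is the point forecast and .scale the predicted uncertainty.
y_hat = output.loc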

Citation

If you use this model in academic work, please feel free to cite our paper:

@misc{grigsby2021longrange,
      title={Long-Range Transformers for Dynamic Spatiotemporal Forecasting}, 
      author={Jake Grigsby and Zhe Wang and Yanjun Qi},
      year={2021},
      eprint={2109.12218},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

[Figure: spatiotemporal embedding diagram]

Comments
  • training_epoch_loop.py

    training_epoch_loop.py", line 486, in _update_learning_rates
        raise MisconfigurationException(
    pytorch_lightning.utilities.exceptions.MisconfigurationException: ReduceLROnPlateau conditioned on metric val/forecast_loss which is not available. Available metrics are: ['train/mape', 'train/mae', 'train/mse', 'train/rse', 'train/forecast_loss', 'train/class_loss', 'train/loss', 'train/acc']. Condition can be set using `monitor` key in lr scheduler dict

    opened by jeevanu 6
  • "CUDA extension for cauchy multiplication not found" when running toy project

    Hi!

    I get the error "cauchy multiplication not found" when running the command below. I have tried the suggested `python setup.py install` in the cauchy directory without success.

    python train.py spacetimeformer toy2 --run_name spatiotemporal_toy2 --d_model 100 --d_ff 400 --enc_layers 2 --dec_layers 2 --gpus 0 --batch_size 32 --start_token_len 4 --n_heads 4 --grad_clip_norm 1 --early_stopping --trials 1
    
    CUDA extension for cauchy multiplication not found. go to ./extensions/cauchy and try `python setup.py install`
    

    Any tips on how to fix this error?

    opened by christofer-f 4
  • How to get inference result?

    Thanks for your great work.

    I want to run inference with a trained spacetimeformer model. The model needs 4 inputs: (x_c, y_c, x_t, y_t). We have only x_c, y_c, and x_t ready.

    • y_t must be zeros with the right shape. Right?

    • output, (logits, labels), attn = spacetimeformer(x_c, y_c, x_t, y_t). The output has two tensors: output.loc and output.scale. Which is the actual prediction for y_t here: output.loc or output.scale?

      Thanks in advance.

    opened by TNA8 4
  • Using the model for inference

    Hi,

    Thank you for this module. Is there an example of running inference? I tried adding self.save_hyperparameters() to the Spacetimeformer_Forecaster init and then using the checkpoint as below, but this requires x_c, y_c, x_t, and y_t. Shouldn't inference only need the context and not the target? Is there some other step I need to take to produce a model that can be used for inference?

        model = Spacetimeformer_Forecaster.load_from_checkpoint(checkpoint_path=checkpoint_path)
    

    Thank you for your help.

    opened by threadthreasher 4
  • Question about embedding

    Hi, great job!

    I just have a quick question about the embedding part. Although positional embedding is standard in Transformer-based models, it seems you embed many different kinds of information into the original sequence additively; I am not sure if my understanding is correct. The concatenation form is easier for the network to learn at the cost of more parameters, and vice versa for the addition form. I believe the neural network has some capacity to extract different information from the raw input and project it into a higher-order latent space, but could performance deteriorate if we add too much? And would a larger hidden dimension mitigate this?

    BTW, I have also heard the explanation that (a+b)' = a' + b', meaning the addition and concatenation operations are equivalent from a gradient perspective. That contradicts my intuition.

    Thanks very much!

    opened by kpmokpmo 4
  • Issue while loading a model

    Hi, after training I'm trying to save the model and load it from the checkpoint, but I get size mismatch errors. I also tried loading right after training finished, and I still have the issue. It looks like the same issue as #13.

    The code :

        # Test from the code
        trainer.test(datamodule=data_module, ckpt_path="best")
        # Added only these two lines
        trainer.save_checkpoint("best.ckpt")
        forecaster = forecaster.load_from_checkpoint(checkpoint_path="best.ckpt")
    

    And I get a lot of size mismatch errors:

    RuntimeError: Error(s) in loading state_dict for Spacetimeformer_Forecaster:
    	size mismatch for spacetimeformer.enc_embedding.x_emb.embed_weight: copying a param with shape torch.Size([7, 6]) from checkpoint, the shape in current model is torch.Size([5, 6]).
    	size mismatch for spacetimeformer.enc_embedding.y_emb.weight: copying a param with shape torch.Size([512, 43]) from checkpoint, the shape in current model is torch.Size([512, 31]).
    	size mismatch for spacetimeformer.enc_embedding.var_emb.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.dec_embedding.x_emb.embed_weight: copying a param with shape torch.Size([7, 6]) from checkpoint, the shape in current model is torch.Size([5, 6]).
    	size mismatch for spacetimeformer.dec_embedding.y_emb.weight: copying a param with shape torch.Size([512, 43]) from checkpoint, the shape in current model is torch.Size([512, 31]).
    	size mismatch for spacetimeformer.dec_embedding.var_emb.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.classifier.weight: copying a param with shape torch.Size([18, 512]) from checkpoint, the shape in current model is torch.Size([1, 512]).
    	size mismatch for spacetimeformer.classifier.bias: copying a param with shape torch.Size([18]) from checkpoint, the shape in current model is torch.Size([1]).
    

    What am I doing wrong?

    Also, I didn't understand what --start_token_len does. The help text says:

    Length of decoder start token. Adds this many of the final context points to the start of the target sequence.
    

    For example, if I have one data point every hour and start_token_len = 3, will it predict the 3 hours from now? And train to predict those values?
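    A toy sketch of my reading of the quoted help text (numpy only; values are illustrative):

        import numpy as np

        context = np.array([10.0, 11.0, 12.0, 13.0, 14.0])  # hourly context (y_c)
        target = np.array([15.0, 16.0, 17.0])               # next 3 hours (y_t)
        start_token_len = 3

        # the final start_token_len context points are prepended to the target
        # sequence as the decoder "start token"; they seed the decoder rather
        # than being extra predictions
        decoder_input = np.concatenate([context[-start_token_len:], target])
        print(decoder_input)  # [12. 13. 14. 15. 16. 17.]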

    Best regards, and thanks for the model!

    opened by jgcb00 3
  • GPU memory leak

    After 80 epochs (8 hours), I got this error

    Epoch 80:  87%|████████████████▌  | 750/858 [04:56<00:42,  2.53it/s, loss=0.258]Traceback (most recent call last):
      File "train.py", line 442, in <module>
        main(args)
      File "train.py", line 424, in main
        trainer.fit(forecaster, datamodule=data_module)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 460, in fit
        self._run(model)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 758, in _run
        self.dispatch()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 799, in dispatch
        self.accelerator.start_training(self)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
        self.training_type_plugin.start_training(trainer)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
        self._results = trainer.run_stage()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_stage
        return self.run_train()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 871, in run_train
        self.train_loop.run_training_epoch()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 569, in run_training_epoch
        self.trainer.logger_connector.log_train_epoch_end_metrics(epoch_output)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 325, in log_train_epoch_end_metrics
        self.log_metrics(epoch_log_metrics, {})
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py", line 208, in log_metrics
        mem_map = memory.get_memory_profile(self.log_gpu_memory)
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/core/memory.py", line 365, in get_memory_profile
        memory_map = get_gpu_memory_map()
      File "/home/u7701783/.local/lib/python3.8/site-packages/pytorch_lightning/core/memory.py", line 384, in get_gpu_memory_map
        result = subprocess.run(
      File "/opt/conda/lib/python3.8/subprocess.py", line 516, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['/usr/bin/nvidia-smi', '--query-gpu=memory.used', '--format=csv,nounits,noheader']' returned non-zero exit status 255.
    Epoch 80:  87%|████████▋ | 750/858 [04:57<00:42,  2.52it/s, loss=0.258]         
    
    opened by Suppersine 3
  • load_from_checkpoint error

        model = spacetimeformer_model.Spacetimeformer_Forecaster(d_x=6, d_y=6)
        model.load_from_checkpoint(check_point)
        data_module, inv_scaler, null_val = create_dset()
        trainer = pl.Trainer()
        trainer.test(model=model, datamodule=data_module)

    There is an error: RuntimeError: Error(s) in loading state_dict for Spacetimeformer_Forecaster: Unexpected key(s) in state_dict:

    Could you tell me how to solve it? Thank you very much!

    opened by SingleXuu 3
  • Question on dimensions of Datasets

    Hello -

    I am looking to try new datasets with your model, but I'm having a little trouble understanding the x_dim and y_dim hardcoded into train.py.

    What exactly do each of these mean?

    For example, for the solar_energy dataset, I see that y_dim is 137 because it has 137 features, but where does the 6 come from?

    opened by ghost 3
  • Spacetimeformer invalid accelerator name

    I executed the following example from the README:

    python train.py spacetimeformer exchange --batch_size 32 --warmup_steps 1000 --d_model 200 --d_ff 700 --enc_layers 5 --dec_layers 6 --dropout_emb .1 --dropout_ff .3 --run_name pems-bay-spatiotemporal --base_lr 1e-3 --l2_coeff 1e-3 --loss mae --d_qk 30 --d_v 30 --n_heads 10 --patience 10 --decay_factor .8 --workers 4 --gpus 0
    

    I get the following error:

    Traceback (most recent call last):
      File "train.py", line 851, in <module>
        main(args)
      File "train.py", line 816, in main
        trainer = pl.Trainer(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/utilities/argparse.py", line 345, in insert_env_defaults
        return fn(self, **kwargs)
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 433, in __init__
        self._accelerator_connector = AcceleratorConnector(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 193, in __init__
        self._check_config_and_set_final_flags(
      File "/opt/miniconda3/envs/spacetimeformer/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 292, in _check_config_and_set_final_flags
        raise ValueError(
    ValueError: You selected an invalid accelerator name: `accelerator='dp'`. Available names are: cpu, cuda, hpu, ipu, mps, tpu.
    

    I see that in the train.py file the accelerator is set to "dp". I don't see "dp" as a valid accelerator in the PyTorch Lightning documentation; "dp" is a data-parallel strategy, and the accelerator should be "gpu".

    opened by alre5639 2
  • Questions about Global-Attention and Decoder Fast-Attention

    Thanks for uploading the code. This is a very interesting project!

    I have a few questions surrounding some implementation details.

    1) Global-Attention: What is the purpose of the WindowTime function, and how does it facilitate the concept of "global attention"?

    2) Decoder Fast-Attention: When the Performer option is selected, the FastAttention module for decoder self-attention is not instantiated with the causal flag set to True. This lets the attention mechanism of the forecast sequence attend to future tokens. Is this issue addressed somewhere else?

    opened by ShuAiii 2
  • Time Emb Dim

    The default time embedding dimension is 6; however, the paper states that the value used was 12. The example training commands don't specify this hyperparameter. To recreate the results in the paper, should 12 be used? Would this hyperparameter drastically change the results?

    I also had to update a line in the classification_loss function in spacetimeformer_model.py, presumably due to versioning differences:

        acc = torchmetrics.functional.accuracy(
            torch.softmax(logits, dim=1),
            labels,
            task="multiclass",
            num_classes=self.d_yc,
        )

    opened by angmc 0
  • Getting error on PEMS-BAY dataset

    I downloaded the pems-bay.h5 file from https://zenodo.org/record/4263971 and used https://github.com/liyaguang/DCRNN/blob/master/scripts/generate_training_data.py to generate test.npz, train.npz, val.npz files in ./data/pems_bay/.

    When I run the command from the README.md:

    python train.py spacetimeformer pems-bay --batch_size 32 --warmup_steps 1000 --d_model 200 --d_ff 700 --enc_layers 5 --dec_layers 6 --dropout_emb .1 --dropout_ff .3 --run_name pems-bay-spatiotemporal --base_lr 1e-3 --l2_coeff 1e-3 --loss mae --data_path ./data/pems_bay/ --d_qk 30 --d_v 30 --n_heads 10 --patience 10 --decay_factor .8

    I get the following error

    Traceback (most recent call last):
      File "/spacetimeformer-main/spacetimeformer/train.py", line 854, in <module>
        main(args)
      File "/spacetimeformer-main/spacetimeformer/train.py", line 758, in main
        ) = create_dset(args)
      File "/spacetimeformer-main/spacetimeformer/train.py", line 394, in create_dset
        data = stf.data.metr_la.METR_LA_Data(config.data_path)
      File "/spacetimeformer-main/spacetimeformer/data/metr_la/metr_la.py", line 43, in __init__
        x_c_train, y_c_train = self._split_set(context_train)
      File "/spacetimeformer-main/spacetimeformer/data/metr_la/metr_la.py", line 21, in _split_set
        time = 2.0 * x[:, :, 0] - 1.0
    IndexError: too many indices for array: array is 2-dimensional, but 3 were indexed

    opened by kadattack 2
  • Purpose of ignore_col Parameter?

    Hello -

    Thank you for all the improvements to the model; this is truly a very robust implementation.

    Quick question: What is the purpose of the ignore_columns parameter within the CSVTimeSeries class? Specifically, if you were targeting specific columns in a CSV dataset, how would ignoring specific columns change yc_dim and yt_dim?

    Thank you for answering a possible simple question.

    Best

    opened by ghost 0
  • All forecast values close to mean value

    Hello Jake

    I have a model which uses spatial attention with GAT and temporal attention with Transformers. I am working on the PEMS-BAY dataset for my project.

    During training, the decoder is one-shot, meaning all the timesteps are fed into the decoder at the same time. When I print out the predictions, they are all near the mean value and do not capture the trends in the data.

    Are there any obvious issues that you see?

    I checked the positional encoding and the attention weights from the graphs; I normalize with Z-score normalization and apply inverse scaling before comparison.

    opened by ghost 1
  • How to interpret the forecast results of Toy2, exchange, NY-TX datasets

    Hello dear author! After modifying the code, I tested it on toy2, exchange, and NY-TX with the specified commands (used without modification) and obtained the figures from the article. What are the respective prediction targets on these three datasets?

    (1) In the test of the toy2 dataset, wandb records the prediction results of the test set at different time steps, with eight pictures per time step. In the toy2 test set, how do I determine which time period corresponds to the multivariate prediction output?

    (2) The output on exchange is also eight pictures per time step. Do the eight graphs correspond one-to-one to the exchange rates of eight different countries (i.e., using the other countries' data as multivariate input to predict one country's output)?

    (3) What is the prediction goal on the NY-TX dataset: is it to combine the data of the six stations to predict one of them? The article states that the experiment forecasts the temperature of three weather stations in Texas and three in New York, and Appendix C.1 shows the forecast plot for one of these stations (Figure 6). With so many plots, how do we determine which station is being predicted, and what is the approximate time period corresponding to the prediction?

    opened by GA12WAINM 1