Data & Code for ACCENTOR: Adding Chit-Chat to Enhance Task-Oriented Dialogues

Overview

ACCENTOR consists of human-annotated chit-chat additions to the 23.8K dialogues from Schema-Guided Dialogue (SGD) and MultiWOZ 2.1, allowing researchers to study the contextual addition of chit-chat utterances for virtual assistants, making task-oriented dialogues more engaging and social.

We also provide three new models for ACCENTOR that are explicitly trained to predict user goals and to generate contextually relevant chit-chat responses.

Automatic and human evaluations show that, compared with the state-of-the-art task-oriented baseline, our models can code-switch between task and chit-chat to be more engaging, interesting, knowledgeable, and humanlike, while maintaining competitive task performance.

For more details, please refer to the paper (https://arxiv.org/abs/2010.12757).

Data

  • v1.0/candidates-{sgd,multiwoz}.json: Annotated chit-chat candidates. The format is as follows (loading sketches follow this list).
{
 "dialogue 1 / id": [
  [
   dialogue 1 / candidate 1 / turn id,
   dialogue 1 / candidate 1 / position,
   dialogue 1 / candidate 1 / candidate,
   dialogue 1 / candidate 1 / label,
   dialogue 1 / candidate 1 / justification
  ],
  [
   dialogue 1 / candidate 2 / turn id,
   ...
  ],
  ...
 ],
 "dialogue 2 / id": [
  ...
 ],
 ...
}
  • Folder v1.0/accentor-sgd: The augmented SGD dataset. The format follows the original SGD dataset, with two additional keys (i.e., beginning and end) that store lists of (candidate, label, justification) tuples.

    • The folder is generated by v1.0/accentor-sgd.py (with v1.0/candidates-sgd.json and the original SGD dataset as input). Usage: python3 v1.0/accentor-sgd.py --help.
  • v1.0/accentor-multiwoz-1k.json: 1K augmented MultiWOZ 2.1 dialogues. The format follows the original MultiWOZ dataset, with two additional keys (i.e., beginning and end) that store lists of (candidate, label, justification) tuples.

    • The file is generated by v1.0/accentor-multiwoz.py (with v1.0/candidates-multiwoz.json and the original MultiWOZ 2.1 dataset as input). Usage: python3 v1.0/accentor-multiwoz.py --help.
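To make the candidate layout concrete, here is a minimal Python sketch of reading the candidates file; the variable names are illustrative:

import json

# Each dialogue ID maps to a list of annotated candidates:
# [turn id, position, candidate, label, justification].
with open("v1.0/candidates-sgd.json") as f:
    candidates = json.load(f)

for dialogue_id, annotations in candidates.items():
    for turn_id, position, candidate, label, justification in annotations:
        print(dialogue_id, turn_id, position, label, candidate)
    break  # inspect only the first dialogue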
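Similarly, a sketch of inspecting the added annotations in an augmented SGD file; the file name below and the assumption that the beginning and end keys sit on individual turns follow the original SGD layout and are not verified here:

import json

# Augmented files keep the original SGD structure (a list of dialogues, each
# with a "turns" list); "beginning" and "end" are assumed to sit on each turn,
# holding (candidate, label, justification) tuples.
with open("v1.0/accentor-sgd/train/dialogues_001.json") as f:  # illustrative file name
    dialogues = json.load(f)

for turn in dialogues[0]["turns"]:
    for key in ("beginning", "end"):
        for candidate, label, justification in turn.get(key, []):
            print(key, label, candidate)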

Baseline Models

Preparation

  • Dependencies: ParlAI (commit af12799a) and Transformers (2.11.0)

  • Run the following commands to prepare the data for model training, as well as the off-the-shelf models (i.e., a task-oriented dialogue model and a chit-chat model) used by Arranger and Rewriter.

# Copy the augmented SGD dataset into the working directory.
cp -r ./v1.0/accentor-sgd .

# Generate the delexicalized data.
python3 gen_delex.py

# Convert the data into the ParlAI fromfile format.
python3 gen_parlai_data.py

# Fine-tune the 90M-parameter Transformer generator (the chit-chat model) with ParlAI.
parlai train_model -t fromfile:parlaiformat --fromfile_datapath ./parlai \
  --fromfile-datatype-extension true -m transformer/generator \
  --init-model zoo:tutorial_transformer_generator/model \
  --dict-file zoo:tutorial_transformer_generator/model.dict \
  --embedding-size 512 --n-layers 8 --ffn-size 2048 --dropout 0.1 --n-heads 16 \
  --learn-positional-embeddings True --n-positions 512 --variant xlm \
  --activation gelu --skip-generation True --fp16 True --text-truncate 512 \
  --label-truncate 128 --dict-tokenizer bpe --dict-lower True -lr 1e-06 \
  --optimizer adamax --lr-scheduler reduceonplateau --gradient-clip 0.1 \
  -veps 0.25 --betas 0.9,0.999 --update-freq 1 --attention-dropout 0.0 \
  --relu-dropout 0.0 -vp 15 -stim 60 -vme 20000 -bs 16 -vmt ppl -vmm min \
  --save-after-valid True --model-file ./train_90M

# Generate chit-chat responses for the dev and test sets with the trained model.
parlai interactive -mf ./train_90M < lm.input.dev.cc.txt > lm.output.dev.cc.txt

parlai interactive -mf ./train_90M < lm.input.test.cc.txt > lm.output.test.cc.txt

# Fine-tune GPT-2 as the task-oriented dialogue model.
python3 run_language_modeling.py --output_dir=output_gpt2_10epoch_1e-3_fp16 --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=lm.input.train.txt --do_eval --eval_data_file=lm.input.dev.txt --per_device_train_batch_size 2 --gradient_accumulation_steps 18 --num_train_epochs 10 --learning_rate 1e-3 --fp16 --overwrite_output_dir

# Run dev and test inference with the fine-tuned GPT-2 model.
python3 run_generation.py --input lm.input.dev.eval.txt --output dev.inference.gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

python3 run_generation.py --input lm.input.test.eval.txt --output test.inference.gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

SimpleTOD+

  • Dependency: Transformers (2.11.0)

# Fine-tune GPT-2 on the combined task-oriented and chit-chat data (the *.both.txt files).
python3 run_language_modeling.py --output_dir=output_both_gpt2_10epoch_1e-3_fp16 --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=lm.input.train.both.txt --do_eval --eval_data_file=lm.input.dev.both.txt --per_device_train_batch_size 2 --gradient_accumulation_steps 18 --num_train_epochs 10 --learning_rate 1e-3 --fp16 --overwrite_output_dir

# Run dev and test inference with the SimpleTOD+ model.
python3 run_generation.py --input lm.input.dev.eval.txt --output dev.inference.both_gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_both_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

python3 run_generation.py --input lm.input.test.eval.txt --output test.inference.both_gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_both_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

Arranger

  • Dependency: Transformers (2.2.0)

# Build the multiple-choice inputs for the Arranger.
python3 gen_arranger_input.py

# Train a RoBERTa-base multiple-choice model to arrange chit-chat candidates around task-oriented responses.
python3 run_multiple_choice.py --model_type roberta --task_name acc --model_name_or_path roberta-base --do_train --do_eval --do_test --do_lower_case --data_dir . --learning_rate 2e-5 --num_train_epochs 3 --max_seq_length 512 --output_dir acc_arranger_roberta_base_3epoch --per_gpu_eval_batch_size=16 --per_gpu_train_batch_size=1 --gradient_accumulation_steps 24 --overwrite_output --save_steps 10000

# Convert the model predictions into final dialogue outputs.
python3 gen_arranger_output.py

Rewriter

  • Dependency: Transformers (2.11.0)

# Build the training data for the Rewriter.
python3 gen_rewriter_data.py

# Fine-tune GPT-2 as the Rewriter.
python3 run_language_modeling.py --output_dir=output_ff_gpt2_10epoch_1e-3_fp16 --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=lm.input.train.ff.txt --do_eval --eval_data_file=lm.input.dev.ff.txt --per_device_train_batch_size 2 --gradient_accumulation_steps 18 --num_train_epochs 10 --learning_rate 1e-3 --fp16 --overwrite_output_dir

# Run dev and test inference with the Rewriter.
python3 run_generation.py --input lm.input.dev.eval.ff.txt --output dev.inference.ff_gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_ff_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

python3 run_generation.py --input lm.input.test.eval.ff.txt --output test.inference.ff_gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_ff_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

Evaluation

  • Dependency: the official evaluation script of SGD

  • Pass the output inference files (i.e., {dev,test}.inference*.json) to gen_predict.py to obtain act-slot F1 and BLEU-4 scores. For example,

python3 gen_predict.py --inference test.inference.both_gpt2_10epoch_1e-3_fp16.json --split test

  • The above command also generates a folder (named ./prediction/ by default), which can be passed to the official evaluation script of SGD to obtain the joint goal accuracy and average accuracy. For example,

python3 -m schema_guided_dst.evaluate --dstc8_data_dir ./simpletod/ --prediction_dir ./prediction/test/ --eval_set test --output_metric_file simpletod+_test_result.json

Citations

If you want to publish experimental results with our datasets or use the baseline models, please cite the following article:

@inproceedings{sun2020adding,
  title={Adding Chit-Chat to Enhance Task-Oriented Dialogues},
  author={Sun, Kai and Moon, Seungwhan and Crook, Paul and Roller, Stephen and Silvert, Becka and Liu, Bing and Wang, Zhiguang and Liu, Honglei and Cho, Eunjoon and Cardie, Claire},
  booktitle={Proceedings of NAACL-HLT},
  year={2021},
  url={https://arxiv.org/abs/2010.12757}
}

License

ACCENTOR is released under CC-BY-SA-4.0; see LICENSE for details.
