NaturalProofs: Mathematical Theorem Proving in Natural Language

Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho

This repo contains:

  • The NaturalProofs dataset and the mathematical reference retrieval task data.
  • Preprocessing code for NaturalProofs and the retrieval task data.
  • Training and evaluation code for mathematical reference retrieval.
  • Pretrained models for mathematical reference retrieval.

Please cite our work if you find the resources in this repository useful:

@article{welleck2021naturalproofs,
  title={NaturalProofs: Mathematical Theorem Proving in Natural Language},
  author={Welleck, Sean and Liu, Jiacheng and Le Bras, Ronan and Hajishirzi, Hannaneh and Choi, Yejin and Cho, Kyunghyun},
  year={2021}
}
Contents

  • NaturalProofs Dataset: Dataset, Preprocessing
  • Mathematical Reference Retrieval: Dataset, Setup, Preprocessing, Pretrained models, Training, Evaluation

NaturalProofs Dataset

We provide the preprocessed NaturalProofs dataset (JSON):

  • NaturalProofs Dataset: dataset.json [zenodo]
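
The file is plain JSON, so it can be inspected directly. A minimal sketch; it assumes nothing about the schema beyond standard JSON, and the easiest way to learn the layout is to print the top-level keys:

```python
import json

# Load the preprocessed NaturalProofs dataset and inspect its structure.
# No field names are assumed here; discover the schema interactively.
with open("dataset.json") as f:
    data = json.load(f)

if isinstance(data, dict):
    print(list(data.keys()))  # top-level sections of the dataset
else:
    print(type(data), len(data))
```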

Preprocessing

The steps below were used to create the NaturalProofs dataset.json from raw ProofWiki data:

  1. Download the ProofWiki XML.
  2. Preprocess the data using notebooks/parse_proofwiki.ipynb (see the parsing sketch after this list).
  3. Form the data splits using notebooks/dataset_splits.ipynb.
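
For orientation, here is a minimal sketch of the kind of streaming parse step 2 performs over a MediaWiki XML dump. It is illustrative only: the actual logic lives in notebooks/parse_proofwiki.ipynb, and `iter_pages` is a hypothetical helper (requires Python 3.8+ for the `{*}` namespace wildcard):

```python
import xml.etree.ElementTree as ET

# Illustrative only: stream <page> elements from a MediaWiki XML dump
# without loading the whole file into memory. The real parsing is in
# notebooks/parse_proofwiki.ipynb.
def iter_pages(xml_path):
    for _, elem in ET.iterparse(xml_path):
        if elem.tag.endswith("}page"):  # tags carry the MediaWiki namespace
            title = elem.findtext("{*}title")
            text = elem.findtext("{*}revision/{*}text")
            yield title, text
            elem.clear()  # free memory as we stream
```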

Mathematical Reference Retrieval

Dataset

The Mathematical Reference Retrieval dataset, derived from NaturalProofs, contains (x, r, y) examples with theorem statements x, positive and negative references r, and 0/1 labels y.
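
Conceptually, the examples could be constructed as in the sketch below: a theorem's cited references are positives, and other references are sampled as negatives. Function and variable names are illustrative, not the repo's actual code (see utils.py for the real construction):

```python
import random

# Hedged sketch of (x, r, y) construction: each theorem's cited references
# get y=1, and other references are randomly sampled with y=0.
def make_examples(theorem, positive_refs, all_refs, num_negatives=200):
    pool = [r for r in all_refs if r not in positive_refs]
    negatives = random.sample(pool, min(num_negatives, len(pool)))
    return ([(theorem, r, 1) for r in positive_refs] +
            [(theorem, r, 0) for r in negatives])
```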

We provide the version used in the paper (bert-base-cased tokenizer, 200 randomly sampled negatives):

  • Reference Retrieval Dataset: bert-base-cased 200 negatives

Pretrained Models

  • bert-base-cased
  • lstm

These models were trained with the "bert-base-cased 200 negatives" dataset provided above.

Setup

```
python setup.py develop
```

See the Dockerfile for additional version information.

Generating and tokenizing

To create your own version of the retrieval dataset, run utils.py.

This step is not needed if you are using the reference retrieval dataset provided above.

Example:

```
python utils.py --filepath /path/to/dataset.json --output-path /path/to/out/ --model-type bert-base-cased
# => Writing dataset to /path/to/out/dataset_tokenized__bert-base-cased_200.pkl
```
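
The tokenized .pkl stores model-ready token ids. For the bert-base-cased variant this corresponds to standard WordPiece encoding, e.g. via the Hugging Face transformers library (a tooling assumption, shown only to illustrate the tokenization; utils.py handles the full pipeline and serialization):

```python
from transformers import AutoTokenizer

# Encode a statement the way a bert-base-cased dataset variant would.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ids = tokenizer.encode("Let $G$ be a group.", truncation=True, max_length=512)
print(ids)  # token ids, including [CLS] and [SEP]
```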

Evaluation

To reproduce the paper's test evaluation metrics using the retrieval dataset and a pretrained model provided above:

  1. Predict the rankings:

```
python naturalproofs/predict.py \
  --method bert-base-cased \      # | lstm
  --model-type bert-base-cased \  # | lstm
  --datapath /path/to/dataset_tokenized__bert-base-cased_200.pkl \
  --datapath-base /path/to/dataset.json \
  --checkpoint-path /path/to/best.ckpt \
  --output-dir /path/to/out/ \
  --split test  # use valid during model development
```

  2. Compute metrics over the rankings:

```
python naturalproofs/analyze.py \
  --method bert-base-cased \      # | lstm
  --eval-path /path/to/out/eval.pkl \
  --analysis-path /path/to/out/analysis.pkl
```
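
analyze.py is the source of truth for the reported numbers; for intuition, ranking metrics such as mean average precision and recall@k have roughly the following shape (a hedged sketch, not the repo's implementation):

```python
# Sketch of two standard ranking metrics over a predicted ranking of
# reference ids against the gold set (illustrative; see analyze.py).
def average_precision(ranked_ids, gold_ids):
    gold, hits, total = set(gold_ids), 0, 0.0
    for rank, rid in enumerate(ranked_ids, start=1):
        if rid in gold:
            hits += 1
            total += hits / rank
    return total / max(len(gold), 1)

def recall_at_k(ranked_ids, gold_ids, k=10):
    # One common convention: fraction of gold references in the top k.
    return len(set(ranked_ids[:k]) & set(gold_ids)) / max(len(gold_ids), 1)
```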

Training

```
python naturalproofs/model.py \
  --datapath /path/to/dataset_tokenized__bert-base-cased_200.pkl \
  --default-root-dir /path/to/out/
```
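
Conceptually, the retrieval model scores a (theorem, reference) pair with two encoders and an inner product, roughly as sketched below. This is an assumption based on the paper's description, not naturalproofs/model.py verbatim:

```python
import torch.nn as nn

# Bi-encoder sketch: embed theorem x and reference r separately and score
# the pair by inner product; trained with binary cross-entropy on labels y.
class PairScorer(nn.Module):
    def __init__(self, encode_x, encode_r):
        super().__init__()
        self.encode_x = encode_x  # e.g. a BERT encoder for theorems
        self.encode_r = encode_r  # e.g. a BERT encoder for references

    def forward(self, x_ids, r_ids):
        hx = self.encode_x(x_ids)  # assumed to return [batch, dim] embeddings
        hr = self.encode_r(r_ids)
        return (hx * hr).sum(-1)   # one logit per (theorem, reference) pair
```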

Classical Retrieval Baselines

TF-IDF example:

```
python naturalproofs/baselines.py \
  --method tfidf \
  --datapath /path/to/dataset_tokenized__bert-base-cased_200.pkl \
  --datapath-base /path/to/dataset.json \
  --savedir /path/to/out/
```

Then use analyze.py as shown above to compute metrics.
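
For reference, a minimal TF-IDF retrieval loop in the spirit of the tfidf baseline might look like the following (scikit-learn assumed; baselines.py is the actual implementation and the texts below are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Fit TF-IDF on the reference corpus, then rank references for a query
# theorem by cosine similarity (placeholder texts; see baselines.py).
refs = ["Definition of a group.", "Identity element is unique."]
vectorizer = TfidfVectorizer()
ref_matrix = vectorizer.fit_transform(refs)

query_vec = vectorizer.transform(["Show the identity of a group is unique."])
scores = cosine_similarity(query_vec, ref_matrix).ravel()
ranking = scores.argsort()[::-1]  # reference indices, best first
```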
