Transformers-Tutorials

Hi there!

This repository contains demos I made with the Transformers library by 🤗 HuggingFace. Currently, all of them are implemented in PyTorch.

NOTE: if you are not familiar with HuggingFace and/or Transformers, I highly recommend checking out our free course, which introduces you to several Transformer architectures (such as BERT, GPT-2, T5, BART, etc.) and gives an overview of the HuggingFace libraries, including Transformers, Tokenizers, Datasets, Accelerate and the hub.

Currently, it contains the following demos:

  • BERT (paper):
    • fine-tuning BertForTokenClassification on a named entity recognition (NER) dataset. Open In Colab
    • fine-tuning BertForSequenceClassification for multi-label text classification. Open In Colab
  • CANINE (paper):
    • fine-tuning CanineForSequenceClassification on IMDb Open In Colab
  • DETR (paper):
    • performing inference with DetrForObjectDetection Open In Colab
    • fine-tuning DetrForObjectDetection on a custom object detection dataset Open In Colab
    • evaluating DetrForObjectDetection on the COCO detection 2017 validation set Open In Colab
    • performing inference with DetrForSegmentation Open In Colab
    • fine-tuning DetrForSegmentation on COCO panoptic 2017 Open In Colab
  • GPT-J-6B (repository):
    • performing inference with GPTJForCausalLM to illustrate few-shot learning and code generation Open In Colab
  • ImageGPT (blog post):
    • (un)conditional image generation with ImageGPTForCausalLM Open In Colab
    • linear probing with ImageGPT Open In Colab
  • LayoutLM (paper):
    • fine-tuning LayoutLMForTokenClassification on the FUNSD dataset Open In Colab
    • fine-tuning LayoutLMForSequenceClassification on the RVL-CDIP dataset Open In Colab
    • adding image embeddings to LayoutLM during fine-tuning on the FUNSD dataset Open In Colab
  • LayoutLMv2 (paper):
    • fine-tuning LayoutLMv2ForSequenceClassification on RVL-CDIP Open In Colab
    • fine-tuning LayoutLMv2ForTokenClassification on FUNSD Open In Colab
    • fine-tuning LayoutLMv2ForTokenClassification on FUNSD using the 🤗 Trainer Open In Colab
    • performing inference with LayoutLMv2ForTokenClassification on FUNSD Open In Colab
    • true inference with LayoutLMv2ForTokenClassification (when no labels are available) + Gradio demo Open In Colab
    • fine-tuning LayoutLMv2ForTokenClassification on CORD Open In Colab
    • fine-tuning LayoutLMv2ForQuestionAnswering on DocVQA Open In Colab
  • LUKE (paper):
    • fine-tuning LukeForEntityPairClassification on a custom relation extraction dataset using PyTorch Lightning Open In Colab
  • SegFormer (paper):
    • performing inference with SegformerForSemanticSegmentation Open In Colab
    • fine-tuning SegformerForSemanticSegmentation on custom data using native PyTorch Open In Colab
  • Perceiver IO (paper):
    • showcasing masked language modeling and image classification with the Perceiver Open In Colab
    • fine-tuning the Perceiver for image classification Open In Colab
    • fine-tuning the Perceiver for text classification Open In Colab
    • predicting optical flow between a pair of images with PerceiverForOpticalFlow Open In Colab
    • auto-encoding a video (images, audio, labels) with PerceiverForMultimodalAutoencoding Open In Colab
  • T5 (paper):
    • fine-tuning T5ForConditionalGeneration on a Dutch summarization dataset on TPU using HuggingFace Accelerate Open In Colab
    • fine-tuning T5ForConditionalGeneration (CodeT5) for Ruby code summarization using PyTorch Lightning Open In Colab
  • TAPAS (paper):
  • TrOCR (paper):
    • performing inference with TrOCR to illustrate optical character recognition with Transformers, as well as making a Gradio demo Open In Colab
    • fine-tuning TrOCR on the IAM dataset using the Seq2SeqTrainer Open In Colab
    • fine-tuning TrOCR on the IAM dataset using native PyTorch Open In Colab
    • evaluating TrOCR on the IAM test set Open In Colab
  • Vision Transformer (paper):
    • performing inference with ViTForImageClassification Open In Colab
    • fine-tuning ViTForImageClassification on CIFAR-10 using PyTorch Lightning Open In Colab
    • fine-tuning ViTForImageClassification on CIFAR-10 using the 🤗 Trainer Open In Colab

... more to come! 🤗

If you have any questions regarding these demos, feel free to open an issue on this repository.

Btw, I was also the main contributor who added the following algorithms to the library:

  • TAbular PArSing (TAPAS) by Google AI
  • Vision Transformer (ViT) by Google AI
  • Data-efficient Image Transformers (DeiT) by Facebook AI
  • LUKE by Studio Ousia
  • DEtection TRansformers (DETR) by Facebook AI
  • CANINE by Google AI
  • BEiT by Microsoft Research
  • LayoutLMv2 (and LayoutXLM) by Microsoft Research
  • TrOCR by Microsoft Research
  • SegFormer by NVIDIA
  • ImageGPT by OpenAI
  • Perceiver by DeepMind

All of them were an incredible learning experience. I can recommend contributing an AI algorithm to the library to anyone!

Data preprocessing

Regarding preparing your data for a PyTorch model, there are a few options:

  • a native PyTorch dataset + dataloader. This is the standard way to prepare data for a PyTorch model, namely by subclassing torch.utils.data.Dataset and then creating a corresponding DataLoader (an iterable that allows looping over the items of a dataset in batches). When subclassing the Dataset class, one needs to implement 3 methods: __init__, __len__ (which returns the number of examples of the dataset) and __getitem__ (which returns an example of the dataset, given an integer index). Here's an example of creating a basic text classification dataset (assuming one has a CSV that contains 2 columns, namely "text" and "label"):
import torch
from torch.utils.data import Dataset

class CustomTrainDataset(Dataset):
    def __init__(self, df, tokenizer):
        self.df = df
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        # get item
        item = self.df.iloc[idx]
        text = item['text']
        label = item['label']
        # encode text
        encoding = self.tokenizer(text, padding="max_length", max_length=128, truncation=True, return_tensors="pt")
        # remove batch dimension which the tokenizer automatically adds
        encoding = {k:v.squeeze() for k,v in encoding.items()}
        # add label
        encoding["label"] = torch.tensor(label)

        return encoding

Instantiating the dataset then happens as follows:

from transformers import BertTokenizer
import pandas as pd

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
df = pd.read_csv("path_to_your_csv")

train_dataset = CustomTrainDataset(df=df, tokenizer=tokenizer)

Accessing the first example of the dataset can then be done as follows:

encoding = train_dataset[0]

In practice, one creates a corresponding DataLoader, which allows getting batches from the dataset:

from torch.utils.data import DataLoader

train_dataloader = DataLoader(train_dataset, batch_size=4, shuffle=True)

I often check whether the data is created correctly by fetching the first batch from the data loader, and then printing out the shapes of the tensors, decoding the input_ids back to text, etc.

batch = next(iter(train_dataloader))
for k,v in batch.items():
    print(k, v.shape)
# decode the input_ids of the first example of the batch
print(tokenizer.decode(batch['input_ids'][0].tolist()))
  • HuggingFace Datasets. Datasets is a library by HuggingFace that lets you easily load and process data in a very fast and memory-efficient way. It is backed by Apache Arrow, and has cool features such as memory-mapping, which allows you to only load data into RAM when it is required. It also has deep interoperability with the HuggingFace hub, allowing you to easily load well-known datasets as well as share your own with the community.

Loading a custom dataset as a Dataset object can be done as follows (you can install datasets using pip install datasets):

from datasets import load_dataset

dataset = load_dataset('csv', data_files={'train': ['my_train_file_1.csv', 'my_train_file_2.csv'], 'test': 'my_test_file.csv'})

Here I'm loading local CSV files, but other formats are supported as well (including JSON, Parquet, txt), and you can also load data from a local Pandas dataframe or dictionary, for instance. You can check out the docs for all details.
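For example, creating a Dataset directly from a pandas dataframe could look as follows (a minimal sketch; the dataframe and its "text"/"label" columns are just illustrative):

from datasets import Dataset
import pandas as pd

# hypothetical dataframe with a "text" and a "label" column
df = pd.DataFrame({"text": ["great movie", "terrible plot"], "label": [1, 0]})

# turn the dataframe into a HuggingFace Dataset
dataset = Dataset.from_pandas(df)
print(dataset[0])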

Training frameworks

Regarding fine-tuning Transformer models (or more generally, PyTorch models), there are a few options:

  • using native PyTorch. This is the most basic way to train a model, and requires the user to write the training loop manually. The advantage is that this is very easy to debug. The disadvantage is that you need to implement the training logic yourself, such as setting the model in the appropriate mode (model.train()/model.eval()), handling device placement (model.to(device)), etc. A typical training loop in PyTorch looks as follows (inspired by this great PyTorch intro tutorial):
import torch

model = ...

# I almost always use a learning rate of 5e-5 when fine-tuning Transformer based models
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

# put model on GPU, if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

epochs = 10  # number of epochs to train for (illustrative value)
for epoch in range(epochs):
    model.train()
    train_loss = 0.0
    for batch in train_dataloader:
        # put batch on device
        batch = {k:v.to(device) for k,v in batch.items()}
        
        # forward pass
        outputs = model(**batch)
        loss = outputs.loss
        
        train_loss += loss.item()
        
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    print(f"Loss after epoch {epoch}:", train_loss/len(train_dataloader))
    
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for batch in eval_dataloader:
            # put batch on device
            batch = {k:v.to(device) for k,v in batch.items()}
            
            # forward pass
            outputs = model(**batch)
            loss = outputs.loss
            
            val_loss += loss.item()
                  
    print(f"Validation loss after epoch {epoch}:", val_loss/len(eval_dataloader))
  • PyTorch Lightning (PL). PyTorch Lightning is a framework that automates the training loop written above, by abstracting it away in a Trainer object. Users no longer need to write the training loop themselves; instead, they can just do trainer = Trainer() and then trainer.fit(model). The advantage is that you can start training models very quickly (hence the name lightning), as all training-related code is handled by the Trainer object. The disadvantage is that it may be more difficult to debug your model, as the training and evaluation are now abstracted away.
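A rough sketch of what this looks like (not taken from any of the notebooks above; it assumes the train_dataloader defined earlier, whose batches contain input_ids, attention_mask and label):

import torch
import pytorch_lightning as pl
from transformers import BertForSequenceClassification

class LitTextClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

    def training_step(self, batch, batch_idx):
        # the model returns the loss when labels are provided
        outputs = self.model(input_ids=batch["input_ids"],
                             attention_mask=batch["attention_mask"],
                             labels=batch["label"])
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=5e-5)

lit_model = LitTextClassifier()
trainer = pl.Trainer(max_epochs=3)
trainer.fit(lit_model, train_dataloader)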
  • HuggingFace Trainer. The HuggingFace Trainer API can be seen as a framework similar to PyTorch Lightning in the sense that it also abstracts the training away using a Trainer object. However, contrary to PyTorch Lightning, it is not meant to be a general framework. Rather, it is made especially for fine-tuning Transformer-based models available in the HuggingFace Transformers library. The Trainer also has an extension called Seq2SeqTrainer for encoder-decoder models, such as BART, T5 and the EncoderDecoderModel classes. Note that all PyTorch example scripts of the Transformers library make use of the Trainer.
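A minimal sketch of the Trainer API (the argument values are illustrative, and eval_dataset is assumed to be a validation set defined analogously to the train_dataset above):

from transformers import BertForSequenceClassification, Trainer, TrainingArguments

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

training_args = TrainingArguments(
    output_dir="./results",              # where checkpoints will be saved
    num_train_epochs=3,
    per_device_train_batch_size=4,
    learning_rate=5e-5,
    evaluation_strategy="epoch",         # evaluate at the end of each epoch
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,           # assumed validation dataset
)

trainer.train()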
  • HuggingFace Accelerate: Accelerate is a newer project, made for people who still want to write their own training loop (as shown above), but would like it to work automatically regardless of the hardware (i.e. multiple GPUs, TPU pods, mixed precision, etc.).
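As a rough sketch of how the native PyTorch loop above changes with Accelerate (only the training part is shown, assuming the same model, optimizer and train_dataloader):

from accelerate import Accelerator

accelerator = Accelerator()

# Accelerate handles device placement, so no .to(device) calls are needed
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

model.train()
for batch in train_dataloader:
    outputs = model(**batch)
    loss = outputs.loss
    # use accelerator.backward instead of loss.backward
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()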