Multi-Task Meta-Learning Modification with Stochastic Approximation

Overview

This repository contains the code for the paper
"Multi-Task Meta-Learning Modification with Stochastic Approximation".

Method pipeline (figure)

Dependencies

This code has been tested on Ubuntu 16.04 with Python 3.8 and PyTorch 1.8.

To install the required dependencies:

pip install -r requirements.txt
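Since the training commands below pass --use-cuda, it may help to verify the installed PyTorch version and GPU visibility after installation (a generic sanity check, not a script from this repository):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"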

Usage

To reproduce the results on the benchmarks described in our article, use the following scripts. To vary the type of experiment, change the script parameters that control the benchmark dataset, the number of shots, and the number of ways (e.g. miniImageNet 1-shot 5-way or CIFAR-FS 5-shot 2-way); a sketch is given below.
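For instance, taking the first MAML command below and switching it to CIFAR-FS 5-shot 2-way only changes the dataset, shot, and way flags (the run name here is illustrative, and --num-epochs may need adjusting per benchmark; the ProtoNet scripts use --train-shot/--train-way and --val-shot/--val-way instead):

python maml/train.py ./datasets/ \
    --run-name example-cifar-5shot-2way \
    --dataset cifarfs \
    --num-ways 2 \
    --num-shots 5 \
    --num-steps 5 \
    --num-epochs 600 \
    --use-cuda \
    --output-folder ./results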

MAML

Multi-task modification (MTM) for Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017).

Multi-task modifications for MAML are trained on top of a baseline MAML model, which has to be trained beforehand.
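Concretely, a baseline run saves its files under the --output-folder directory in a subfolder named after --run-name, and the MTM commands point --load at the model.th written there. This can be checked once the baseline run finishes (paths assume the flags used in this README):

ls ./results/reproduced-miniimagenet/
# expected to contain at least config.json (used later by maml/test.py) and model.th (passed to --load)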

To train MAML (reproduced) on miniImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-miniimagenet \
    --dataset miniimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA-Track on miniImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name mini-imagenet-mtm-spsa-track \
    --load "./results/reproduced-miniimagenet/model.th" \
    --dataset miniimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting spsa-track \
    --normalize-spsa-weights-after 100 \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on tieredImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-tieredimagenet \
    --dataset tieredimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA on tieredImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name tiered-imagenet-mtm-spsa \
    --load "./results/reproduced-tieredimagenet/model.th" \
    --dataset tieredimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting spsa-delta \
    --normalize-spsa-weights-after 100 \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on FC100 5-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-fc100 \
    --dataset fc100 \
    --num-ways 5 \
    --num-shots 5 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA-Coarse on FC100 5-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name fc100-mtm-spsa-coarse \
    --load "./results/reproduced-fc100/model.th" \
    --dataset fc100 \
    --num-ways 5 \
    --num-shots 5 \
    --num-steps 5 \
    --task-weighting spsa-per-coarse-class \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-cifar \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 600 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM Inner First-Order on CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name cifar-mtm-inner-first-order \
    --load "./results/reproduced-cifar/model.th" \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting gradient-novel-loss \
    --use-inner-optimizer \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM Backprop on CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name cifar-mtm-backprop \
    --load "./results/reproduced-cifar-5shot-5way/model.th" \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting gradient-novel-loss \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To test any of the models trained above, run:

python maml/test.py ./results/path-to-config/config.json --num-steps 10 --use-cuda

For instance, to test MAML MTM SPSA-Track on miniImageNet 1-shot 2-way benchmark, run:

python maml/test.py ./results/mini-imagenet-mtm-spsa-track/config.json --num-steps 10 --use-cuda

Prototypical Networks

Multi-task modification (MTM) for Prototypical Networks (ProtoNet) (Snell et al., 2017).

To train ProtoNet MTM SPSA-Track with ResNet-12 backbone on miniImageNet 1-shot 5-way benchmark, run:

python protonet/train.py \
    --dataset miniImageNet \
    --network ResNet12 \
    --tracking \
    --train-shot 1 \
    --train-way 5 \
    --val-shot 1 \
    --val-way 5

To test ProtoNet MTM SPSA-Track with ResNet-12 backbone on miniImageNet 1-shot 5-way benchmark, run:

python protonet/test.py --dataset miniImageNet --network ResNet12 --shot 1 --way 5

To train ProtoNet MTM Backprop with 64-64-64-64 backbone on CIFAR-FS 1-shot 2-way benchmark, run:

python protonet/train.py \
    --dataset CIFAR_FS \
    --train-weights \
    --train-weights-layer \
    --train-shot 1 \
    --train-way 2 \
    --val-shot 1 \
    --val-way 2

To test ProtoNet MTM Backprop with 64-64-64-64 backbone on CIFAR-FS 1-shot 2-way benchmark, run:

python protonet/test.py --dataset CIFAR_FS --shot 1 --way 2

To train ProtoNet MTM Inner First-Order with 64-64-64-64 backbone on FC100 10-shot 5-way benchmark, run:

python protonet/train.py \
    --dataset FC100 \
    --train-weights \
    --train-weights-opt \
    --train-shot 10 \
    --train-way 5 \
    --val-shot 10 \
    --val-way 5

To test ProtoNet MTM Inner First-Order with 64-64-64-64 backbone on FC100 10-shot 5-way benchmark, run:

python protonet/test.py --dataset FC100 --shot 10 --way 5

To train ProtoNet MTM SPSA with 64-64-64-64 backbone on tieredImageNet 5-shot 2-way benchmark, run:

python protonet/train.py \
    --dataset tieredImageNet \
    --train-shot 5 \
    --train-way 2 \
    --val-shot 5 \
    --val-way 2

To test ProtoNet MTM SPSA with 64-64-64-64 backbone on tieredImageNet 5-shot 2-way benchmark, run:

python protonet/test.py --dataset tieredImageNet --shot 5 --way 2

Acknowledgments

Our code uses some dataloaders from Torchmeta.

The code in the maml folder is based on the extended implementation from Torchmeta and on pytorch-maml. It has been updated so that the baseline scores more closely follow those of the original MAML paper.

The code in the protonet folder is based on the implementation from MetaOptNet. All .py files in this folder except dataloaders.py and optimize.py were adapted from that implementation and subsequently modified. A copy of the Apache License, Version 2.0 is available in the protonet folder.
