Disturbing Target Values for Neural Network Regularization: Attacking the Loss Layer to Prevent Overfitting

Overview

This repository implements target-disturbance regularization methods: DisturbLabel and Directional DisturbLabel for classification, and DisturbValue and DisturbError for regression.

1. Classification Task

PyTorch implementation of DisturbLabel: Regularizing CNN on the Loss Layer (CVPR 2016), extended with the Directional DisturbLabel method.

The classification code is built on top of the https://github.com/amirhfarzaneh/disturblabel-pytorch/blob/master/README.md project and uses the ResNet-18 implementation from https://github.com/huyvnphan/PyTorch_CIFAR10.

Directional DisturbLabel

  # Directional DisturbLabel: disturb labels only for samples the model already
  # classifies correctly with high confidence.
  if args.mode == 'ddl' or args.mode == 'ddldr':
      out = F.softmax(output, dim=1)      # class probabilities
      norm = torch.norm(out, dim=1)
      out = out / norm[:, None]           # L2-normalize each probability vector
      # Indices of samples whose normalized probability for the true class exceeds 0.5
      idx = [i for i in range(len(out)) if out[i, target[i]] > 0.5]
      if len(idx) > 0:
          target[idx] = disturb(target[idx]).to(device)  # disturb only those labels
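
The snippet above assumes a `disturb` callable that implements vanilla DisturbLabel. A minimal sketch of such a callable, assuming the CVPR 2016 formulation (each label is independently resampled uniformly over all classes with probability alpha), could look like the following; the class name and details are illustrative, not the repository's exact implementation:

import torch

class DisturbLabelSketch:
    # Hypothetical helper: with probability alpha, replace a label with a class
    # drawn uniformly at random (the true class included), as in DisturbLabel (CVPR 2016).
    def __init__(self, alpha=0.2, num_classes=10):
        self.alpha = alpha
        self.num_classes = num_classes

    def __call__(self, target):
        disturb_mask = torch.rand_like(target, dtype=torch.float) < self.alpha
        random_labels = torch.randint(0, self.num_classes, target.shape, device=target.device)
        return torch.where(disturb_mask, random_labels, target)

Since the CLI's --alpha is an integer from 0 to 100, the alpha here would presumably correspond to that value divided by 100.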

Usage

python main_ddl.py --mode=dl --alpha=20

Most important arguments

--dataset - the dataset to use

Possible values:

value     dataset
MNIST     MNIST
FMNIST    Fashion MNIST
CIFAR10   CIFAR-10
CIFAR100  CIFAR-100
ART       Art Images: Drawing/Painting/Sculptures/Engravings
INTEL     Intel Image Classification

Default: MNIST

--mode - regularization method applied

Possible values:

value    method
noreg    Without any regularization
dl       Vanilla DisturbLabel
ddl      Directional DisturbLabel
dropout  Dropout
dldr     DisturbLabel + Dropout
ddldr    Directional DisturbLabel + Dropout

Default: ddl

--alpha - alpha for vanilla DisturbLabel and Directional DisturbLabel

Possible values: int from 0 to 100. Default: 20

--epochs - number of training epochs

Default: 100
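
Putting the arguments above together, a full run might look like this (the specific values are illustrative):

python main_ddl.py --dataset=CIFAR10 --mode=ddl --alpha=20 --epochs=100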

2. Regression Task

DisturbValue

import torch

def noise_generator(x, alpha):
    # Small Gaussian noise for every target value.
    noise = torch.normal(0, 1e-8, size=(len(x), 1))
    # Zero the noise for roughly a (1 - alpha) fraction of samples, so only ~alpha of the targets are disturbed.
    noise[torch.randint(0, len(x), (int(len(x)*(1-alpha)),))] = 0
    return noise
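
In a training loop, the generated noise is added to the regression targets before the loss is computed. A minimal usage sketch, assuming a standard loop with variables y_pred, y_batch (shape (N, 1)), and criterion that are not part of the snippet above:

# Illustrative usage: disturb roughly 10% of the targets with tiny Gaussian noise.
noise = noise_generator(y_batch, alpha=0.1).to(y_batch.device)
loss = criterion(y_pred, y_batch + noise)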

DisturbError

def disturberror(outputs, values):
    # When a prediction is within epsilon of its target, perturb the target
    # by a quarter of the residual e = values - outputs.
    epsilon = 1e-8
    e = values - outputs
    for i in range(len(e)):
        if (e[i] < epsilon) & (e[i] >= 0):
            values[i] = values[i] + e[i] / 4
        elif (e[i] > -epsilon) & (e[i] < 0):
            values[i] = values[i] - e[i] / 4
    return values
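
disturberror adjusts the targets using the current predictions before the loss is evaluated. A minimal usage sketch under the same assumptions as above (model, x_batch, y_batch, and criterion are illustrative names):

# Illustrative usage: nudge targets whose predictions are already within epsilon.
y_pred = model(x_batch)
disturbed_targets = disturberror(y_pred.detach(), y_batch.clone())  # clone to avoid in-place changes to y_batch
loss = criterion(y_pred, disturbed_targets)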

Datasets

  1. Boston: 506 instances, 13 features
  2. Bike Sharing: 731 instances, 13 features
  3. Air Quality (AQ): 9357 instances, 10 features
  4. make_regression (MR): 5000 instances, 30 features (synthetic data generated with scikit-learn's make_regression)
  5. Housing Price - Kaggle (HP): 1460 instances, 81 features
  6. Student Performance (SP): 649 instances, 13 features (20 categorical features were dropped)
  7. Superconductivity Dataset (SD): 21263 instances, 81 features
  8. Communities & Crime (CC): 1994 instances, 100 features
  9. Energy Prediction (EP): 19735 instances, 27 features

Experiment Setting

Model: MLP with 3 hidden layers

Results: averaged over 20 runs

Hyperparameters: selected via grid search
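
A minimal sketch of a regressor in this spirit (the hidden-layer width and activation are assumptions, not the exact architecture used in the experiments):

import torch.nn as nn

class MLP(nn.Module):
    # Simple regressor with 3 hidden layers.
    def __init__(self, in_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # single regression output
        )

    def forward(self, x):
        return self.net(x)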

Usage

python main_new.py --de y --dataset "bike" --dv_annealing y --epoch 100 --T 80
python main_new.py --de y --dv y --dataset "bike" --epoch 100
python main_new.py --de y --l2 y --dataset "air" --epoch 100
python main_new.py --dv y --dv_annealing y --dataset "air" --epoch 100  # for the annealing setting, dv should be "y"

--dataset: 'bike', 'air', 'boston', 'housing', 'make_sklearn', 'superconduct', 'energy', 'crime', 'students'
--dropout, --dv (DisturbValue), --de (DisturbError), --l2, --dv_annealing: (string) y / n
--lr: (float)
--batch_size, --epoch, --T (cosine annealing period): (int)
--dv_annealing defaults: alpha_min = 0.05, alpha_max = 0.12, T_i = 80
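
For reference, a cosine schedule that anneals alpha between the defaults above could be sketched as follows; the exact formula used by --dv_annealing in this repository may differ:

import math

def cosine_annealed_alpha(epoch, alpha_min=0.05, alpha_max=0.12, T=80):
    # Hypothetical cosine annealing: alpha starts at alpha_max, decays to alpha_min over T epochs, then restarts.
    t = (epoch % T) / T
    return alpha_min + 0.5 * (alpha_max - alpha_min) * (1 + math.cos(math.pi * t))
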
Owner

Yongho Kim, Research Assistant