Unadversarial Examples: Designing Objects for Robust Vision

Overview

This repository contains the code necessary to replicate the major results of our paper:

Unadversarial Examples: Designing Objects for Robust Vision
Hadi Salman*, Andrew Ilyas*, Logan Engstrom*, Sai Vemprala, Aleksander Madry, Ashish Kapoor
Paper
Blogpost (MSR)
Blogpost (Gradient Science)

@article{salman2020unadversarial,
  title={Unadversarial Examples: Designing Objects for Robust Vision},
  author={Hadi Salman and Andrew Ilyas and Logan Engstrom and Sai Vemprala and Aleksander Madry and Ashish Kapoor},
  journal={arXiv preprint arXiv:2012.12235},
  year={2020}
}

Getting started

The following steps will get you set up with the required packages (additional packages are required for the 3D textures setting, described below):

  1. Clone our repo: git clone https://github.com/microsoft/unadversarial.git

  2. Install dependencies:

    conda create -n unadv python=3.7
    conda activate unadv
    pip install -r requirements.txt
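
To confirm the environment is set up, here is a quick sanity check (this assumes the requirements install torch and torchvision, which the rest of the README relies on; it only reads the installed versions):

    python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"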
    

Generating unadversarial examples for CIFAR10

Here we show a quick example of how to generate unadversarial examples for CIFAR-10; a similar procedure can be used with ImageNet. The entry point of our code is main.py (see the file for a full description of the arguments).
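
If you prefer not to open the file, the argument list can usually be printed directly; assuming main.py uses a standard argument parser, the following should work:

    python -m src.main --help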

1- Download a pretrained CIFAR-10 model, e.g.,

mkdir -p pretrained-models &&
wget -O pretrained-models/cifar_resnet50.ckpt "https://www.dropbox.com/s/yhpp4yws7sgi6lj/cifar_nat.pt?raw=1"
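
To verify that the download succeeded, you can peek inside the checkpoint (a minimal sketch; the exact key layout of the checkpoint is an assumption and may differ):

    import torch

    # Load the checkpoint on CPU and inspect its top-level structure.
    ckpt = torch.load("pretrained-models/cifar_resnet50.ckpt", map_location="cpu")
    print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))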

2- Run the following script

python -m src.main \
      --out-dir OUT_DIR \
      --exp-name demo \
      --dataset cifar \
      --data /tmp \
      --arch resnet50 \
      --model-path pretrained-models/cifar_resnet50.ckpt \
      --patch-size 10 \
      --patch-lr 0.001 \
      --training-mode booster \
      --epochs 30 \
      --adv-train 0

You can see the trained patch images in OUT_DIR/demo/save/ as training evolves.
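
For intuition, the booster training mode is the mirror image of an adversarial patch attack: the patch is overlaid on training images and updated by gradient descent on the classification loss, making the class easier (rather than harder) to recognize. Below is a minimal single-class sketch of that idea; it is illustrative only, not the repository's implementation, and the fixed corner placement and hyperparameters are assumptions:

    import torch
    import torch.nn.functional as F

    def overlay_patch(images, patch):
        """Paste a square patch onto the top-left corner of a batch of images."""
        boosted = images.clone()
        p = patch.size(-1)
        boosted[:, :, :p, :p] = patch
        return boosted

    def booster_step(model, patch, images, labels, lr=0.001):
        """One unadversarial update: move the patch so the loss *decreases*."""
        patch = patch.detach().requires_grad_(True)
        loss = F.cross_entropy(model(overlay_patch(images, patch)), labels)
        loss.backward()
        with torch.no_grad():
            patch = (patch - lr * patch.grad).clamp(0, 1)  # descent, not ascent
        return patch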

3- Now you can evaluate the pretrained model on a boosted CIFAR-10-C dataset (the trained patch is overlaid on CIFAR-10 images, then corruptions are added). Simply run

python -m src.evaluate_corruptions \
      --out-dir OUT_DIR \
      --exp-name demo \
      --model-path OUT_DIR/demo/checkpoint.pt.best \
      --args-from-store data,dataset,arch,patch_size

This will evaluate the pretrained model on various corruptions and display the results in the terminal.
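
Schematically, the evaluation measures accuracy on corrupted copies of the patched images. A sketch of that loop, reusing the overlay_patch helper from the sketch above (the real script handles the various CIFAR-10-C corruptions for you):

    import torch

    @torch.no_grad()
    def boosted_accuracy(model, loader, patch, corrupt):
        """Top-1 accuracy after patch overlay followed by a corruption.

        `corrupt` is any image-space corruption function (e.g. additive noise).
        """
        correct = total = 0
        for images, labels in loader:
            logits = model(corrupt(overlay_patch(images, patch)))
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.numel()
        return correct / total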

4- That's it!

Generating 3D unadversarial textures

The following steps were tested on these configurations:

  • Ubuntu 16.04, 8 x NVIDIA 1080Ti/2080Ti, 2x10-core Intel CPUs (w/ HyperThreading, 40 virtual cores), CUDA 10.2
  • Ubuntu 18.04, 2 x NVIDIA K80, 1x12-core Intel CPU, CUDA 10.2

1- Choose a dataset to use as background images. We used ImageNet in our paper, for which you will need ImageNet in PyTorch ImageFolder format somewhere on your machine. If you don't have it, you can use solid colors as the backgrounds instead (though the results might not match the paper).
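
If you are unsure whether your ImageNet copy is in the expected layout, torchvision's ImageFolder makes for a quick check (the path below is a placeholder):

    from torchvision import datasets

    # ImageFolder expects one subdirectory per class, e.g. train/n01440764/*.JPEG
    train_set = datasets.ImageFolder("/path/to/imagenet/train")
    print(len(train_set), "images across", len(train_set.classes), "classes")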

2- Install the requirements: you will need a machine with CUDA 10.2 installed (this process might work with other versions of CUDA but we only tested 10.2), as well as docker, nvidia-docker, and the requirements mentioned earlier in the README.

3- Go to the docker/ folder and run docker build --tag TAG ., replacing TAG with your preferred name for the image. This will build a Docker image with all the requirements installed!

4- Open launch.py and edit the IMAGENET_TRAIN and IMAGENET_VAL variables to point to the ImageNet dataset, if it's installed and you want to use it. Either way, replace TAG on the last line of the file with whatever you named your Docker image in the previous step.
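
For example, the edited variables might look like this (an illustrative excerpt; only the variable names come from launch.py, and the paths are placeholders):

    # launch.py (excerpt)
    IMAGENET_TRAIN = "/path/to/imagenet/train"  # PyTorch ImageFolder layout
    IMAGENET_VAL = "/path/to/imagenet/val"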

5- Alter the parameters in src/configs/config.json according to your setup. The only ones we would recommend changing are num_texcoord_renderers (should not exceed the number of CPU cores you have available), exp_name (the name of the output folder, which will be created inside OUT_DIR from step 6 below), and dataset (leave as-is if you are using ImageNet, otherwise change it to solids to use solid colors as the backgrounds).
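
An illustrative snippet of such a config (only the three keys above are taken from this README; their values, and any other keys in the real config.json, are assumptions):

    {
      "num_texcoord_renderers": 8,
      "exp_name": "warplane_demo",
      "dataset": "solids"
    }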

6- From inside the docker/ folder, run python launch.py [--with-imagenet] --out-dir OUT_DIR --gpus GPUS. The --with-imagenet argument should only be provided if you set the ImageNet paths in step 4. OUT_DIR should point to where you want the resulting models/output saved, and GPUS should be a comma-separated list of GPU IDs that you would like to run the job on.
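
For example, to train on GPUs 0-3 with ImageNet backgrounds (the output path is a placeholder):

    python launch.py --with-imagenet --out-dir /data/unadv-out --gpus 0,1,2,3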

7- This process should open a new terminal (inside your Docker container). In this terminal, run

    GPU_MODE=0 bash run_imagenet.sh [bus|warplane|ship|truck|car] /src/configs/config.json /out

8- Your 3D unadversarial texture should now be generating! Output, including example renderings, the texture itself, and the model checkpoint will be saved to $(OUT_DIR)/$(exp_name).

(Figure: an example unadversarial texture generated for the warplane class.)

Simulating 3D Unadversarial Objects in AirSim

Coming soon! Environments and 3D models, along with a Python API for controlling these objects and running online object recognition inside Microsoft's AirSim high-fidelity simulator.

Maintainers

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
