Security evaluation module with ONNX, PyTorch, and SecML.

Overview

🚀 🐼 🔥 PandaVision

Integrate and automate security evaluations with ONNX, PyTorch, and SecML!

Installation

Starting the server without Docker

If you want to run the server with Docker, skip to the next section.

This project uses RQ (Redis Queue) for handling the queue of requested jobs. Please install Redis if you plan to run this Flask server without using Docker.

Then, install the Python requirements, running the following command in your shell:

pip install -r requirements.txt

Make sure your Redis server is running on your local machine. Test the Redis connection with the following command:

redis-cli ping

The response PONG should appear in the shell.

If the database server is down, check the linked docs to find out how to restart it on your system.

Note: the code expects to connect to the database through its default port, 6379 for Redis.

Now we are ready to start the server. Don't forget that this system uses external workers to process the long-running tasks, so the workers must be started along with the server. Run the following commands from the app folder:

python app/worker.py

Now open another shell and run the server:

python app/runserver.py

Starting the server with Docker

If you already started the server locally, you can skip to the next section.

If you already started the server locally but want to start it with Docker instead, you should stop the running services. On Linux, press CTRL + C to stop the server and the worker, then stop the Redis service on the machine:

sudo service redis stop

To use the provided docker-compose file, install Docker and start the Docker service.

Since this project uses different interconnected containers, it is recommended to install and use Docker Compose.

Once installed, Docker Compose will automatically take care of the build and startup process. Just type the following command in your shell, from the app path:

docker build . -t pandavision && docker-compose build && docker-compose up

If you want to use more workers, run the following command instead (replace the number 2 with the number of workers you want to set up):

docker-compose up --scale worker=2

Usage

Quick start

For a demo example, you can download a sample containing a few images from the ImageNet dataset and a pretrained ResNet-50 model from the ONNX Model Zoo.

Download the files and place them in a known directory.

Supported models

You can export your own pretrained ONNX model from the library of your choice and pass it to the module. This project uses onnx2pytorch as a dependency to load ONNX models. Check out the supported operations if you encounter problems when importing models. A list of pretrained models is also available on the main page.
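For reference, a PyTorch model can be converted to ONNX with torch.onnx.export before being passed to the module. The snippet below is a minimal sketch assuming a torchvision ResNet-50 and a 224x224 input; adjust the input shape, file name, and opset version to your own model.

import torch
import torchvision

# Load a pretrained model (any PyTorch model can be exported the same way).
model = torchvision.models.resnet50(pretrained=True)
model.eval()

# Dummy input with the shape the model expects: (N, C, H, W).
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; the resulting file can be passed as the "trained-model" parameter.
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)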

Data preparation

The module accepts HDF5 files as data sources. The file should contain the samples in NCHW format.

Note that, while standardization can be performed through the APIs themselves (preferred), preprocessing such as resizing, reshaping, rotation, and normalization should be applied at this step.

An example that creates a subset of the ImageNet dataset can be found in this gist.
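As an illustration, the snippet below is a minimal sketch of how such an HDF5 file could be built with h5py. The dataset key names ("data" and "labels") and the sample list are assumptions for the example; check the gist above for the exact layout expected by the module.

import h5py
import numpy as np
from PIL import Image

# Hypothetical list of (image path, label) pairs to include in the file.
samples = [("images/cat.jpg", 281), ("images/dog.jpg", 207)]

images, labels = [], []
for path, label in samples:
    # Resize here; standardization can be left to the APIs (preferred).
    img = Image.open(path).convert("RGB").resize((224, 224))
    # HWC uint8 -> CHW float32 in [0, 1].
    arr = np.asarray(img, dtype=np.float32).transpose(2, 0, 1) / 255.0
    images.append(arr)
    labels.append(label)

with h5py.File("imagenet_subset.hdf5", "w") as f:
    f.create_dataset("data", data=np.stack(images))    # shape (N, C, H, W)
    f.create_dataset("labels", data=np.array(labels, dtype=np.int64))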

How to start a security evaluation job

The easy way

You can access the APIs through the web interface by connecting to http://localhost:8080. You will be taken to the home page of the service. Click the "Try it out!" button, and you will see a form to configure the security evaluation. Upload the model and the dataset of your choice, then select the parameters. Finally, click "Submit" and wait for the evaluation to finish. As soon as the worker finishes processing the data, you will see the security evaluation curve on the interface.

You can follow this video tutorial (click for YouTube video) for configuring the security evaluation:

Demo PandaVision

Coming soon ➡️ download data in CSV format.

The nerdy way

A security evaluation job can be enqueued with a POST request to /security_evaluations. The API returns the job's unique ID, which can be used to access the job status and results. Running workers wait for new jobs in the queue and consume them in FIFO order.

The request should specify the following parameters in its body:

  • dataset (string): the path of the dataset to load (a validation dataset should be used; otherwise, check out the "indexes" input parameter).
  • trained-model (string): the path of the trained ONNX model.
  • performance-metric (string): the performance metric used to evaluate the system's adversarial robustness. Currently, only the classification-accuracy metric is implemented.
  • evaluation-mode (string): one of 'fast', 'complete'. A fast evaluation will perform the experiment on a subset of the whole dataset (100 samples). For more info on the fast evaluation, see this paper.
  • task (string): the type of task the model is supposed to perform. This determines the attack scenario (available: "classification" - support for more use cases will be provided in the future).
  • perturbation-type (string): the type of perturbation to apply (available: "max-norm" or "random").
  • perturbation-values (Array of floats): values used for crafting the adversarial examples. These are specified as a fraction of the input range, in [0, 1] (e.g., a value of 0.05 will apply a perturbation of at most 5% of the input scale).
  • indexes (Array of ints): if a list of indexes is specified, it will be used to select a specific subset of samples from the dataset.
  • preprocessing (dict): dictionary with keys "mean" and "std" for defining custom preprocessing. The values should be expressed as lists. If not set, standard ImageNet preprocessing will be applied; specify an empty dict for no preprocessing.

Example request body:
{
  "dataset": "<dataset-path>.hdf5",
  "trained-model": "<model_path>.onnx",
  "performance-metric": "classification-accuracy",
  "evaluation-mode": "fast",
  "task": "classification",
  "perturbation-type": "max-norm",
  "perturbation-values": [
    0, 0.01, 0.02, 0.03, 0.04, 0.05
  ]
}
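As an alternative to the web interface, a job can be enqueued from Python, for example with the requests library. The sketch below assumes the server is reachable at http://localhost:8080 and that the response body contains the job ID; the exact response format may differ.

import requests

payload = {
    "dataset": "/path/to/imagenet_subset.hdf5",
    "trained-model": "/path/to/resnet50.onnx",
    "performance-metric": "classification-accuracy",
    "evaluation-mode": "fast",
    "task": "classification",
    "perturbation-type": "max-norm",
    "perturbation-values": [0, 0.01, 0.02, 0.03, 0.04, 0.05],
}

# Enqueue the security evaluation job.
response = requests.post("http://localhost:8080/security_evaluations", json=payload)
job_id = response.json()  # assumed to contain the job ID; adapt to the actual response
print("Enqueued job:", job_id)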

The API can also be tested with Postman (the collection is already configured to get the job ID and use it for fetching results):

Run in Postman

Job status API

Job status can be retrieved by sending a GET request to /security_evaluations/{id}, where {id} should be replaced with the job ID returned in the previous step. A GET to /security_evaluations will return the status of all jobs found in the queues and in the finished job registries.
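For example, a sketch of a status request from Python (assuming the same host and port as above):

import requests

job_id = "<job-id>"  # the ID returned when the job was enqueued
status = requests.get(f"http://localhost:8080/security_evaluations/{job_id}")
print(status.json())  # current job status; the exact fields may vary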

Job results API

Job results can be retrieved, once the job has entered the finished state, with a GET request to /security_evaluations/{id}/output. A request to this path with a job ID that is not yet in the finished state will redirect to the job status API.
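A simple client could poll the status endpoint and fetch the output once the job has finished, as in the sketch below. The status field name and value are assumptions; adapt them to the actual responses returned by the server.

import time
import requests

BASE = "http://localhost:8080/security_evaluations"
job_id = "<job-id>"

# Poll the job status until it reports completion.
while True:
    status = requests.get(f"{BASE}/{job_id}").json()
    # "status" == "finished" is an assumed field/value; check the real response.
    if isinstance(status, dict) and status.get("status") == "finished":
        break
    time.sleep(5)

# Fetch the security evaluation results.
results = requests.get(f"{BASE}/{job_id}/output").json()
print(results)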

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

If you don't have time to contribute yourself, feel free to open an issue with your suggestions.

License

This project is licensed under the terms of the MIT license. See LICENSE for more information.

Credits

Based on the Security evaluation module - ALOHA.eu project

Comments
  • Adv examples api (PGD support)

    Changelog

    • [x] Add caching for PGD attack

    • [x] Add curve visualization for PGD attack

    • [x] Add adversarial example visualization for PGD attack

    • [x] Extend to other attacks

    • [x] Fix min-distance attacks and PGD caching

    • [x] Document the changes

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Updates - attack logging, adversarial example inspection, debugging.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) Major changes.

    • Other information: (does the PR fix some issues? Tag them with #)

      Fixes #6.

    opened by maurapintor 0
  • Fix ram problems

    Changelog

    • Fixed CW attack memory problem
    • Efficient computation of adversarial examples in maximum-norm case

    What kind of change does this PR introduce?

    • Clear cache for CW attack (temporary fix until secml is updated to support optional caching).

    • PGD attack is run, for each value of perturbation, only in the cases that were not found adversarial for smaller norms.

    • Other information:

      Fixes #21

    opened by maurapintor 0
  • Memory problems when running complete evaluation

    The evaluation fails with some particular configurations of parameters. The reason seems to be related to the cached adversarial examples.

    Expected Behavior

    The attack should not make RAM usage explode.

    Current Behavior

    RAM fills up, then swap memory, then everything freezes.

    Possible Solution

    Possibly free unused data, such as the attack paths.

    Steps to Reproduce

    The evaluation fails with the following set of parameters:

    • resnet 50 net
    • imagenet data from the demo data
    • L2 CW attack

    Context (Environment)

    • OS: Ubuntu 20.04 LTS
    • Python Version: 3.8
    • Pandavision Version: 0.3
    • Browser: Mozilla Firefox
    bug enhancement 
    opened by maurapintor 0
  • fixed conflict for picker

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix

    • What is the current behavior? Now the GUI updates the attack selection and the perturbation size choices simultaneously.

    • Other information: Fixes #19

    opened by maurapintor 0
  • Attack selector bug

    Attack choices not shown.

    Expected Behavior

    On the GUI, if the perturbation type is picked, the selector for the attack should visualize the attack choices for the specified perturbation model.

    Current Behavior

    The attack choices are not updated.

    Possible Solution

    Possible conflict with the jquery call that updates the perturbation values.

    bug 
    opened by maurapintor 0
  • Fix docker compose version

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix for docker container. Feature: picker for perturbation size.

    • What is the new behavior?

    • The docker-compose version should now be at least v1.16, as it supports the YAML file format used in this repo for building the PandaVision architecture.
    • The GUI now allows picking the perturbation sizes for the evaluation.
    • Other information: Fixes #14 Fixes #17
    opened by maurapintor 0
  • Docker compose problem with services key

    Docker compose file format is incompatible with old versions.

    Expected Behavior

    The command:

    docker build . -t pandavision && docker-compose build && docker-compose up
    

    should build the container and run smoothly.

    Current Behavior

    The command produces, with some Docker-compose versions, the following output:

    Successfully tagged pandavision:latest
    ERROR: The Compose file './docker-compose.yml' is invalid because: Unsupported config option for services: 'worker'

    Possible Solution

    The problem seems related to the docker-compose versions that have incompatible specifications for the expected yaml: https://docs.docker.com/compose/compose-file/compose-versioning/#versioning

    A suggested solution, from this StackOverflow question, is to upgrade the docker-compose version and specify the version number at the top of the YAML file.

    Possible Implementation

    1. Add a line in the header of the YAML file stating version: "3".
    2. Suggest the minimum required docker-compose version, i.e. at least 1.6, in the README file.
    opened by maurapintor 0
  • Chart x-axis based on eps values rather than order

    The sec-eval curve currently presents results in a "linspace" fashion, i.e., with evenly spaced points. The possibility of adding scatter values should be added, so that the list of eps values can be dynamically adjusted to arbitrary ranges.

    bug enhancement 
    opened by maurapintor 0
  • GUI for security evaluations

    Add a visual interface for testing the APIs. It should display at least the model and data selection, plus the results of the security evaluation once completed.

    enhancement 
    opened by maurapintor 0
  • Sequential attacks

    I'm submitting a ...

    • feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    A multi-attack interface should be used. The interface should allow specifying a sequence of attacks to be used for testing the robustness of a model. The sequence will run the first attack on the whole dataset, then run the next attack in the sequence only on the points that fail for the given perturbation model.

    enhancement 
    opened by maurapintor 0
  • RobustBench models

    I'm submitting a ...

    • feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    Models from RobustBench should be available through the interface. The choice should be available next to the upload model button, where a dropdown menu should be displayed.

    enhancement 
    opened by maurapintor 0
  • Dataset samples

    I'm submitting a ...

    [x] feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    The interface should allow for selecting subsamples of commonly-used datasets without uploading them to the server. At least a sample from the following datasets should be included:

    • [ ] MNIST
    • [ ] CIFAR10
    • [ ] CIFAR100
    • [ ] ImageNet
    enhancement 
    opened by maurapintor 0
  • Feature request: other tasks

    I'm submitting a ...

    • Feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    More use cases could be supported, as in https://gitlab.com/aloha.eu/security_evaluation. Possible use cases are:

    • detection
    • segmentation
    enhancement 
    opened by maurapintor 0
  • GPU support for container

    The GPU can currently be used by running the server and worker locally. A container that also supports the GPU might be beneficial for speedups and would ease installation.

    enhancement help wanted 
    opened by maurapintor 0
Releases (v0.5)

Owner

Maura Pintor
🐼 Fighting evil adversarial pandas.