[ICSE2020] MemLock: Memory Usage Guided Fuzzing

Overview


MIT License

This repository provides the tool and the evaluation subjects for the paper "MemLock: Memory Usage Guided Fuzzing", accepted to the technical track of ICSE 2020. A pre-print of the paper is available as ICSE2020_MemLock.pdf.

The repository contains three folders: tool, tests and evaluation.

Tool

MemLock is built on top of the fuzzer AFL; check out AFL's website for more information. We provide a snapshot of MemLock here and, for simplicity, a shell script that performs the whole installation.

Requirements

  • Operating System: Ubuntu 16.04 LTS (we have tested the artifact on Ubuntu 16.04)
  • Run the following command to install Docker (Docker version 18.09.7):
    $ sudo apt-get install docker.io
    (If you have any questions about Docker, see Docker's documentation.)
  • Run the following command to install the required packages:
    $ sudo apt-get install git build-essential python3 cmake tmux libtool automake autoconf autotools-dev m4 autopoint help2man bison flex texinfo zlib1g-dev libexpat1-dev libfreetype6 libfreetype6-dev

Clone the Repository

$ git clone https://github.com/wcventure/MemLock-Fuzz.git MemLock --depth=1
$ cd MemLock

Build and Run the Docker Image

First, as with AFL, disable system core dumps and set the CPU frequency scaling governor to performance:

$ echo core|sudo tee /proc/sys/kernel/core_pattern
$ echo performance|sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Run the following commands to build the Docker image and configure the environment.

# build docker image
$ sudo docker build -t memlock --no-cache ./

# run docker image
$ sudo docker run --cap-add=SYS_PTRACE -it memlock /bin/bash

Usage

The command line is similar to AFL's.

To perform stack-memory-usage-guided fuzzing, compile the program with memlock-stack-clang and then run the following command line, as shown in the example tests/run_test1_MemLock.sh:

tool/MemLock/build/bin/memlock-stack-fuzz -i testcase_dir -o findings_dir -d -- /path/to/program @@

To perform heap-memory-usage-guided fuzzing, compile the program with memlock-heap-clang and then run the following command line, as shown in the example tests/run_test2_MemLock.sh:

tool/MemLock/build/bin/memlock-heap-fuzz -i testcase_dir -o findings_dir -d -- /path/to/program @@

Tests

Before using the MemLock fuzzer, we suggest first running the two simple examples we provide to check that MemLock works normally. The examples show how MemLock detects excessive memory consumption and why AFL cannot easily detect these bugs. Example 1 demonstrates an uncontrolled-recursion bug and Example 2 demonstrates an uncontrolled-memory-allocation bug.

Run for testing example 1

Example 1 demonstrates an uncontrolled-recursion bug. The function fact() in example1.c is recursive. With a sufficiently large recursion depth, the execution runs out of stack memory, causing a stack overflow. You can fuzz this example program with the following commands.
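
For reference, the core pattern looks roughly like the sketch below. This is a simplified, hypothetical illustration of the bug class, not the exact contents of example1.c: the recursion depth is taken directly from the program input.

// Illustrative sketch only (hypothetical code, not the exact example1.c):
// the recursion depth is controlled directly by the input value.
#include <cstdio>
#include <cstdlib>

static unsigned long long fact(unsigned long long n) {
    if (n <= 1) return 1;       // base case
    return n * fact(n - 1);     // each call consumes another stack frame
}

int main(int argc, char **argv) {
    if (argc < 2) return 0;
    // A sufficiently large attacker-chosen value exhausts the stack
    // before the base case is reached, causing a stack-overflow crash.
    unsigned long long n = std::strtoull(argv[1], nullptr, 10);
    std::printf("%llu\n", fact(n));
    return 0;
}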

# enter the tests folder
$ cd tests

# run testing example 1 with MemLock
$ ./run_test1_MemLock.sh

# run testing example 1 with AFL (Open another terminal)
$ ./run_test1_AFL.sh

In our experiments on example 1, MemLock finds crashes within a few minutes, while AFL does not find any.

Run for testing example 2

Example 2 demonstrates an uncontrolled-memory-allocation bug. At line 25 of example2.c, the length of the user input is fed directly into new []. By carefully handcrafting the input, an adversary can supply an arbitrarily large value, crashing the program (i.e., std::bad_alloc) or running it out of memory. You can fuzz this example program with the following commands.
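
The pattern is roughly the sketch below, again a simplified, hypothetical illustration rather than the exact contents of example2.c: an input-derived length flows into new [] without any bound check.

// Illustrative sketch only (hypothetical code, not the exact example2.c):
// the allocation size is derived directly from the input.
#include <cstdlib>
#include <iostream>

int main(int argc, char **argv) {
    if (argc < 2) return 0;
    // A handcrafted, arbitrarily large value makes new[] throw
    // std::bad_alloc or drives the process out of memory.
    std::size_t len = std::strtoull(argv[1], nullptr, 10);
    char *buf = new char[len];
    if (len > 0) buf[0] = '\0';
    std::cout << "allocated " << len << " bytes\n";
    delete[] buf;
    return 0;
}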

# enter the tests folder
$ cd tests

# run testing example 2 with MemLock
$ ./run_test2_MemLock.sh

# run testing example 2 with AFL (Open another terminal)
$ ./run_test2_AFL.sh

In our experiments on example 2, MemLock finds crashes within a few minutes, while AFL does not find any.

Evaluation

The folder evaluation contains all our evaluation subjects. After installing MemLock, you can run the provided scripts to build and instrument the subjects; once they are instrumented, you can run the fuzzing scripts on them.

Build Target Program

In the BUILD folder, you can run the script ./build_xxx.sh, which shows how to build and instrument the corresponding subject. For example:

# build cxxfilt
$ cd BUILD
$ ./build_cxxfilt.sh

Run for Fuzzing

After instrumenting the subjects, you can run the script ./run_MemLock_cxxfilt.sh in the FUZZ folder to start a MemLock fuzzer instance on the program cxxfilt. To compare its performance with AFL, open another terminal and run the script ./run_AFL_cxxfilt.sh.

# fuzz cxxfilt
$ cd FUZZ
$ ./run_MemLock_cxxfilt.sh

Publications

@inproceedings{wen2020memlock,
  Author = {Wen, Cheng and Wang, Haijun and Li, Yuekang and Qin, Shengchao and Liu, Yang and Xu, Zhiwu and Chen, Hongxu and Xie, Xiaofei and Pu, Geguang and Liu, Ting},
  Title = {MemLock: Memory Usage Guided Fuzzing},
  Booktitle = {2020 IEEE/ACM 42nd International Conference on Software Engineering},
  Year = {2020},
  Address = {Seoul, South Korea},
}

Practical Security Impact

CVE IDs Assigned by This Work (26 CVEs)

Our tools have found several security-critical vulnerabilities in widely used open-source projects and libraries, such as Binutils, Elfutils, Libtiff, and Mjs.

CVE ID | Package | Program | Vulnerability Type
CVE-2020-36375 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36374 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36373 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36372 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36371 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36370 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36369 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36368 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36367 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-36366 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2020-18392 | MJS 1.20.1 | mjs | CWE-674: Uncontrolled Recursion
CVE-2019-6293 | Flex 2.6.4 | flex | CWE-674: Uncontrolled Recursion
CVE-2019-6292 | Yaml-cpp v0.6.2 | prase | CWE-674: Uncontrolled Recursion
CVE-2019-6291 | NASM 2.14.03rc1 | nasm | CWE-674: Uncontrolled Recursion
CVE-2019-6290 | NASM 2.14.03rc1 | nasm | CWE-674: Uncontrolled Recursion
CVE-2018-18701 | Binutils 2.31 | nm | CWE-674: Uncontrolled Recursion
CVE-2018-18700 | Binutils 2.31 | nm | CWE-674: Uncontrolled Recursion
CVE-2018-18484 | Binutils 2.31 | c++filt | CWE-674: Uncontrolled Recursion
CVE-2018-17985 | Binutils 2.31 | c++filt | CWE-674: Uncontrolled Recursion
CVE-2019-7704 | Binaryen 1.38.22 | wasm-opt | CWE-789: Uncontrolled Memory Allocation
CVE-2019-7698 | Bento4 v1.5.1-627 | mp4dump | CWE-789: Uncontrolled Memory Allocation
CVE-2019-7148 | Elfutils 0.175 | eu-ar | CWE-789: Uncontrolled Memory Allocation
CVE-2018-20652 | Tinyexr v0.9.5 | tinyexr | CWE-789: Uncontrolled Memory Allocation
CVE-2018-18483 | Binutils 2.31 | c++filt | CWE-789: Uncontrolled Memory Allocation
CVE-2018-20657 | Binutils 2.31 | c++filt | CWE-401: Memory Leak
CVE-2018-20002 | Binutils 2.31 | nm | CWE-401: Memory Leak

Video

Links

Owner
Cheng Wen
I am a Ph.D. student at Shenzhen University. My research interests are in the areas of Cyber Security (SEC), Programming Languages (PL), and Software Engineering (SE).