Commit: Switch to GitHub Actions.

vtjeng committed Mar 19, 2021
1 parent f386825 commit 9035e49
Showing 4 changed files with 102 additions and 48 deletions.
78 changes: 78 additions & 0 deletions .github/workflows/CI.yml
@@ -0,0 +1,78 @@
name: CI

# From: https://github.com/JuliaDocs/Documenter.jl/blob/master/.github/workflows/CI.yml
# + https://discourse.julialang.org/t/easy-workflow-file-for-setting-up-github-actions-ci-for-your-julia-package/49765
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  test:
    name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        version:
          # we are excluding nightly builds because they can fail, and we want this to be
          # a required check
          - "1.0" # minimum Julia version that this package supports
          - "1" # automatically expands to latest stable 1.x release of Julia
        os:
          - ubuntu-latest
          - macos-latest
          - windows-latest
        arch:
          - x64
        include:
          - os: ubuntu-latest
            version: "1"
            arch: x86
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@v1
        with:
          version: ${{ matrix.version }}
          arch: ${{ matrix.arch }}
          show-versioninfo: true
      - uses: actions/cache@v1
        env:
          cache-name: cache-artifacts
        with:
          path: ~/.julia/artifacts
          key: ${{ runner.os }}-test-${{ env.cache-name }}-${{ hashFiles('**/Project.toml') }}
          restore-keys: |
            ${{ runner.os }}-test-${{ env.cache-name }}-
            ${{ runner.os }}-test-
            ${{ runner.os }}-
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1
      - uses: julia-actions/julia-processcoverage@v1
      - uses: codecov/codecov-action@v1
        with:
          file: lcov.info

  docs:
    name: Documentation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@v1
        with:
          version: "1"
      - run: |
          julia --project=docs -e '
            using Pkg
            Pkg.develop(PackageSpec(path=pwd()))
            Pkg.instantiate()'
      - run: |
          julia --project=docs -e '
            using Documenter: doctest
            using MIPVerify
            doctest(MIPVerify)'
      - run: julia --project=docs docs/make.jl
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # I created a `DOCUMENTER_KEY` secret, via https://discourse.julialang.org/t/easy-workflow-file-for-setting-up-github-actions-ci-for-your-julia-package/49765/46
          DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }}
30 changes: 0 additions & 30 deletions .travis.yml

This file was deleted.

39 changes: 22 additions & 17 deletions README.md
@@ -1,6 +1,6 @@
# MIPVerify.jl

[![build status](https://travis-ci.com/vtjeng/MIPVerify.jl.svg?branch=master)](https://travis-ci.com/vtjeng/MIPVerify.jl)
[![CI](https://github.com/vtjeng/MIPVerify.jl/workflows/CI/badge.svg)](https://github.com/vtjeng/MIPVerify.jl/actions?query=workflow%3ACI)
[![code coverage](https://codecov.io/gh/vtjeng/MIPVerify.jl/branch/master/graph/badge.svg)](http://codecov.io/github/vtjeng/MIPVerify.jl?branch=master)
[![code formatting check status](https://github.com/vtjeng/MIPVerify.jl/workflows/JuliaFormatter/badge.svg?branch=master)](https://github.com/vtjeng/MIPVerify.jl/actions?query=workflow%3AJuliaFormatter+branch%3Amaster)
[![docs: stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://vtjeng.github.io/MIPVerify.jl/stable)
@@ -13,9 +13,11 @@ _Vincent Tjeng, Kai Xiao, Russ Tedrake_
https://arxiv.org/abs/1711.07356

## Getting Started

See the [documentation](https://vtjeng.github.io/MIPVerify.jl/latest) for [installation instructions](https://vtjeng.github.io/MIPVerify.jl/latest/#Installation-1), a [quick-start guide](https://nbviewer.jupyter.org/github/vtjeng/MIPVerify.jl/blob/master/examples/00_quickstart.ipynb), and [additional examples](https://nbviewer.jupyter.org/github/vtjeng/MIPVerify.jl/tree/master/examples/). Installation should only take a couple of minutes, including installing Julia itself.
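
As a concrete sketch of the install flow (an assumption based on the registered package name `MIPVerify`; see the linked installation instructions for the authoritative steps, and note that a JuMP-supported MILP optimizer such as Cbc or Gurobi is installed separately):

```julia
# Minimal install sketch: add the registered package and check that it loads.
using Pkg
Pkg.add("MIPVerify")

using MIPVerify
```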

## Why Verify Neural Networks?

Neural networks trained only to optimize for training accuracy have been shown to be vulnerable to _adversarial examples_, with small perturbations to input potentially leading to large changes in the output. In the context of image classification, the perturbed input is often indistinguishable from the original input, but can lead to misclassifications into any target category chosen by the adversary.

There is now a large body of work proposing defense methods to produce classifiers that are more robust to adversarial examples. However, as long as a defense is evaluated only via attacks that find local optima, we have no guarantee that the defense actually increases the robustness of the classifier produced.
@@ -25,32 +27,35 @@ Fortunately, we _can_ evaluate robustness to adversarial examples in a principle
Determining the minimum adversarial distortion for some input (or proving that no bounded perturbation of that input causes a misclassification) corresponds to solving an optimization problem. For piecewise-linear neural networks, the optimization problem can be expressed as a mixed-integer linear programming (MILP) problem.
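
To make the reduction concrete, here is one standard tight mixed-integer encoding of a single ReLU unit y = max(x, 0) with known finite pre-activation bounds l ≤ x ≤ u, l < 0 < u (a sketch of the kind of constraint set used; the binary variable a selects which branch of the ReLU is active):

```latex
% Encoding of y = max(x, 0) given bounds l <= x <= u with l < 0 < u:
%   a = 1 forces y = x (active branch), a = 0 forces y = 0 (inactive branch).
y \ge 0, \qquad y \ge x, \qquad
y \le x - l\,(1 - a), \qquad
y \le u\,a, \qquad
a \in \{0, 1\}
```

Applying such an encoding to every ReLU and maximum unit, while keeping the linear layers as linear constraints, yields an MILP whose optimal value is the minimum adversarial distortion.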

## Features

`MIPVerify.jl` translates your query on the robustness of a neural network for some input into an MILP problem, which can then be solved by any optimizer supported by [JuMP](https://github.com/JuliaOpt/JuMP.jl). Efficient solves are enabled by tight specification of ReLU and maximum constraints, and by a progressive bounds-tightening approach where time is spent refining bounds only if doing so could provide additional information that improves the problem formulation.
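
For illustration, a robustness query might look roughly like the sketch below. The function names follow the package's quickstart notebook, but the optimizer choice (Cbc), its options, and the exact argument list of `find_adversarial_example` are assumptions that may differ between versions:

```julia
# Sketch of a typical query (argument details are assumed; see the quickstart for the current API).
using MIPVerify
using Cbc

mnist = MIPVerify.read_datasets("MNIST")                  # bundled MNIST dataset
n1 = MIPVerify.get_example_network_params("MNIST.n1")     # small sample network
sample_image = MIPVerify.get_image(mnist.test.images, 1)  # first test image

# Ask whether the image can be perturbed so that the network outputs target label index 10.
d = MIPVerify.find_adversarial_example(
    n1,
    sample_image,
    10,
    Cbc.Optimizer,
    Dict("logLevel" => 0),  # options passed through to the optimizer (assumed form)
)
```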

The package provides

- High-level abstractions for common types of neural network layers:
- Layers that are linear transformations (fully-connected, convolution, and average-pooling layers)
- Layers that use piecewise-linear functions (ReLU and maximum-pooling layers)
- Support for bounding perturbations to:
- Perturbations of bounded l-infty norm
- Perturbations where the image is convolved with an adversarial blurring kernel
- Utility functions for:
- Evaluating the robustness of a network on multiple samples in a dataset, with good support for pausing and resuming evaluation or running optimizers with different parameters
- MNIST and CIFAR10 datasets for verification
- Sample neural networks, including the networks verified in our paper.

## Results in Brief

Below is a modified version of Table 1 from our paper, where we report the adversarial error for classifiers to bounded perturbations with l-infinity norm-bound `eps`. For our verifier, a time limit of 120s per sample is imposed. Gaps between our bounds correspond to cases where the optimizer reached the time limit for some samples. Error is over the full MNIST test set of 10,000 samples.

| Dataset | Training Approach | `eps` | Lower<br>Bound<br>(PGD Error) | Lower<br>Bound<br>(ours) | Upper<br>Bound<br>(SOA)\^ | Upper<br>Bound<br>(ours) | Name in package\* |
| ------- | ------------------------------------------------------------ | ----- | ----------------------------- | ------------------------ | ------------------------- | ------------------------ | ------------------------------ |
| MNIST | [Wong et al. (2017)](https://arxiv.org/abs/1711.00851) | 0.1 | 4.11% | **4.38%** | 5.82% | **4.38%** | `MNIST.WK17a_linf0.1_authors` |
| MNIST | [Ragunathan et al. (2018)](https://arxiv.org/abs/1801.09344) | 0.1 | 11.51% | **14.36%** | 34.77% | **30.81%** | `MNIST.RSL18a_linf0.1_authors` |

\^ Values in this column represent previous state-of-the-art (SOA), as described in our paper.<br> \* Neural network available for import via listed name using `get_example_network_params`.
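
As a short sketch of the footnoted import path (the accuracy check via `frac_correct` follows the tutorial notebooks and is an assumption about the current API):

```julia
using MIPVerify

# Import a verified network from the table by its listed name, then check clean test accuracy.
nn = MIPVerify.get_example_network_params("MNIST.WK17a_linf0.1_authors")
mnist = MIPVerify.read_datasets("MNIST")
println(MIPVerify.frac_correct(nn, mnist.test, 10_000))  # fraction of the 10,000 test samples classified correctly
```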

## Citing this Library

```
@article{tjeng2017evaluating,
  title={Evaluating Robustness of Neural Networks with Mixed Integer Programming},
  author={Tjeng, Vincent and Xiao, Kai and Tedrake, Russ},
  journal={arXiv preprint arXiv:1711.07356},
  year={2017}
}
```
3 changes: 2 additions & 1 deletion docs/Project.toml
@@ -1,5 +1,6 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
MIPVerify = "e5e5f8be-2a6a-5994-adbb-5afbd0e30425"

[compat]
Documenter = "~0.23"
