
One Pixel Attack

Reproduce One Pixel Attack in two steps with RussellCloud

Step 1:

Sign up for a platform account and create a project named one-pixel-attack-keras. Then install the RussellCloud CLI:

pip install -U russell-cli

Step 2:

Clone the repository and reproduce:

git clone git@github.com:RussellCloud/one-pixel-attack-keras.git
cd one-pixel-attack-keras
russell login
russell init --name one-pixel-attack-keras
russell run --mode jupyter --data 9727e7f8109f46a49823ccc35b2d9959:cifar-10

Who would win?

How simple is it to cause a deep neural network to misclassify an image if we are only allowed to modify the color of one pixel and only see the prediction probability? Turns out it is very simple. In many cases, we can even cause the network to return any answer we want.

The following project is a Keras reimplementation and tutorial of "One pixel attack for fooling deep neural networks".

How It Works

For this attack, we will use the CIFAR-10 dataset. The task is to correctly classify a 32x32-pixel image as one of 10 categories (e.g., bird, deer, truck). The black-box attack requires only the probability labels (the probability value for each category) output by the neural network. We generate adversarial images by selecting a single pixel and modifying it to a certain color.
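As a rough illustration (not the repository's exact code), a candidate perturbation can be encoded as a 5-tuple (x, y, r, g, b) and applied to an image as shown below; the perturb_image helper and the row/column ordering are assumptions made for this sketch.

import numpy as np

def perturb_image(perturbation, image):
    # Apply one (x, y, r, g, b) candidate to a copy of a 32x32 RGB image.
    x, y, r, g, b = perturbation.astype(int)
    perturbed = image.copy()
    perturbed[x, y] = (r, g, b)  # overwrite a single pixel with the candidate color
    return perturbed

# Example: paint the center pixel bright red on a random stand-in image.
image = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
adversarial = perturb_image(np.array([16, 16, 255, 0, 0]), image)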

By using an Evolutionary Algorithm called Differential Evolution (DE), we can iteratively generate adversarial images to try to minimize the confidence (probability) of the neural network's classification.
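One way to run this search is with SciPy's differential_evolution, minimizing the probability the model assigns to the true class. The sketch below is illustrative rather than the notebook's exact settings: model, image, and true_class are assumed to be a trained Keras classifier, a CIFAR-10 image, and its label, the perturb_image helper is the one sketched above, and the DE hyperparameters and the [0, 1] input scaling are assumptions.

import numpy as np
from scipy.optimize import differential_evolution

def true_class_confidence(perturbation, image, true_class, model):
    # Lower is better: DE tries to minimize the model's confidence in the true class.
    perturbed = perturb_image(perturbation, image)
    probs = model.predict(perturbed[np.newaxis] / 255.0, verbose=0)[0]  # assumes inputs scaled to [0, 1]
    return probs[true_class]

# Search space: pixel coordinates in [0, 32) and RGB values in [0, 256).
bounds = [(0, 32), (0, 32), (0, 256), (0, 256), (0, 256)]

result = differential_evolution(
    true_class_confidence, bounds,
    args=(image, true_class, model),
    maxiter=75, popsize=80,        # illustrative search budget, not the paper's
    recombination=1.0, polish=False,
)
best_perturbation = result.x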

[Animation: differential evolution optimizing the Ackley function]

First, generate several adversarial samples, each modifying a random pixel, and run the images through the neural network. Next, combine the previous pixels' positions and colors, generate several more adversarial samples from them, and run the new images through the network. If any of the new pixels lowered the network's confidence compared to the last step, keep them as the current best-known solutions. Repeat these steps for a few iterations; then, on the last step, return the adversarial image that reduced the network's confidence the most. If the attack succeeded, the confidence has been reduced so much that a new (incorrect) category now has the highest classification confidence.
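Concretely, an untargeted attack counts as a success when the perturbed image's top predicted class is no longer the true class. A minimal check, again assuming the perturb_image helper and model from the sketches above, might look like:

def attack_succeeded(perturbation, image, true_class, model):
    # Untargeted success: the highest-probability class is no longer the true class.
    perturbed = perturb_image(perturbation, image)
    probs = model.predict(perturbed[np.newaxis] / 255.0, verbose=0)[0]
    return np.argmax(probs) != true_class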

See below for some examples of successful attacks:

[Example images of successful one-pixel attacks]

Getting Started

A dedicated GPU suitable for running Keras is recommended for the tutorial. Alternatively, you can view the tutorial notebook on GitHub.

  1. Clone the repository.
git clone https://github.com/Hyperparticle/one-pixel-attack-keras
cd ./one-pixel-attack-keras
  2. Install the Python packages in requirements.txt if you don't have them already.
pip install -r ./requirements.txt
  3. Run the iPython tutorial notebook with Jupyter.
jupyter notebook ./one-pixel-attack.ipynb

Results

Preliminary results after running several experiments:

Untargeted attack using 1, 2, and 3 pixel perturbations on 100 samples each:

| Model              | Parameters | Test accuracy | Attack success rate |
|--------------------|------------|---------------|---------------------|
| Lecun Net          | 62K        | 74.9%         | 34.4%               |
| Pure CNN            | 1.4M       | 88.7%         | 26.3%               |
| Network in Network | 970K       | 90.7%         | 30.7%               |
| ResNet             | 470K       | 92.3%         | 23.3%               |
| CapsNet            | 12M        | 65.7%         | 21.0%               |

The success rate is much lower than demonstrated in the paper, but that's mostly due to an inefficient differential evolution implementation. This should be fixed soon.

It appears that the capsule network CapsNet, while more resilient to the one pixel attack than all other CNNs, is still vulnerable.
