This is the official code base for the EMNLP 2021 paper, "Multi-granularity Textual Adversarial Attack with Behavior Cloning".
Here are brief introductions to the main folders.
The "models" folder contains a file defining the victim model structure (a uniform interface to be called) and a sub-folder holding pre-trained victim models.
The "MG" folder contains Python files to attack the victim model and a sub-folder for training a reinforcement learning model.
The "dataset" folder contains different datasets and provides dataset templates for the attack process.
A separate folder contains Python files to evaluate adversarial samples.
The "paraphrase_models" folder holds the GPT2 paraphrase model (not needed if you do not use the GPT2 model).
git clone https://github.com/Yangyi-Chen/MAYA
https://drive.google.com/drive/folders/1RmiXX8u1ojj4jorj_QgxOWWkryDIdie-
If you want to use the GPT2 paraphraser, download these files and put them all in the "paraphrase_models/style_transfer_paraphrase/" directory.
To run a MAYA attack, you need to implement the uniform interface defined in "models.py". All you have to do is implement the `__call__` function. Several victim model classes (BiLSTM, BERT, and RoBERTa) can be used directly, so you only need to prepare your own victim model.
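As a rough illustration, a custom victim model might look like the sketch below. This is a minimal toy sketch, not the actual interface from "models.py": the method name (`__call__`), the input (a list of sentences), and the output (one probability distribution per sentence) are assumptions, and the keyword-spotting "classifier" is purely illustrative.

```python
# Hypothetical sketch of the uniform victim-model interface.
# Assumptions (not taken from "models.py"): __call__ receives a list of
# sentences and returns one per-class probability list per sentence.

class MyVictimModel:
    """A toy binary sentiment classifier wrapped in the assumed interface."""

    def __init__(self, positive_words=("good", "great", "nice")):
        self.positive_words = set(positive_words)

    def __call__(self, sentences):
        # Return one probability distribution [p_negative, p_positive]
        # per input sentence.
        probs = []
        for sent in sentences:
            hits = sum(w in self.positive_words for w in sent.lower().split())
            p_pos = min(0.5 + 0.1 * hits, 1.0)
            probs.append([1.0 - p_pos, p_pos])
        return probs

model = MyVictimModel()
print(model(["a great and nice movie", "a dull film"]))
```

In practice you would replace the keyword heuristic with a forward pass of your trained classifier, keeping the same call signature.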
You can download a pre-trained victim model (SST-2 BERT) here and put it in "models/pretrained_models/bert_for_sst2".
We use the TSV format to store datasets. You can find more information in the "dataset" folder.
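For illustration, TSV files can be parsed with Python's standard `csv` module. The two-column (sentence, label) layout below is an assumption for the sketch; check the templates in the "dataset" folder for the actual layout.

```python
# Minimal sketch of reading a TSV dataset with the standard csv module.
# The (sentence, label) column layout is an assumption for illustration;
# the real files in the "dataset" folder may differ.
import csv
import io

tsv_text = "sentence\tlabel\na great movie\t1\na dull film\t0\n"

rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
sentences = [r["sentence"] for r in rows]
labels = [int(r["label"]) for r in rows]
print(sentences, labels)
```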
Visit https://fanyi-api.baidu.com/product/11 to register and apply for your application ID and secret key, then fill them in in "BaiduTransAPI_forPython3.py" in the "MG" folder.
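For reference, Baidu's general translation API signs each request with an MD5 digest of the application ID, the query, a random salt, and the secret key. The sketch below shows only that signing step; the credentials are placeholders, and the function name is hypothetical (the actual request logic lives in "BaiduTransAPI_forPython3.py").

```python
# Sketch of the Baidu Translate API request signature:
# sign = MD5(appid + query + salt + secret_key), as lowercase hex.
# The appid/secret_key values below are placeholders, not real credentials.
import hashlib
import random

def baidu_sign(appid, query, secret_key, salt=None):
    if salt is None:
        salt = str(random.randint(32768, 65536))
    raw = (appid + query + salt + secret_key).encode("utf-8")
    return salt, hashlib.md5(raw).hexdigest()

salt, sign = baidu_sign("20210000000000001", "hello world", "my_secret_key")
print(salt, sign)
```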
Now you can start a MAYA attack by directly running "attack.py".
You can set hyperparameters in "attack.py", such as the paraphrasing method, the number of attacks, etc.
python attack.py
Note: Some additional packages are required to run the script. Install them following the corresponding prompts.
We provide several checkpoints of our pre-trained RL model for you to use (https://drive.google.com/drive/folders/1GfWs8YN9hRPwN7CmTrgfuh028KWqrH2M).
See "attack.py" for how to use them.
If you find this work useful, please cite:
@article{chen2021multi,
title={Multi-granularity Textual Adversarial Attack with Behavior Cloning},
author={Chen, Yangyi and Su, Jin and Wei, Wei},
journal={arXiv preprint arXiv:2109.04367},
year={2021}
}