Randomized smoothing adversarial defense for ASR models, with original enhancement and voting strategies to limit the drop in performance. This code goes along with the EMNLP 2021 article "Sequential Randomized Smoothing for Adversarially Robust Speech Recognition".
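The core idea can be sketched in a few lines: transcribe several Gaussian-noise-perturbed copies of the input and vote over the outputs. This is an illustrative, utterance-level sketch only, not the repo's implementation (the paper's voting is ROVER-based and operates at the word level); `transcribe` is a placeholder for any ASR decode function.

```python
import random
from collections import Counter

def smoothed_transcribe(audio, transcribe, sigma=0.01, n_votes=5, seed=0):
    """Illustrative randomized-smoothing decode: transcribe several
    Gaussian-noise-perturbed copies of `audio` and return the majority
    transcription. A simplified sketch of the defense, not the paper's
    word-level ROVER voting."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_votes):
        # Perturb each sample with i.i.d. Gaussian noise of std sigma.
        noisy = [x + rng.gauss(0.0, sigma) for x in audio]
        votes.append(transcribe(noisy))
    # Majority vote over the decoded transcriptions.
    return Counter(votes).most_common(1)[0][0]
```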
We use armory to run adversarial experiments in a controlled environment. Please refer to their README for setup.
To install all requirements we use a custom Docker image. You can build it with the following command:
docker build -t smoothing/pytorch-asr --build-arg armory_version=0.13.4 docker/
The tag is important, as it is referred to in our config files.
If you do not wish to use docker or armory, please refer to the Dockerfile for the full list of package and library installation commands.
Using armory you can run any of our config files. For instance:
armory run configs/pgd/10/g1_trained_rover.json --num-eval-batches 100
Or write your own config files for custom experiments.
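A convenient way to build custom configs is to derive them from one of ours programmatically. The sketch below assumes only that armory configs are plain JSON files (which they are); the specific keys you override depend on your armory version's schema, so the override shown in the test is purely hypothetical.

```python
import json

def derive_config(base_path, out_path, overrides):
    """Load an armory JSON config, apply top-level key overrides, and
    write the result to `out_path`, ready for `armory run <out_path>`.
    Which keys are valid depends on your armory version's schema."""
    with open(base_path) as f:
        cfg = json.load(f)
    cfg.update(overrides)
    with open(out_path, "w") as f:
        json.dump(cfg, f, indent=4)
    return cfg
```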
You will find all our pretrained models and some auxiliary files here. Dump them in the saved models folder you set up when configuring armory.
The export_samples field in armory configuration files lets you export audio adversarial examples. We share some of these files, generated by running the attacks described in our paper against our proposed defense.
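When inspecting exported adversarial examples, a common sanity check is the signal-to-noise ratio of the perturbation relative to the clean audio. The helper below is a small self-contained sketch (not part of this repo or of armory); it takes two equal-length sample sequences as plain Python lists.

```python
import math

def snr_db(clean, adversarial):
    """Signal-to-noise ratio of an adversarial perturbation, in dB:
    10 * log10(||clean||^2 / ||clean - adversarial||^2).
    Higher values indicate a smaller, less audible perturbation."""
    signal = sum(x * x for x in clean)
    noise = sum((x - y) ** 2 for x, y in zip(clean, adversarial))
    if noise == 0:
        return float("inf")  # identical signals: no perturbation
    return 10.0 * math.log10(signal / noise)
```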
If you use this code please cite our paper:
@inproceedings{olivier-raj-2021-sequential,
title = "Sequential Randomized Smoothing for Adversarially Robust Speech Recognition",
author = "Olivier, Raphael and
Raj, Bhiksha",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.514",
pages = "6372--6386",
}