This repository provides a new implementation of the paper *Fooling Neural Network Interpretations via Adversarial Model Manipulation* (https://arxiv.org/abs/1902.02041, https://github.com/rmrisforbidden/Fooling_Neural_Network-Interpretations). It includes code for fooling deep neural networks, including recently published architectures, as well as various CAM-based interpretation methods implemented in the TorchCAM repository (https://github.com/frgfm/torch-cam). The implementation is built on PyTorch Lightning.
```bash
# clone project
git clone https://github.com/joshua840/AMM.git

# create environment
conda env create -f env.yaml
conda activate torch2.5_cuda12.4
```

- Update `configs/AMM.yaml`
  - Specify values for `dataset`, `data_dir`, `model`, and `h_target_layer`.
- Update the `Logger` option in `configs/trainer.yaml`
  - Using `NeptuneLogger`
    - Set `api_key`, `project`, and `name` by following the instructions in https://docs.neptune.ai/setup/.
  - Using the other loggers
    - Check the supported loggers in https://lightning.ai/docs/pytorch/stable/extensions/logging.html
    - Loggers such as `WandB`, `Comet`, `TensorBoard`, and others are available.
- (Optional) Mini ImageNet dataset (4 GB) for a demo run
  - Download it from https://www.kaggle.com/datasets/ifigotin/imagenetmini-1000?resource=download
  - Set `dataset=imagenet` and `data_dir=PATH/TO/ImageNet`
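As an illustration, a filled-in `configs/AMM.yaml` might look like the sketch below. Only the four key names come from this README; the `model` and `h_target_layer` values are hypothetical placeholders, so check the repository for the actually supported options.

```yaml
# Hypothetical sketch of configs/AMM.yaml -- key names from this README,
# values are placeholders (model and h_target_layer are assumptions).
dataset: imagenet
data_dir: PATH/TO/ImageNet
model: resnet18
h_target_layer: layer4
```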
After finishing the settings above, you can directly run the following:
```bash
# run experiments
bash scripts/amm.sh
```

The checkpoints will be saved in the `.neptune` directory.
In Lightning, the `Trainer` class includes arguments that are commonly used for model training. For more details on the `Trainer` class, please refer to the API documentation: https://lightning.ai/docs/pytorch/stable/common/trainer.html.
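For reference, a `configs/trainer.yaml` configured for `NeptuneLogger` could look like the following sketch. The `class_path`/`init_args` layout follows Lightning's standard CLI config convention, but the concrete keys and values here are placeholders, not the repository's actual file.

```yaml
# Hypothetical sketch of configs/trainer.yaml; top-level keys mirror
# lightning.pytorch.Trainer arguments, values are placeholders.
max_epochs: 100
accelerator: gpu
devices: 1
logger:
  class_path: lightning.pytorch.loggers.NeptuneLogger
  init_args:
    api_key: YOUR_API_KEY
    project: your-workspace/your-project
    name: amm-run
```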
The full argument list is also available via the following command:

```bash
python -m src.main -h
```

In Lightning, the arguments of module classes inheriting from `lightning.pytorch.LightningModule` are automatically registered in the argparse lists. This feature keeps the code clean by eliminating redundant argparse declarations.
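The idea behind this auto-registration can be sketched with the standard library alone. The snippet below is an illustration of the mechanism, not the repository's actual code: it inspects a constructor's signature and turns each parameter into a CLI option, the way Lightning's CLI does for `LightningModule` subclasses (the class and parameter names here are hypothetical).

```python
# Illustrative sketch: auto-register a class's __init__ parameters as
# argparse options, mimicking Lightning's CLI behavior.
import argparse
import inspect


class DemoModule:
    # Stand-in for a LightningModule; names and defaults are hypothetical.
    def __init__(self, lr: float = 1e-3, batch_size: int = 32):
        self.lr = lr
        self.batch_size = batch_size


def register_init_args(parser: argparse.ArgumentParser, cls) -> None:
    """Add one --option per constructor parameter, typed from its default."""
    for name, param in inspect.signature(cls.__init__).parameters.items():
        if name == "self":
            continue
        parser.add_argument(f"--{name}", type=type(param.default),
                            default=param.default)


parser = argparse.ArgumentParser()
register_init_args(parser, DemoModule)
args = parser.parse_args(["--lr", "0.01"])
print(args.lr, args.batch_size)  # 0.01 32
```

Because the options are derived from the signature, adding a new hyperparameter to the module's `__init__` automatically exposes it on the command line with no extra argparse code.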
Another advantage of Lightning is its built-in support for passing hyperparameters, including class-level arguments, via YAML files.
For more details, please refer to the Lightning tutorials:
- (https://lightning.ai/docs/pytorch/stable/common/hyperparameters.html)
- (https://lightning.ai/docs/pytorch/stable/cli/lightning_cli_intermediate.html)
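As a small illustration of this pattern (file name and values are hypothetical, not from this repository), a single YAML file can group trainer-level and model-level hyperparameters, which Lightning's CLI then maps onto the corresponding constructor arguments:

```yaml
# hypothetical config.yaml; section names follow LightningCLI conventions
trainer:
  max_epochs: 10
model:
  lr: 0.001
  batch_size: 32
```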
```bibtex
@article{YourName,
  title={Your Title},
  author={Your team},
  journal={Location},
  year={Year}
}
```