Repository for benchmarking different post-hoc XAI explanation methods on image datasets. Here is a quick guide on how to install and use the repo. More information about installation and usage can be found in the documentation.
To install the project, follow these steps:

- Clone the repository:

  ```shell
  git clone https://github.com/yourusername/saliency-benchmark.git
  ```

- Navigate to the project directory:

  ```shell
  cd saliency-benchmark
  ```

- Create a virtual environment:

  ```shell
  python -m venv env
  ```

- Activate the virtual environment (on Windows):

  ```shell
  env/Scripts/activate
  ```

  On Linux/macOS, use `source env/bin/activate` instead.

- Install the dependencies:

  ```shell
  ./setup.bat
  ```

These steps will set up your working environment, install the necessary dependencies, and prepare you to run the project.
To train the networks using this repository, use the following command:

```shell
python3 train.py model=VGG11_Weights.IMAGENET1K_V1 dataset.name=cifar10 train.finetune=True
```

- `model`: Specifies the pre-trained model to use. The full list of available models can be found in the documentation.
- `dataset.name`: Specifies the dataset to use. The supported datasets are:
  - `cifar10`
  - `cifar100`
  - `caltech101`
  - `mnist`
  - `svhn`
  - `oxford-iiit-pet`
- `train.finetune`: Determines the training mode:
  - `True`: Fine-tunes the entire model.
  - `False`: Uses the model as a feature extractor.

These parameters allow you to customize the training process to your specific requirements. For detailed configuration, refer to or modify the `train` section of the `config.yaml` file.
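As a sketch, the corresponding part of `config.yaml` might look like the following. The field names simply mirror the command-line overrides above; the exact schema is an assumption, so check the `config.yaml` shipped with the repository:

```yaml
# Illustrative sketch only -- the exact schema is assumed, see the repo's config.yaml.
model: VGG11_Weights.IMAGENET1K_V1
dataset:
  name: cifar10    # one of: cifar10, cifar100, caltech101, mnist, svhn, oxford-iiit-pet
train:
  finetune: True   # True = fine-tune the whole model, False = use it as a feature extractor
```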
To evaluate the trained model, use the following command:

```shell
python3 test.py
```

You need to specify the following parameters in the `config.yaml` file:

- `model`: The pre-trained model to use.
- `dataset.name`: The dataset used for testing.
- `checkpoint`: Path to the model checkpoint. Choose from the model checkpoints available in the `checkpoints` folder.
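For illustration, those entries might look like this in `config.yaml` (the checkpoint path below is hypothetical, as is the exact field layout):

```yaml
# Illustrative sketch -- the field layout and the checkpoint path are assumptions.
model: VGG11_Weights.IMAGENET1K_V1
dataset:
  name: cifar10
checkpoint: checkpoints/your_checkpoint.ckpt   # hypothetical; pick a file from checkpoints/
```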
After training and testing the model, you can evaluate its explainability by using the following command:

```shell
python3 evaluate/evaluate_saliency.py
```

You need to specify the following parameters in the `config.yaml` file:

- `model`: The pre-trained model to use.
- `dataset.name`: The dataset used for testing.
- `checkpoint`: Path to the model checkpoint. Choose from the model checkpoints available in the `checkpoints` folder.
- `saliency.method`: Saliency method used for evaluating the model's explanations. The supported methods are `gradcam`, `rise`, `sidu`, and `lime`.
- `metrics.output_file`: Specifies the file name for saving the evaluation metrics.
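A common way saliency explanations are scored is a deletion-style metric: zero out the most salient pixels first and track how the model's confidence drops (a faster drop, i.e. a lower area under the curve, indicates a more faithful explanation). The sketch below is a generic, self-contained illustration with a toy scoring function; it is not this repository's implementation, and `deletion_curve` is a hypothetical helper:

```python
# Generic sketch of a deletion-style saliency metric (not this repo's code).
import numpy as np

def deletion_curve(image, saliency, model_score, steps=10):
    """Return model scores as the most salient pixels are progressively zeroed."""
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]  # pixel indices, most salient first
    scores = [model_score(image)]
    per_step = max(1, (h * w) // steps)
    masked = image.copy()
    for i in range(steps):
        idx = order[i * per_step:(i + 1) * per_step]
        ys, xs = np.unravel_index(idx, (h, w))
        masked[ys, xs] = 0.0                    # "delete" this batch of pixels
        scores.append(model_score(masked))
    return np.array(scores)

# Toy model whose "confidence" is just the mean brightness of the image.
score = lambda img: float(img.mean())
img = np.ones((8, 8))
sal = np.arange(64, dtype=float).reshape(8, 8)  # bottom-right pixels "most salient"
curve = deletion_curve(img, sal, score, steps=8)
```

With this toy setup the curve decreases monotonically from 1.0 to 0.0; summarizing it with its area under the curve gives a single faithfulness score per saliency method.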
You can evaluate the object detection method by using the following command:

```shell
python3 evaluate/evaluate_detector.py
```