TVSD Benchmark

This repo contains tools for loading and benchmarking models on the TVSD (THINGS Ventral Stream Spiking Dataset) from Papale et al. 2025.

Setup

Begin by cloning the repository.

git clone [email protected]:serre-lab/tvsd-benchmark.git
cd tvsd-benchmark

Next, create a conda environment with our requirements.

conda create -n tvsd-benchmark python
conda activate tvsd-benchmark
pip install -r requirements.txt

Alternatively, you can use a venv environment.

python -m venv env
source env/bin/activate
pip install -r requirements.txt

To obtain the TVSD dataset, run

chmod +x scripts/download_tvsd.sh
./scripts/download_tvsd.sh

This will download the normalized MUA and metadata .mat files into a new data directory. To obtain the THINGS dataset, run the analogous snippet below. osfclient will prompt you for a password in order to unzip the dataset; you can easily obtain this password here.

chmod +x scripts/download_things.sh
./scripts/download_things.sh
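Once the downloads finish, you may want to sanity-check the .mat files before benchmarking. The sketch below shows one way to inspect them with scipy; the actual TVSD filenames under data depend on the download script, so a synthetic file stands in here to keep the example self-contained.

```python
# A minimal, self-contained sketch of inspecting a downloaded .mat
# file with scipy. The real TVSD filenames under data/ depend on the
# download script, so a synthetic stand-in file is created here.
import numpy as np
from scipy.io import loadmat, savemat

# Hypothetical stand-in for one of the downloaded files.
savemat("example_mua.mat", {"mua": np.zeros((10, 5))})

contents = loadmat("example_mua.mat")
# Keys beginning with "__" are MATLAB header metadata, not data arrays.
arrays = {k: v for k, v in contents.items() if not k.startswith("__")}
print({k: v.shape for k, v in arrays.items()})  # {'mua': (10, 5)}
```

Note that MATLAB v7.3 files are HDF5-based and must be opened with h5py rather than scipy.io.loadmat.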

Benchmarking a Model

Ensure that your virtual environment is activated, then run

sbatch scripts/generate_activations.sh [MODEL_CONFIG_PATH]

When this completes, run

sbatch scripts/benchmark.sh [MODEL_CONFIG_PATH]

(We separate the two jobs because only the former requires a GPU.) The results will populate outputs/results/[model].
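After the benchmark job completes, you can gather its outputs programmatically. The helper below is a hypothetical sketch: only the outputs/results/[model] layout comes from this README, and the function name, model name, and file formats are assumptions.

```python
# Hypothetical helper for collecting benchmark outputs once
# scripts/benchmark.sh has finished. Only the outputs/results/<model>
# directory layout comes from the README; the rest is an assumption.
from pathlib import Path

def list_results(model: str, root: str = "outputs/results") -> list[Path]:
    """Return all result files written for `model`, or [] if none exist yet."""
    model_dir = Path(root) / model
    if not model_dir.exists():
        return []
    return sorted(p for p in model_dir.rglob("*") if p.is_file())

print(list_results("my_model"))  # [] before the benchmark job has run
```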
