Low accuracy when finetuning with pretrained weights on TUAB dataset #16

@alskdjfasdfsadf

Description

Hello! Thank you for sharing this excellent work and making the code publicly available. I'm having some difficulty reproducing the results reported in the paper when finetuning with the provided pretrained weights on the TUAB dataset.

Setup:
I followed the exact same setup as described in the paper and used the pretrained weights directly from this repository. However, I noticed that the learning rate for finetuning wasn't specified in the paper, so I used 0.0001.

Command used:
python3 finetune_main.py \
    --downstream_dataset TUAB \
    --datasets_dir '/remotenas0/database/TUH_Corpus/tuh_eeg_abnormal/v3.0.1/edf/process_refine' \
    --foundation_dir '/home/jwkim/Desktop/CBraMod-main/pretrained_weights.pth' \
    --model_dir './results' \
    --num_of_classes 2 \
    --cuda 0 \
    --epochs 50 \
    --batch_size 16 \
    --lr 0.0001 \
    --use_pretrained_weights True
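
Since the finetuning learning rate isn't documented, I'm also planning a small sweep over it. A minimal sketch of what I have in mind (same finetune_main.py flags as above; the LR grid and the per-run --model_dir naming are my own choices, not from the repo):

import subprocess

# Sweep a few candidate finetuning learning rates; all other flags
# match the single run above.
for lr in ["1e-5", "3e-5", "1e-4", "3e-4"]:
    subprocess.run(
        [
            "python3", "finetune_main.py",
            "--downstream_dataset", "TUAB",
            "--datasets_dir", "/remotenas0/database/TUH_Corpus/tuh_eeg_abnormal/v3.0.1/edf/process_refine",
            "--foundation_dir", "/home/jwkim/Desktop/CBraMod-main/pretrained_weights.pth",
            "--model_dir", f"./results_lr_{lr}",  # separate output dir per run (my convention)
            "--num_of_classes", "2",
            "--cuda", "0",
            "--epochs", "50",
            "--batch_size", "16",
            "--lr", lr,
            "--use_pretrained_weights", "True",
        ],
        check=True,  # stop the sweep if any run fails
    )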

Issue:
The paper reports an accuracy of 0.8289 on the TUAB dataset, but the best I'm getting is:
Val Evaluation: acc: 0.80386, pr_auc: 0.89600, roc_auc: 0.87455
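
In case run-to-run variance explains part of the gap, I also pin random seeds before training. A minimal sketch (standard Python/NumPy/PyTorch seeding, not specific to this repo; I don't know whether finetune_main.py already seeds internally):

import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    # Seed every RNG that typically affects training.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic cuDNN kernels: slower, but reproducible.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)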

Questions:

  1. Could you please share the learning rate used for finetuning in your experiments?
  2. Are there any additional hyperparameters or preprocessing steps that might not be documented but are crucial for reproducing the results?

I would be very grateful for any guidance you could provide to help me reproduce the paper's performance. Thank you for your time and for making this valuable research available to the community!
