[Figure: comparison between (a) a sufficient representation and (b) a minimal sufficient representation in conventional contrastive learning]
[Figure: feature visualization]
- Python 3.9
- CUDA 12.1
- PyTorch 2.3.1
- Required libraries are listed in requirements.txt.
pip install -r requirements.txt
Download the SleepEDF20 and MASS3 datasets and put them in the data directory.
Convert the data to .npz format:
python Preprocessing.py
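As an illustration of what the preprocessing step produces, the sketch below saves epoched signals and their sleep-stage labels as a compressed .npz file. The array names (x, y), epoch length, and sampling rate are assumptions for the example, not the repository's actual keys:

```python
import numpy as np

def save_recording_as_npz(signals, labels, out_path):
    """Save one recording's epochs and sleep-stage labels as a compressed .npz file.

    signals: array of shape (n_epochs, n_samples), e.g. 30 s EEG epochs.
    labels:  array of shape (n_epochs,), integer sleep stages.
    """
    np.savez_compressed(out_path, x=signals, y=labels)

# Example: two fake 30 s epochs sampled at 100 Hz (3000 samples each)
x = np.random.randn(2, 3000).astype(np.float32)
y = np.array([0, 2], dtype=np.int64)
save_recording_as_npz(x, y, "example.npz")

loaded = np.load("example.npz")
print(loaded["x"].shape, loaded["y"].shape)  # (2, 3000) (2,)
```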
Our model consists of a pretraining and a fine-tuning stage.
First, the model's feature extractor learns domain-invariant features via multi-scale minimal sufficient learning.
python Pretrain.py
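For intuition about this kind of contrastive pretraining, here is a minimal sketch of a generic InfoNCE loss between two augmented views of a batch. This is only an illustrative stand-in: the repository's actual multi-scale minimal-sufficient objective differs, and all shapes and the temperature value below are assumptions:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss (illustrative; not the repo's exact objective).

    z1, z2: (batch, dim) embeddings of two augmented views of the same epochs.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (batch, batch) cosine-similarity matrix
    targets = torch.arange(z1.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy batch of 8 embeddings of dimension 128
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce(z1, z2)
print(loss.item())
```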
Second, to demonstrate the performance of the feature extractor, we train a transformer-based classifier while keeping the parameters of the feature extractor frozen. The classifier follows the model proposed in the prior work SleePyCo for sleep scoring. You can edit the config .json file: batch size = 1024, seq_len = 1, mode = pretrain.
python FineTuning.py
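The freeze-then-classify pattern described above can be sketched as follows. The layer sizes, the 5-class output, and the toy convolutional extractor are assumptions for illustration; the real SleePyCo-style classifier is more elaborate:

```python
import torch
import torch.nn as nn

# Toy stand-in for the pretrained feature extractor (hypothetical shapes).
feature_extractor = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(32),
)

# Freeze the pretrained extractor so only the classifier is trained.
for p in feature_extractor.parameters():
    p.requires_grad = False
feature_extractor.eval()

# Transformer-based classifier head over the extracted feature tokens.
encoder_layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
classifier = nn.Sequential(
    nn.TransformerEncoder(encoder_layer, num_layers=2),
    nn.Flatten(),
    nn.Linear(16 * 32, 5),  # 5 sleep stages (W, N1, N2, N3, REM)
)

x = torch.randn(4, 1, 3000)                 # batch of 30 s epochs at 100 Hz
with torch.no_grad():
    feats = feature_extractor(x)            # (4, 16, 32)
logits = classifier(feats.transpose(1, 2))  # tokens along the time axis
print(logits.shape)  # torch.Size([4, 5])
```

During fine-tuning, only the classifier's parameters would be passed to the optimizer.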
The code is inspired by the following excellent prior works:
SleePyCo: Automatic sleep scoring with feature pyramid and contrastive learning (Expert Systems with Applications 2024)
MVEB: Self-Supervised Learning With Multi-View Entropy Bottleneck (Transactions on Pattern Analysis and Machine Intelligence 2024)