Commit ce0789a (0 parents): 159 changed files with 22,243 additions and 0 deletions.
`.gitignore`:
```
*.pyc*
*log*
.idea
data
*.pkl
```
`LICENSE`:
```
MIT License

Copyright (c) 2024 Haitao Wen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
`README.md`:

## CLearning

CLearning is a general continual learning framework.
### 1. Features

- Supported incremental learning (IL) scenarios: Task-IL, Domain-IL, and Class-IL
- Supported datasets: MNIST, PMNIST, RMNIST, FIVE, SVHN, CIFAR10, CIFAR100, SUPER, TinyImageNet, miniImageNet, DTD, ImageNet100, CUB, ImageNet1000
- Supported training modes: single GPU and Distributed Parallel (DP)
- Flexible model specification: a model specified in a `*.yaml` config takes precedence over the default model
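The last precedence rule (model in `*.yaml` > default model) can be sketched as a simple config merge; `DEFAULTS` and the key names below are illustrative, not the framework's actual API:

```python
# Illustrative defaults; the framework's real default model and settings may differ.
DEFAULTS = {"model": "resnet18", "lr": 0.1, "bs": 128}

def resolve_config(yaml_cfg: dict) -> dict:
    """Merge a parsed YAML config over the defaults.

    Keys present in the YAML file (e.g. an explicit `model`) override the
    defaults, so a model named in the config wins over the default model.
    """
    cfg = dict(DEFAULTS)
    cfg.update({k: v for k, v in yaml_cfg.items() if v is not None})
    return cfg

# A config that names a model overrides the default...
print(resolve_config({"model": "resnet32"})["model"])  # resnet32
# ...while one that omits it falls back to the default.
print(resolve_config({"lr": 0.01})["model"])           # resnet18
```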
### 2. Methods
The current repository implements the following continual learning methods:

- Baseline
  - FineTune
- Regularization
  - EWC: [Overcoming catastrophic forgetting in neural networks](https://www.pnas.org/doi/abs/10.1073/pnas.1611835114)
  - LwF: [Learning without forgetting](https://ieeexplore.ieee.org/abstract/document/8107520/)
  - GPM: [Gradient projection memory for continual learning](https://arxiv.org/abs/2103.09762)
- Replay
  - AFC: [Class-incremental learning by knowledge distillation with adaptive feature consolidation](http://openaccess.thecvf.com/content/CVPR2022/html/Kang_Class-Incremental_Learning_by_Knowledge_Distillation_With_Adaptive_Feature_Consolidation_CVPR_2022_paper.html)
  - ANCL: [Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning](http://openaccess.thecvf.com/content/CVPR2023/html/Kim_Achieving_a_Better_Stability-Plasticity_Trade-Off_via_Auxiliary_Networks_in_Continual_CVPR_2023_paper.html)
  - CSCCT: [Class-incremental learning with cross-space clustering and controlled transfer](https://link.springer.com/chapter/10.1007/978-3-031-19812-0_7)
  - iCaRL: [iCaRL: Incremental classifier and representation learning](http://openaccess.thecvf.com/content_cvpr_2017/html/Rebuffi_iCaRL_Incremental_Classifier_CVPR_2017_paper.html)
  - LUCIR: [Learning a unified classifier incrementally via rebalancing](http://openaccess.thecvf.com/content_CVPR_2019/html/Hou_Learning_a_Unified_Classifier_Incrementally_via_Rebalancing_CVPR_2019_paper.html)
  - MTD: [Class incremental learning with multi-teacher distillation](https://openaccess.thecvf.com/content/CVPR2024/html/Wen_Class_Incremental_Learning_with_Multi-Teacher_Distillation_CVPR_2024_paper.html)
  - OPC: [Optimizing mode connectivity for class incremental learning](https://proceedings.mlr.press/v202/wen23b.html)
  - PODNet: [PODNet: Pooled outputs distillation for small-tasks incremental learning](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_6)
  - SSIL: [SS-IL: Separated softmax for incremental learning](http://openaccess.thecvf.com/content/ICCV2021/html/Ahn_SS-IL_Separated_Softmax_for_Incremental_Learning_ICCV_2021_paper.html)
- Structure
  - AANet: [Adaptive aggregation networks for class-incremental learning](http://openaccess.thecvf.com/content/CVPR2021/html/Liu_Adaptive_Aggregation_Networks_for_Class-Incremental_Learning_CVPR_2021_paper.html)
### 3. Environment

#### 3.1. Code Environment

- python==3.9
- pytorch==1.11.0
- continuum==1.2.4
- ...

The required packages are listed in `config/requirements.txt` and `config/env.yml`. We recommend using Anaconda to manage packages, for example:
```
conda create -n torch1.11.0 python=3.9
conda activate torch1.11.0
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
cd config
pip install -r requirements.txt
```
#### 3.2. Dataset

Folder structure:
```
├── data
│   ├── cifar-100-python
│   ├── ImageNet
│   ├── ImageNet100
```
The CIFAR-100 dataset is downloaded automatically when you run any CIFAR-100 experiment script. The [ImageNet](https://image-net.org) dataset must be downloaded in advance. The ImageNet-100 dataset can be generated with the script `tool/gen_imagenet100.py`.
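The contents of `tool/gen_imagenet100.py` are not shown in this commit view; a minimal sketch of the usual approach (deterministically selecting a 100-class subset of ImageNet; the class names, seed, and function name here are illustrative):

```python
import random

def select_imagenet100_classes(all_classes: list[str], seed: int = 1993) -> list[str]:
    """Pick a reproducible 100-class subset of the 1000 ImageNet classes.

    The real gen_imagenet100.py may instead use a fixed, published class list;
    this sketch only illustrates the reproducible-subset idea.
    """
    rng = random.Random(seed)
    classes = sorted(all_classes)  # sort first so the choice is deterministic
    return sorted(rng.sample(classes, 100))

# Illustrative synset-style names standing in for the real 1000 classes.
fake_classes = [f"n{i:08d}" for i in range(1000)]
subset = select_imagenet100_classes(fake_classes)
print(len(subset))  # 100
```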
### 4. Command
```
python main.py --cfg config/baseline/finetune/finetune_cifar100.yaml --device 0 --note test
```
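The actual argument parser of `main.py` is not included in this excerpt; judging from the command above, a minimal equivalent might look like the following (the help strings and defaults are guesses):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flags inferred from the example command; the real main.py may accept more.
    p = argparse.ArgumentParser(description="CLearning entry point (sketch)")
    p.add_argument("--cfg", required=True, help="path to a *.yaml experiment config")
    p.add_argument("--device", default="0", help="GPU id(s), e.g. '0'")
    p.add_argument("--note", default="", help="free-form tag for the run")
    return p

args = build_parser().parse_args(
    ["--cfg", "config/baseline/finetune/finetune_cifar100.yaml",
     "--device", "0", "--note", "test"]
)
print(args.cfg)  # config/baseline/finetune/finetune_cifar100.yaml
```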
### 5. Acknowledgements

We are grateful to the following GitHub repositories for their valuable code bases:
- https://github.com/arthurdouillard/incremental_learning.pytorch
- https://github.com/yaoyao-liu/POD-AANets
### 6. Citation
We hope that our research helps you and promotes the development of continual learning.
If you find this work useful, please consider citing the corresponding papers:

```
@inproceedings{wen2024class,
  title     = {Class Incremental Learning with Multi-Teacher Distillation},
  author    = {Wen, Haitao and Pan, Lili and Dai, Yu and Qiu, Heqian and Wang, Lanxiao and Wu, Qingbo and Li, Hongliang},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {28443--28452},
  year      = {2024}
}
@inproceedings{pmlr-v202-wen23b,
  title     = {Optimizing Mode Connectivity for Class Incremental Learning},
  author    = {Wen, Haitao and Cheng, Haoyang and Qiu, Heqian and Wang, Lanxiao and Pan, Lili and Li, Hongliang},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {36940--36957},
  year      = {2023},
  volume    = {202},
  publisher = {PMLR}
}
```
A FineTune config for CIFAR10 (file path not shown in this commit view):
```yaml
# Dataset
dataset: CIFAR10
scenario: class
init: 0
tasks: 5
# General setting
scheme: FineTune
optim: SGD
lr: 0.00003
decay: 5.0e-4
momentum: 0.9
steps: [80, 120]
gamma: 0.1
bs: 128
epochs: 10
# Device
mode: GPU
gpuid: 0
workers: 4
seed: 1993
# Output
name: finetune/cifar10/init2_5tasks
```
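The exact semantics of `init` and `tasks` are not documented in this excerpt; an interpretation consistent with the run names (`init2_5tasks`, `init50_6tasks`) is that `init: 0` means an equal class split across tasks, while a nonzero `init` gives the first task that many classes and splits the rest evenly. A sketch of that guess:

```python
def class_increments(num_classes: int, init: int, tasks: int) -> list[int]:
    """Classes per task under the interpretation described above
    (a guess, not confirmed by the repository's code)."""
    if init == 0:  # equal split over all tasks
        base = num_classes // tasks
        return [base] * tasks
    # first task gets `init` classes, the remainder is split evenly
    rest = (num_classes - init) // (tasks - 1)
    return [init] + [rest] * (tasks - 1)

print(class_increments(10, 0, 5))    # [2, 2, 2, 2, 2] -> matches init2_5tasks
print(class_increments(100, 50, 6))  # [50, 10, 10, 10, 10, 10] -> matches init50_6tasks
```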
A FineTune config for CIFAR100 (file path not shown in this commit view):
```yaml
# Dataset
dataset: CIFAR100
scenario: class
order: 0
init: 50
tasks: 6
# General setting
scheme: FineTune
optim: SGD
lr: 0.1
decay: 5.0e-4
momentum: 0.9
steps: [80, 120]
gamma: 0.1
bs: 128
epochs: 160
# Device
mode: GPU
gpuid: 0
workers: 4
seed: 1993
# Output
name: finetune/cifar100/init50_6tasks
```
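The `lr`/`steps`/`gamma` fields describe a standard multi-step decay (as in PyTorch's `MultiStepLR`): for the CIFAR100 config above, the learning rate is multiplied by `gamma` at epochs 80 and 120. Assuming that semantics, the schedule can be computed as:

```python
def lr_at_epoch(base_lr: float, steps: list[int], gamma: float, epoch: int) -> float:
    """Multi-step decay: multiply base_lr by gamma once per milestone passed."""
    return base_lr * gamma ** sum(epoch >= s for s in steps)

# CIFAR100 config: lr 0.1, steps [80, 120], gamma 0.1, 160 epochs.
print(lr_at_epoch(0.1, [80, 120], 0.1, 0))    # 0.1
print(lr_at_epoch(0.1, [80, 120], 0.1, 100))  # ~0.01
print(lr_at_epoch(0.1, [80, 120], 0.1, 130))  # ~0.001
```

Note that the CIFAR10 config above trains for only 10 epochs, so its milestones at 80 and 120 are never reached.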