Commit 0a5a7af

committed (no message)
1 parent cd71a3d

4 files changed: +5 −3 lines

4 files changed

+5
-3
lines changed

classification/README.md (+3 −1)
@@ -31,7 +31,7 @@ python main.py --cfg ./config/CIFAR100_LT/causal_norm.yaml
 
 If you want to change any hyper-parameter, you can find them in the corresponding yaml config file. **IMPORTANT: if you just want to change the TDE trade-off parameter alpha, you don't need to re-train the model; you can directly use different alphas during testing, because alpha is not involved in training. I also have a useful trick for picking alpha: when you are testing on a dataset without an additional val set (or directly testing on the val set), you can choose the alpha that makes alpha times cos approximately 1.0 on average.**
 
-For Long-Tailed CIFAR-10/-100, if you want to change the imbalance ratio, you can set "cifar_imb_ratio" in the corresponding yaml file, e.g., cifar_imb_ratio=0.01/0.02/0.1 means imbalance ratio = 100, 50, 10 in the paper. To compare our methods with [BBN](https://github.com/Megvii-Nanjing/BBN) using Long-tailed CIFAR-10/-100, we copy their [ResNet32](https://github.com/Megvii-Nanjing/BBN/tree/master/lib/backbone) as your backbone.
+For Long-Tailed CIFAR-10/-100, if you want to change the imbalance ratio, you can set "cifar_imb_ratio" in the corresponding yaml file, e.g., cifar_imb_ratio = 0.01/0.02/0.1 corresponds to imbalance ratio = 100/50/10 in the paper. To compare our method with [BBN](https://github.com/Megvii-Nanjing/BBN) on Long-Tailed CIFAR-10/-100, we adopted their dataloader, their [ResNet32](https://github.com/Megvii-Nanjing/BBN/tree/master/lib/backbone) backbone, their number of training epochs, and the same warm-up scheduler.
 
 ### Testing
 For ImageNet_LT:
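The alpha-picking trick described in the hunk above (make alpha times cos average roughly 1.0 over the evaluation data) amounts to setting alpha ≈ 1 / mean(cos). A minimal sketch, assuming `cos_vals` holds the cosine term the classifier computes for each evaluation sample; the helper name is ours, not the repo's:

```python
import torch

def pick_alpha(cos_vals: torch.Tensor) -> float:
    """Choose alpha so that (alpha * cos_vals).mean() is ~1.0."""
    return (1.0 / cos_vals.mean()).item()

# e.g. cosines averaging ~0.66 suggest alpha of roughly 1.5
alpha = pick_alpha(torch.tensor([0.60, 0.70, 0.68]))
print(alpha)  # ~1.52
```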
@@ -63,6 +63,8 @@ python main.py --cfg ./config/CIFAR100_LT/causal_norm.yaml --test --model_dir ./
 
 ![alt text](imagenet-lt.png "from 'Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect'")
 
+As to the experiment results on Long-Tailed CIFAR-10/-100, we would like to thank @ZhangMingliangAI for pointing out that our previous setting was slightly different from BBN's. This experiment was added during the rebuttal, so we didn't have enough time to check all of BBN's code at that time. After thoroughly going through the BBN project, I turned off the pretrained initialization and adopted the same warm-up scheduler and number of training epochs. Since Long-Tailed CIFAR-10/-100 samples a different subset each time, we ran each experiment twice and report the mean results in the following table. Although the numbers changed slightly, we still outperform the previous state-of-the-art under BBN's setting.
+
 
 ![alt text](long-tailed-cifar.png "from 'Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect'")
 
 ## Citation
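For the `cifar_imb_ratio` setting mentioned in the README hunk above, the standard long-tailed CIFAR construction (used by the BBN/LDAM-style dataloaders the README says were adopted) keeps an exponentially decaying number of images per class. A sketch of that mapping, not necessarily this repo's exact code:

```python
def long_tailed_counts(img_max, imb_ratio, num_classes):
    """Exponential profile: class 0 keeps img_max images and the last class
    keeps img_max * imb_ratio, so cifar_imb_ratio=0.01 gives head/tail = 100."""
    return [int(img_max * imb_ratio ** (c / (num_classes - 1)))
            for c in range(num_classes)]

# CIFAR-100 has 500 training images per class:
counts = long_tailed_counts(img_max=500, imb_ratio=0.01, num_classes=100)
print(counts[0], counts[-1])  # 500 head images vs. 5 tail images
```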

classification/config/CIFAR100_LT/causal_norm.yaml (+1 −1)
@@ -16,7 +16,7 @@ networks:
     def_file: ./models/CausalNormClassifier.py
     optim_params: {lr: 0.2, momentum: 0.9, weight_decay: 0.0005}
     scheduler_params: {coslr: false, endlr: 0.0, gamma: 0.1, step_size: 30, warmup: true, lr_step: [120, 160], lr_factor: 0.01, warm_epoch: 5}
-    params: {dataset: CIFAR100_LT, feat_dim: 2048, num_classes: 100, stage1_weights: false, use_effect: true, num_head: 2, tau: 16.0, alpha: 2.0, gamma: 0.03125}
+    params: {dataset: CIFAR100_LT, feat_dim: 2048, num_classes: 100, stage1_weights: false, use_effect: true, num_head: 2, tau: 16.0, alpha: 1.5, gamma: 0.03125}
   feat_model:
     def_file: ./models/ResNext50Feature.py
     fix: false

classification/config/CIFAR100_LT/causal_norm_32.yaml (+1 −1)
@@ -16,7 +16,7 @@ networks:
     def_file: ./models/CausalNormClassifier.py
     optim_params: {lr: 0.2, momentum: 0.9, weight_decay: 0.0005}
     scheduler_params: {coslr: false, endlr: 0.0, gamma: 0.1, step_size: 30, warmup: true, lr_step: [120, 160], lr_factor: 0.01, warm_epoch: 5}
-    params: {dataset: CIFAR100_LT, feat_dim: 128, num_classes: 100, stage1_weights: false, use_effect: true, num_head: 2, tau: 16.0, alpha: 2.0, gamma: 0.03125}
+    params: {dataset: CIFAR100_LT, feat_dim: 128, num_classes: 100, stage1_weights: false, use_effect: true, num_head: 2, tau: 16.0, alpha: 1.5, gamma: 0.03125}
   feat_model:
     def_file: ./models/ResNet32Feature.py
     fix: false
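Note that both configs only lower the test-time trade-off parameter `alpha` from 2.0 to 1.5; per the README above, alpha is not involved in training, so a trained checkpoint can be re-evaluated under several alphas. A hypothetical sweep script: the checkpoint directory is a placeholder, and it rewrites the `alpha:` entry textually rather than assuming the yaml's nesting, then re-runs the test command documented in the README:

```python
import re
import subprocess

CFG = "./config/CIFAR100_LT/causal_norm.yaml"
MODEL_DIR = "./path/to/trained/model"  # placeholder: your checkpoint dir

for alpha in (1.0, 1.5, 2.0):
    with open(CFG) as f:
        text = f.read()
    # rewrite the `alpha:` entry shown in the diff above
    with open(CFG, "w") as f:
        f.write(re.sub(r"alpha: [0-9.]+", f"alpha: {alpha}", text))
    # the repo's documented test invocation (see the README hunk header)
    subprocess.run(["python", "main.py", "--cfg", CFG,
                    "--test", "--model_dir", MODEL_DIR], check=True)
```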

classification/long-tailed-cifar.png (binary, −549 bytes)