
Commit fb9e705

feat: Remove logging gradients by default (#131)
* refactor: Disable logging gradients in default callbacks
* build: Upgrade version, update changelog
* refactor: Remove gradient logging from default callback, add automatic batch size callback to all callback
* docs: Update changelog
1 parent bc88342 commit fb9e705

6 files changed: +22 -8 lines changed

CHANGELOG.md (+7)

@@ -2,6 +2,13 @@
 # Changelog
 All notable changes to this project will be documented in this file.
 
+### [2.2.6]
+
+#### Updated
+
+- Remove gradients logging callback from default configs to avoid slowing down the training process
+- Add automatic batch size calculator as default in `all` callback configuration
+
 ### [2.2.5]
 
 #### Updated

pyproject.toml (+1, -1)

@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "quadra"
-version = "2.2.5"
+version = "2.2.6"
 description = "Deep Learning experiment orchestration library"
 authors = [
     "Federico Belotti <[email protected]>",

quadra/__init__.py (+1, -1)

@@ -1,4 +1,4 @@
-__version__ = "2.2.5"
+__version__ = "2.2.6"
 
 
 def get_version():

quadra/configs/callbacks/all.yaml (+13)

@@ -30,3 +30,16 @@ progress_bar:
 lightning_trainer_setup:
   _target_: quadra.callbacks.lightning.LightningTrainerBaseSetup
   log_every_n_steps: 1
+
+batch_size_finder:
+  _target_: quadra.callbacks.lightning.BatchSizeFinder
+  mode: power
+  steps_per_trial: 3
+  init_val: 2
+  max_trials: 5 # Max 64
+  batch_arg_name: batch_size
+  disable: false
+  find_train_batch_size: true
+  find_validation_batch_size: false
+  find_test_batch_size: false
+  find_predict_batch_size: false
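The arithmetic behind the `# Max 64` comment can be sketched in a few lines. Assuming the finder's `power` mode behaves like PyTorch Lightning's (start at `init_val` and double the batch size after each successful trial, with `max_trials` doublings), the candidate sizes are — this is an illustrative sketch of the scaling rule, not the library's implementation:

```python
def power_mode_candidates(init_val: int, max_trials: int) -> list[int]:
    """Batch sizes a power-mode finder would probe: start at init_val
    and double after each successful trial, up to max_trials doublings."""
    return [init_val * 2**i for i in range(max_trials + 1)]

# With the config above (init_val: 2, max_trials: 5) the largest size
# probed is 2 * 2**5 = 64, matching the "Max 64" comment.
print(power_mode_candidates(2, 5))
```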

quadra/configs/callbacks/default.yaml (-3)

@@ -9,9 +9,6 @@ model_checkpoint:
   filename: "epoch_{epoch:03d}"
   auto_insert_metric_name: False
 
-log_gradients:
-  _target_: quadra.callbacks.mlflow.LogGradients
-  norm: 2
 lr_monitor:
   _target_: pytorch_lightning.callbacks.LearningRateMonitor
   logging_interval: "epoch"
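Note that only the default config entry is removed here; the `quadra.callbacks.mlflow.LogGradients` class itself is untouched. A user who still wants gradient norms logged could presumably re-add the block in their own callbacks config — a sketch, assuming the Hydra-style `_target_` config layout shown in these files:

```yaml
# Hypothetical user-side callbacks override re-enabling gradient logging;
# the keys mirror the block removed from default.yaml above.
log_gradients:
  _target_: quadra.callbacks.mlflow.LogGradients
  norm: 2  # which gradient norm to log (2 = L2 norm, as in the old default)
```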

quadra/configs/callbacks/default_anomalib.yaml (-3)

@@ -44,9 +44,6 @@ upload_ckpts_as_artifact:
   upload_best_only: true
   delete_after_upload: true
   upload: false
-log_gradients:
-  _target_: quadra.callbacks.mlflow.LogGradients
-  norm: 2
 lr_monitor:
   _target_: pytorch_lightning.callbacks.LearningRateMonitor
   logging_interval: "epoch"
