Commit 6ab013e (parent: 2d8ad4d)

minor

Signed-off-by: realAsma <[email protected]>

File tree

1 file changed: 1 addition, 0 deletions


modelopt/torch/quantization/algorithms.py

Lines changed: 1 addition & 0 deletions

@@ -789,6 +789,7 @@ def register_custom_support(
     """(Optional) Register custom support for `AutoQuantize` score estimation.

     This custom support is used to enable memory/compute efficient backward gradient propagation. This involves:
+
     - `grad_ckpt_context`: backward pass with gradient checkpointing enabled
     - `is_param_grad_enabled`: AutoQuantize only needs activation gradients to be computed (not weight
       gradients). `is_param_grad_enabled` is used to select which parameters should have gradients enabled,
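The `is_param_grad_enabled` hook described in the docstring above can be sketched in plain Python. This is a hypothetical illustration of the pattern, not the modelopt API: the `Param` stand-in, the `apply_param_grad_policy` helper, and the example predicate are all assumptions made for this sketch. The point it demonstrates is that a user-supplied predicate selects which parameters keep gradients enabled, so score estimation can rely on activation gradients while skipping unneeded weight-gradient computation.

```python
# Hypothetical sketch of the `is_param_grad_enabled` idea. `Param` stands in
# for a framework parameter (e.g. a tensor with a `requires_grad` flag);
# names and helpers here are illustrative, not modelopt's actual API.
from dataclasses import dataclass


@dataclass
class Param:
    name: str
    requires_grad: bool = True


def apply_param_grad_policy(params, is_param_grad_enabled):
    """Enable gradients only for parameters selected by the predicate."""
    for p in params:
        p.requires_grad = is_param_grad_enabled(p.name)
    return params


# Example policy (assumed for illustration): only embedding parameters keep
# gradients, since the score estimation needs activation gradients, not
# weight gradients for every layer.
def is_param_grad_enabled(name: str) -> bool:
    return name.startswith("embedding")


params = [Param("embedding.weight"), Param("layer1.weight"), Param("layer1.bias")]
apply_param_grad_policy(params, is_param_grad_enabled)
print([p.name for p in params if p.requires_grad])  # → ['embedding.weight']
```

A `grad_ckpt_context` would plug in analogously: a context manager wrapping the backward pass with gradient checkpointing enabled, trading recomputation for activation memory.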

0 commit comments