[2/N] Added KDLoss based AutoQuantize #592
Merged
realAsma merged 2 commits into asma/auto_quantize_improvements from asma/auto_quantize_kd_loss_sensitivity on Nov 25, 2025
+614 −66
Conversation
meenchen reviewed Nov 21, 2025
meenchen reviewed Nov 24, 2025
realAsma commented Nov 25, 2025
meenchen approved these changes Nov 25, 2025
Signed-off-by: Asma Kuriparambil Thekkumpate <[email protected]>
minor
Signed-off-by: Asma Kuriparambil Thekkumpate <[email protected]>
cherry-picked final PR changes; changelog updates
Signed-off-by: realAsma <[email protected]>
minor
Signed-off-by: realAsma <[email protected]>
KL Div formula fix
Signed-off-by: realAsma <[email protected]>
Some improvements for KLDiv
Signed-off-by: realAsma <[email protected]>
changelog update
Signed-off-by: realAsma <[email protected]>
minor
Signed-off-by: realAsma <[email protected]>
doc updates
Signed-off-by: realAsma <[email protected]>
realAsma added a commit that referenced this pull request on Nov 26, 2025
…AutoQuantizeGradientSearcher; separated quant modules and score modules (#586)

## What does this PR do?

**Type of change:** Refactor; minor new feature

**Overview:**
1. Refactored AutoQuantizeSearcher into _AutoQuantizeBaseSearcher and AutoQuantizeGradientSearcher, preparing the architecture for additional search methods.
2. Separated quant modules and score modules, enabling auto-quantization to measure sensitivity at parent layers (e.g., the MLP output for MoE experts) rather than at individual ops.
3. Also see #592 and #588.

## Testing

See unit tests: `tests/unit/torch/quantization/test_autoquant.py` and `tests/unit/torch/quantization/plugins/test_huggingface.py`

## Before your PR is "*Ready for review*"

- **Make sure you read and follow [Contributor guidelines](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CONTRIBUTING.md)** and your commits are signed.
- **Is this change backward compatible?**: Yes
- **Did you write any new necessary tests?**: Yes
- **Did you add or update any necessary documentation?**: Yes
- **Did you update [Changelog](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CHANGELOG.rst)?**: Not Required

## Summary by CodeRabbit

* **New Features**
  * Added support for score modules in quantization workflows.
  * Added optional naming for quantization recipes.
* **Bug Fixes**
  * Improved quantization grouping rules documentation with clearer configuration examples.
* **Refactor**
  * Renamed quantization module parameters for improved clarity.
  * Enhanced quantization search architecture for better scalability.

Signed-off-by: realAsma <[email protected]>
Co-authored-by: Asma Kuriparambil Thekkumpate <[email protected]>
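To illustrate the quant-module/score-module split described above, here is a minimal conceptual sketch (the `ScoreRecorder` name and structure are hypothetical, not the Model Optimizer implementation): sensitivity can be recorded at a parent module via a forward hook while quantizers live on the child ops.

```python
import torch
import torch.nn as nn


class ScoreRecorder:
    """Hypothetical helper: captures the output of a parent module (e.g. an MoE
    MLP block) so sensitivity is scored there instead of at each child linear.
    Assumes the hooked module returns a single tensor."""

    def __init__(self, module: nn.Module):
        self.outputs: list[torch.Tensor] = []
        self._handle = module.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        self.outputs.append(output.detach())

    def remove(self):
        self._handle.remove()


# Usage sketch: quantizers would be attached to the expert linear layers,
# while the score is measured on the whole MLP output captured by the hook.
```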
What does this PR do?
Type of change: New Feature
Overview:
This PR extends AutoQuantize with KL Divergence loss (KD Loss)-based sensitivity measurement as an alternative to the existing gradient-based approach. The KD Loss search uses a binary searcher similar to the one in FastNAS.
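For intuition only, here is one possible shape of such a threshold binary search (a minimal sketch with hypothetical names; it does not mirror the Model Optimizer API or the FastNAS searcher): layers whose sensitivity score falls below a threshold are quantized, and the threshold is bisected until the constraint is just met.

```python
# Hypothetical sketch of a binary search over a per-layer sensitivity threshold.
# Names, cost model, and stopping criteria are illustrative assumptions.
def select_layers(scores: dict[str, float], threshold: float) -> set[str]:
    """Quantize every layer whose sensitivity score is at or below the threshold."""
    return {name for name, score in scores.items() if score <= threshold}


def search_threshold(scores, cost_fn, budget, max_iters=50, tol=1e-6):
    """Find the smallest threshold whose selection still satisfies the budget.

    cost_fn(selected) returns e.g. effective bits for a candidate selection;
    quantizing more layers lowers the cost but risks more accuracy loss.
    """
    lo, hi = 0.0, max(scores.values())
    best = select_layers(scores, hi)  # quantize everything: assumed to fit the budget
    for _ in range(max_iters):
        if hi - lo <= tol:
            break
        mid = (lo + hi) / 2
        selected = select_layers(scores, mid)
        if cost_fn(selected) <= budget:
            best, hi = selected, mid  # budget met: try quantizing fewer layers
        else:
            lo = mid                  # budget exceeded: quantize more layers
    return best
```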
Gradient-based AutoQuantize is faster than KL Divergence-based AutoQuantize. However, the KL Divergence approach does not require the model implementation to support backward passes. In addition, the KL Divergence scores collected by AutoQuantize are useful for sensitivity analysis of the model, since KL Divergence is a more direct measure of sensitivity than gradient-based scores.
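As a rough illustration of the KL Divergence sensitivity idea (a minimal PyTorch sketch, not the actual Model Optimizer code; the function name is a placeholder), a candidate quantization can be scored by comparing the quantized model's output distribution against the unquantized reference, with no backward pass required:

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def kl_div_sensitivity(ref_logits: torch.Tensor, quant_logits: torch.Tensor) -> float:
    """KL(P_ref || P_quant) averaged over the batch.

    ref_logits come from the unquantized model, quant_logits from the model
    with the candidate quantization enabled.
    """
    ref_probs = F.softmax(ref_logits.float(), dim=-1)
    quant_log_probs = F.log_softmax(quant_logits.float(), dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(quant_log_probs, ref_probs, reduction="batchmean").item()
```

A per-layer score would then be obtained by enabling one candidate format at a time and recording this divergence on calibration data.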
Usage
See `tests/unit/torch/quantization/test_autoquant.py`.
Testing
Tested with unit tests.
Results for Qwen3 8B

Before your PR is "Ready for review"
Additional Information