[Torch FX] Compress PT2E Support #3663
base: develop
Conversation
…match in signatures in prepare_pt2e.
daniil-lyakhov left a comment:
Can I see the PR with OpenVINOQuantizer?
```python
) -> torch.fx.GraphModule:
    self._quantizer = quantizer
```
Type hints and docstring are missing.
Co-authored-by: Daniil Lyakhov <[email protected]>
nikita-savelyevv left a comment:
Huge work, thanks @anzr299!
Mostly minor comments from my side. Overall, the updated approach in src/nncf/quantization/algorithms/weight_compression/algorithm.py looks good in my opinion and does not change the logic of the algorithm.
The only significant difference I noticed is that ratio_defining_params are now initialized with primary_config from the start, and some of the parameters are then converted back to backup precision after the mixed-precision algorithm. Before, it was the other way around. This looks a bit cumbersome during mixed-precision assignment, but it avoids passing group_size_values, which is an improvement over the previous approach.
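A minimal runnable sketch of the ordering described above — all names, the dataclass, and the selection logic are stand-ins for illustration, not the actual NNCF implementation:

```python
from dataclasses import dataclass

@dataclass
class WeightParam:
    name: str
    config: str = "primary"  # new approach: every parameter starts in primary precision

ratio_defining_params = [WeightParam(f"w{i}") for i in range(4)]

# Stand-in for the mixed-precision algorithm: it returns the subset of
# parameters that should stay in primary precision.
primary_precision_weight_params = {p.name for p in ratio_defining_params[:2]}

# Afterwards, demote everything else back to the backup precision.
for p in ratio_defining_params:
    if p.name not in primary_precision_weight_params:
        p.config = "backup"
```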
```diff
     return {
-        "mode": mode,
+        "mode": mode if isinstance(mode, nncf.CompressWeightsMode) else nncf.CompressWeightsMode(mode),
```
What is the scenario where this is needed? Why can't we provide an instance of nncf.CompressWeightsMode instead of a string?
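For reference, a minimal sketch of what the ternary does, using a stand-in enum (the member name and its string value are assumptions for illustration):

```python
from enum import Enum

# Stand-in for nncf.CompressWeightsMode; a string-valued enum can be looked
# up by value, which is what the ternary in the diff relies on.
class CompressWeightsMode(Enum):
    INT4_SYM = "int4_sym"

mode = "int4_sym"
normalized = mode if isinstance(mode, CompressWeightsMode) else CompressWeightsMode(mode)
assert normalized is CompressWeightsMode.INT4_SYM
```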
```python
return ratio_defining_params
```

```python
def _get_backup_config(self, weight_dtype: TensorDataType) -> WeightCompressionConfig:
```
Suggested change:
```diff
-def _get_backup_config(self, weight_dtype: TensorDataType) -> WeightCompressionConfig:
+def _get_backup_config(self, weight_dtype: TensorDataType) -> Optional[WeightCompressionConfig]:
```
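A tiny generic illustration (hypothetical, unrelated to the NNCF code) of why Optional is the right annotation when a code path can return None:

```python
from typing import Optional

def lookup(name: str, registry: dict[str, int]) -> Optional[int]:
    # dict.get returns None when the key is absent, so the return type
    # must be Optional[int] for type checkers to accept this.
    return registry.get(name)
```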
```python
model: TModel,
graph: NNCFGraph,
statistics_points: StatisticPointsContainer,
group_size_values: dict[str, int],
```
Please remove group_size_values from the docstring
```python
# ratio_defining_params are all in primary precision. Update parameters
# which need to be set to backup precision
```
Suggested change:
```diff
-# ratio_defining_params are all in primary precision. Update parameters
-# which need to be set to backup precision
+# At this point ratio_defining_params are all in primary precision. Below we update parameters
+# which need to be set to the backup precision.
```
```python
# ratio_defining_params are all in primary precision. Update parameters
# which need to be set to backup precision
for weight_param in ratio_defining_params:
```
Suggested change:
```diff
-for weight_param in ratio_defining_params:
+primary_precision_weight_params = set(primary_precision_weight_params)
+for weight_param in ratio_defining_params:
```
This avoids quadratic complexity.
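A small self-contained illustration of the complexity point (the data is made up):

```python
# `x in some_list` scans the whole list (O(n)), so doing it inside a loop is
# O(n^2) overall; converting to a set once makes each membership test O(1)
# on average.
ratio_defining_params = list(range(10_000))
primary_precision_weight_params = list(range(0, 10_000, 2))

primary_set = set(primary_precision_weight_params)  # one-time O(n) conversion
backup = [p for p in ratio_defining_params if p not in primary_set]
```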
```python
Applies Weight Compression to the torch.fx.GraphModule provided model
using provided torch.ao quantizer.
```
Suggested change:
```diff
-Applies Weight Compression to the torch.fx.GraphModule provided model
-using provided torch.ao quantizer.
+Applies Weight Compression to the torch.fx.GraphModule model using provided torch.ao quantizer.
```
```python
:param dataset: A representative dataset for the
    calibration process.
```
Suggested change:
```diff
-:param dataset: A representative dataset for the
-    calibration process.
+:param dataset: A representative dataset for the calibration process.
```
```python
pt2e_params = PT2E_PARAMS
if qparam.get("mode") in {QuantizationMode.INT8WO_ASYM, QuantizationMode.INT8WO_SYM}:
    pt2e_params = [{}]
```
Suggested change:
```diff
-pt2e_params = PT2E_PARAMS
-if qparam.get("mode") in {QuantizationMode.INT8WO_ASYM, QuantizationMode.INT8WO_SYM}:
-    pt2e_params = [{}]
+pt2e_params = [{}] if qparam.get("mode") in INT8_COMPRESSION_MODES else PT2E_PARAMS
```
```python
ModelCase(LlamaDecoderOnly, "LlamaDecoderOnly", [1, 3, 64]),
ModelCase(partial(ShortTransformer, 64, 128, True), "short_transformer_shared", [5]),
```
Suggested change:
```diff
-ModelCase(LlamaDecoderOnly, "LlamaDecoderOnly", [1, 3, 64]),
-ModelCase(partial(ShortTransformer, 64, 128, True), "short_transformer_shared", [5]),
+ModelCase(LlamaDecoderOnly, "LlamaDecoderOnly", (1, 3, 64)),
+ModelCase(partial(ShortTransformer, 64, 128, True), "short_transformer_shared", (5,)),
```
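Presumably the motivation for the suggestion is that tuples are immutable (and hashable), so a shared shape spec cannot be accidentally mutated by one test and leak into another.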
```python
quantizer_builder: Callable[..., OpenVINOQuantizer],
model_case: ModelCase,
quantizer_params,
pt2e_params,
```
I see that in this and some other cases below, the pt2e_params argument is not used. Is this on purpose? Won't this result in unnecessary duplication of tests?
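A minimal sketch of the concern (a hypothetical test, not from this PR): an unused parametrized argument still multiplies the number of collected test instances:

```python
import pytest

@pytest.mark.parametrize("pt2e_params", [{}, {"example_option": True}])
def test_compress(pt2e_params):
    # pt2e_params is never used, so both generated test instances run
    # exactly the same code.
    assert 1 + 1 == 2
```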
Changes
Introduced a new API that offers the weight compression algorithm for quantizers defined in torch.ao.
Currently only OpenVINOQuantizer is supported.
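A hypothetical usage sketch of the new API; the entry-point name compress_pt2e and the import paths are assumptions inferred from the PR title and the touched files, not confirmed public API:

```python
import torch
import nncf

# All names below are assumptions inferred from this PR's title and file list;
# they are not confirmed public API.
from nncf.experimental.torch.fx import compress_pt2e  # assumed entry point
from nncf.experimental.torch.fx import OpenVINOQuantizer  # assumed import path

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(64, 64)

    def forward(self, x):
        return self.linear(x)

example = (torch.ones(1, 3, 64),)
fx_model = torch.export.export(TinyModel().eval(), example).module()

quantizer = OpenVINOQuantizer()  # assumed default weight-compression settings
calibration = nncf.Dataset(list(example))  # representative calibration data

compressed = compress_pt2e(fx_model, quantizer=quantizer, dataset=calibration)
```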
Reason for changes
To support quantizers defined in torch.ao.
Related tickets
169342