Enable 16-bit activations and 8-bit weights in Cadence Quantizer for Conv #15928
base: main
Conversation
…for linear (pytorch#15901) Summary: # Context We continue from D84284794 to add support for 16-bit activations. Note that right now, although the operators already support 16-bit activations, they do so only when the weights are also 16-bit. To enable this, we need to change the way we template some functions. # Current Behavior Right now, we're composing two macros together: the `ET_FORALL_JARVIS_QUANTIZED_TYPES_WITH_INT16` macro: https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/operators.h?lines=22-25 and the function macro (`quantized_linear` chosen as an example): https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/quantized_linear_out.cpp?lines=30-41 Together they expand into a switch statement that calls the `quantized_linear` function with the correct template parameter. However, this assumes that the input activations and weights have the same dtype, which is not the case. # This Diff We fix the generic implementation by allowing two generics, one for the weight and one for the input activations. Reviewed By: hsharma35 Differential Revision: D86538176
…Conv Summary: # Context We continue from D84284794 to add support for 16-bit activations. Note that right now, although the operators already support 16-bit activations, they do so only when the weights are also 16-bit. To enable this, we need to change the way we template some functions. # Current Behavior Right now, we're composing two macros together: the `ET_FORALL_JARVIS_QUANTIZED_TYPES_WITH_INT16` macro: https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/operators.h?lines=22-25 and the function macro (`quantized_linear` chosen as an example): https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/quantized_linear_out.cpp?lines=30-41 Together they expand into a switch statement that calls the `quantized_linear` function with the correct template parameter. However, this assumes that the input activations and weights have the same dtype, which is not the case. # This Diff We fix this by checking for our data types and calling the functions with the correct data types, as in D86538176. Reviewed By: hsharma35 Differential Revision: D86643471
🔗 Helpful Links 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15928
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (3 Unrelated Failures)
As of commit e064f8e with merge base d2c011e:
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
Pull Request Overview
This PR enables support for 16-bit activations with 8-bit weights (W8A16) in the Cadence Quantizer for Conv and Linear operations. The implementation extends the existing quantization infrastructure to handle heterogeneous quantization types by dispatching to generic implementations when int16 activations are combined with int8 weights.
Key changes include:
- Added a new `CadenceWith16BitConvActivationsQuantizer` class that applies 16-bit quantization to Conv1d and Conv2d patterns
- Extended Conv2d and Linear operators to detect and handle the W8A16 heterogeneous type combination (a simplified sketch of this dispatch follows this list)
- Updated build dependencies to include generic quantization operators for int16 support
- Added comprehensive test coverage for the new functionality
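To make the dispatch concrete, here is a minimal, self-contained sketch of the W8A16 routing described above. All types and function names below are simplified stand-ins, not the actual ExecuTorch/Cadence APIs; it only illustrates the shape of the dtype check, the call into a generic kernel with separate activation and weight types, and the early return that several of the review comments below ask for.

```cpp
#include <cstdint>
#include <cstdio>

// Simplified stand-ins for the real ExecuTorch types.
enum class ScalarType { Char /* int8 */, Short /* int16 */ };
struct Tensor {
  ScalarType dtype;
};

// Homogeneous kernel: activations and weights share one dtype.
template <typename T>
void quantized_conv2d(const Tensor& /*input*/, const Tensor& /*weight*/) {
  std::printf("homogeneous conv, %zu-bit\n", sizeof(T) * 8);
}

// Generic kernel: separate type parameters for activations and weights.
template <typename InT, typename WeightT>
void quantized_conv2d_generic(const Tensor& /*input*/, const Tensor& /*weight*/) {
  std::printf("generic conv, %zu-bit activations / %zu-bit weights\n",
              sizeof(InT) * 8, sizeof(WeightT) * 8);
}

void quantized_conv2d_out(const Tensor& input, const Tensor& weight) {
  // W8A16: int16 activations with int8 weights route to the generic
  // implementation. The early return keeps the homogeneous branches below
  // from also firing (the fall-through issue flagged in the review comments).
  if (input.dtype == ScalarType::Short && weight.dtype == ScalarType::Char) {
    quantized_conv2d_generic<int16_t, int8_t>(input, weight);
    return;
  }
  if (input.dtype == ScalarType::Char && weight.dtype == ScalarType::Char) {
    quantized_conv2d<int8_t>(input, weight);
    return;
  }
  std::printf("unsupported dtype combination\n");
}

int main() {
  quantized_conv2d_out(Tensor{ScalarType::Short}, Tensor{ScalarType::Char});  // W8A16 path
  return 0;
}
```

The same check-and-return shape applies to the linear and NCHW/NHWC conv2d entry points listed in the file summary below.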
Reviewed Changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| `backends/cadence/aot/quantizer/quantizer.py` | Adds new `CadenceWith16BitConvActivationsQuantizer` class to enable 16-bit activations for Conv patterns |
| `backends/cadence/hifi/operators/op_quantized_linear_out.cpp` | Extends both `quantized_linear_out` and `quantized_linear_per_tensor_out` functions to handle W8A16 heterogeneous types |
| `backends/cadence/hifi/operators/op_quantized_conv2d_nchw_out.cpp` | Adds W8A16 type dispatch logic to both NCHW conv2d variants with proper early returns |
| `backends/cadence/hifi/operators/op_quantized_conv2d_nhwc_out.cpp` | Adds W8A16 type dispatch logic to both NHWC conv2d variants with proper early returns |
| `backends/cadence/hifi/operators/targets.bzl` | Updates build configuration to add dependencies for generic quantization operators supporting int16 |
| `backends/cadence/hifi/operators/tests/test_op_quantized_linear_out.cpp` | Adds new test case for int16 activations with int8 weights in quantized linear operations |
| `backends/cadence/hifi/operators/tests/test_op_quantized_conv2d_out.cpp` | Adds test cases for int16 activations with int8 weights in both NCHW and NHWC conv2d operations |
| `backends/cadence/vision/kernels/kernels.cpp` | Modifies memory allocation to add an alignment parameter and error logging (includes debug code that should be removed) |
```cpp
    int64_t out_shift,
    int64_t out_zero_point,
    __ET_UNUSED const optional<Tensor>& offset,
    Tensor& out) {
```
Copilot AI (Nov 20, 2025)
Add a comment before this if-block to explain the purpose, similar to the conv2d implementations. For example: `// Handle W8A16 heterogeneous type (int16_t activations, int8_t weights)`. This improves code consistency and readability.
Suggested change:
```diff
 Tensor& out) {
+  // Handle W8A16 heterogeneous type (int16_t activations, int8_t weights)
```
```cpp
void* allocate_temp_memory(KernelRuntimeContext& ctx, size_t size) {
  Result<void*> temp_mem_res = ctx.allocate_temp(size);
  return temp_mem_res.ok() ? temp_mem_res.get() : nullptr;
  ET_LOG(Info, "yo");
```
Copilot AI (Nov 20, 2025)
Debug log statement 'ET_LOG(Info, "yo");' should be removed before merging to production. This appears to be a leftover debugging statement.
Suggested change:
```diff
-  ET_LOG(Info, "yo");
```
```cpp
  return temp_mem_res.ok() ? temp_mem_res.get() : nullptr;
  ET_LOG(Info, "yo");
  constexpr size_t kAlignment =
      8; // 16-byte alignment for vectorized operations
```
Copilot AI (Nov 20, 2025)
Comment says "16-byte alignment" but the code sets kAlignment to 8. This inconsistency should be corrected - either update the comment to say "8-byte alignment" or change kAlignment to 16 if 16-byte alignment is actually needed.
Suggested change:
```diff
-      8; // 16-byte alignment for vectorized operations
+      8; // 8-byte alignment for vectorized operations
```
```cpp
        out_zero_point,
        offset,
        out);
  }
```
Copilot AI (Nov 20, 2025)
Missing return statement after handling the int16 case. Without a return, execution will fall through to the subsequent else if checks, potentially executing the wrong code path or triggering an incorrect error message. Add return; after line 295.
Suggested change:
```diff
   }
+  return;
```
```diff
     int64_t out_zero_point,
-    __ET_UNUSED const optional<Tensor>& offset,
+    const optional<Tensor>& offset,
     Tensor& out) {
```
Copilot AI (Nov 20, 2025)
Add a comment before this if-block to explain the purpose, similar to the conv2d implementations. For example: `// Handle W8A16 heterogeneous type (int16_t activations, int8_t weights)`. This improves code consistency and readability.
Suggested change:
```diff
 Tensor& out) {
+  // Handle W8A16 heterogeneous type (int16_t activations and output, int8_t weights)
```
```cpp
    int64_t out_zero_point,
    __ET_UNUSED const optional<Tensor>& offset,
    Tensor& out) {
```
Copilot AI (Nov 20, 2025)
The `__ET_UNUSED` annotations on the `ctx` parameter (line 270) and `offset` parameter (line 279) are now incorrect since both are used in the int16 case (lines 285 and 294). Remove the `__ET_UNUSED` annotations from these parameters.
```cpp
        out_zero_point,
        offset,
        out);
  }
```
Copilot AI (Nov 20, 2025)
Missing return statement after handling the int16 case. Without a return, execution will fall through to the subsequent else if checks, potentially executing the wrong code path or triggering an incorrect error message. Add return; after line 236.
Suggested change:
```diff
   }
+  return;
```
Summary:
Context
We continue from D84284794 to add support for 16-bit activations. Note that right now, although the operators already support 16-bit activations, they do so only when the weights are also 16-bit. To enable this, we need to change the way we template some functions.
Current Behavior
Right now, we're composing two macros together: the `ET_FORALL_JARVIS_QUANTIZED_TYPES_WITH_INT16` macro:
https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/operators.h?lines=22-25
and the function macro (`quantized_linear` chosen as an example):
https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/quantized_linear_out.cpp?lines=30-41
Together they expand into a switch statement that calls the `quantized_linear` function with the correct template parameter. However, this assumes that the input activations and weights have the same dtype, which is not the case.
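For readers without access to the internal links, the following is a schematic, self-contained illustration of the pattern described above, with all macro and function names simplified; it is not the actual Jarvis/ExecuTorch code. The FORALL-style macro enumerates the supported dtypes, and the per-operator macro expands it into a switch that instantiates the kernel with a single template parameter, which is where the shared-dtype assumption comes from.

```cpp
#include <cstdint>
#include <cstdio>

enum class ScalarType { Char /* int8 */, Short /* int16 */ };

// Stand-in for the single-type-parameter kernel: src and weight are forced
// to the same ctype.
template <typename T>
void quantized_linear(const T* /*src*/, const T* /*weight*/) {
  std::printf("quantized_linear<%zu-bit>\n", sizeof(T) * 8);
}

// Stand-in for ET_FORALL_JARVIS_QUANTIZED_TYPES_WITH_INT16: invokes the
// callback once per supported dtype.
#define FORALL_QUANTIZED_TYPES_WITH_INT16(_) \
  _(int8_t, Char)                            \
  _(int16_t, Short)

// Stand-in for the per-operator function macro: each dtype becomes a switch
// case that instantiates quantized_linear<ctype> and casts BOTH src and
// weight to that ctype (the shared-dtype assumption).
#define typed_quantized_linear(ctype, stype)                    \
  case ScalarType::stype:                                       \
    quantized_linear<ctype>(static_cast<const ctype*>(src),     \
                            static_cast<const ctype*>(weight)); \
    break;

void dispatch(ScalarType dtype, const void* src, const void* weight) {
  switch (dtype) { FORALL_QUANTIZED_TYPES_WITH_INT16(typed_quantized_linear) }
}

int main() {
  int16_t src[1] = {0};
  int16_t weight[1] = {0};
  // Works only because the weights here are also int16; an int8 weight
  // buffer would be reinterpreted incorrectly by this dispatch.
  dispatch(ScalarType::Short, src, weight);
  return 0;
}
```

The change described in the next section replaces the single `ctype` with separate activation and weight types for the W8A16 case.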
This Diff
We fix this by checking for our data types and calling the functions with the correct data types, as in D86538176.
Reviewed By: hsharma35
Differential Revision: D86643471