
Conversation

@RahulC7 commented Nov 20, 2025

Summary:

Context

We continue from D84284794 to add support for 16-bit activations. Note that right now, although the operators already support 16-bit activations, they do so only when the weights are also 16-bit. To support 16-bit activations with 8-bit weights, we need to change the way we template some functions.

Current Behavior

Right now, we compose two macros. The first is the ET_FORALL_JARVIS_QUANTIZED_TYPES_WITH_INT16 macro:

https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/operators.h?lines=22-25

and the second is the function macro (quantized_linear chosen as an example):

https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/quantized_linear_out.cpp?lines=30-41

Together, they expand into a single switch statement that calls the quantized_linear function with the correct template parameter.

However, note that this assumes the input activations and the weights have the same dtype, which no longer holds when we pair 16-bit activations with 8-bit weights.
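
As a rough illustration (hypothetical macro and function names, not the actual Jarvis code), the composition expands to something like:

```cpp
// Sketch only: hypothetical names; namespaces and includes are elided,
// Tensor/ScalarType stand in for the ExecuTorch types.
#define FORALL_QUANTIZED_TYPES(_) \
  _(uint8_t, Byte)                \
  _(int8_t, Char)                 \
  _(int16_t, Short)

template <typename T>
void quantized_linear_(const Tensor& in, const Tensor& weight, Tensor& out);

#define LINEAR_CASE(ctype, dtype)              \
  case ScalarType::dtype:                      \
    quantized_linear_<ctype>(in, weight, out); \
    break;

void quantized_linear_out(const Tensor& in, const Tensor& weight, Tensor& out) {
  // One switch on a single ScalarType: the chosen ctype is used for both
  // the activations and the weights.
  switch (in.scalar_type()) {
    FORALL_QUANTIZED_TYPES(LINEAR_CASE)
    default:
      ET_CHECK_MSG(false, "Unhandled input dtype");
  }
}
```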

This Diff

We fix this by checking for our data types and calling the functions with the correct template parameters, as in D86538176.
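
Schematically (again with hypothetical names, reusing the sketch above rather than the exact code in this diff), the dispatch gains an explicit check for the heterogeneous combination before falling back to the existing single-dtype path:

```cpp
// Sketch only: hypothetical names; the real dispatch lives in the operator
// files touched by this diff.
void quantized_linear_out(const Tensor& in, const Tensor& weight, Tensor& out) {
  if (in.scalar_type() == ScalarType::Short &&
      weight.scalar_type() == ScalarType::Char) {
    // 16-bit activations with 8-bit weights (W8A16).
    quantized_linear_<int16_t, int8_t>(in, weight, out);
    return; // early return so we never fall into the homogeneous paths below
  }
  // Existing behavior: activations and weights share one dtype.
  switch (in.scalar_type()) {
    FORALL_QUANTIZED_TYPES(LINEAR_CASE)
    default:
      ET_CHECK_MSG(false, "Unhandled input dtype");
  }
}
```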

Reviewed By: hsharma35

Differential Revision: D86643471

…for linear (pytorch#15901)

Summary:

# Context
We continue from D84284794 to add support for 16-bit activations. Note that right now, although the operators already support 16-bit activations, they do so only when the weights are also 16-bit. To support 16-bit activations with 8-bit weights, we need to change the way we template some functions.


# Current Behavior
Right now, we compose two macros. The first is the `ET_FORALL_JARVIS_QUANTIZED_TYPES_WITH_INT16` macro:

https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/operators.h?lines=22-25


and the second is the function macro (`quantized_linear` chosen as an example):

https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/quantized_linear_out.cpp?lines=30-41



Together, they expand into a single switch statement that calls the `quantized_linear` function with the correct template parameter.

However, note that this assumes the input activations and the weights have the same dtype, which no longer holds when we pair 16-bit activations with 8-bit weights.

# This Diff
We fix the generic implementation by allowing two template parameters: one for the weights and one for the input activations.
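
Roughly speaking (hypothetical signature, not the exact one in this diff), the generic kernel goes from one template parameter to two so the weight and activation types can differ:

```cpp
// Before (sketch): a single type parameter covers both activations and weights.
template <typename T>
void quantized_linear_(const Tensor& in, const Tensor& weight, Tensor& out);

// After (sketch): separate parameters, e.g. quantized_linear_<int16_t, int8_t>
// for 16-bit activations with 8-bit weights.
template <typename ActT, typename WeightT>
void quantized_linear_(const Tensor& in, const Tensor& weight, Tensor& out);
```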

Reviewed By: hsharma35

Differential Revision: D86538176
Copilot AI review requested due to automatic review settings November 20, 2025 21:56

pytorch-bot bot commented Nov 20, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15928

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit e064f8e with merge base d2c011e:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Nov 20, 2025

meta-codesync bot commented Nov 20, 2025

@RahulC7 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D86643471.

@github-actions

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e., would users of this library care about this change?), please use a label starting with `release notes:`. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot finished reviewing on behalf of RahulC7 November 20, 2025 22:00
Copilot AI left a comment

Pull Request Overview

This PR enables support for 16-bit activations with 8-bit weights (W8A16) in the Cadence Quantizer for Conv and Linear operations. The implementation extends the existing quantization infrastructure to handle heterogeneous quantization types by dispatching to generic implementations when int16 activations are combined with int8 weights.

Key changes include:

  • Added a new CadenceWith16BitConvActivationsQuantizer class that applies 16-bit quantization to Conv1d and Conv2d patterns
  • Extended Conv2d and Linear operators to detect and handle the W8A16 heterogeneous type combination
  • Updated build dependencies to include generic quantization operators for int16 support
  • Added comprehensive test coverage for the new functionality

Reviewed Changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 7 comments.

| File | Description |
| --- | --- |
| `backends/cadence/aot/quantizer/quantizer.py` | Adds new `CadenceWith16BitConvActivationsQuantizer` class to enable 16-bit activations for Conv patterns |
| `backends/cadence/hifi/operators/op_quantized_linear_out.cpp` | Extends both `quantized_linear_out` and `quantized_linear_per_tensor_out` functions to handle W8A16 heterogeneous types |
| `backends/cadence/hifi/operators/op_quantized_conv2d_nchw_out.cpp` | Adds W8A16 type dispatch logic to both NCHW conv2d variants with proper early returns |
| `backends/cadence/hifi/operators/op_quantized_conv2d_nhwc_out.cpp` | Adds W8A16 type dispatch logic to both NHWC conv2d variants with proper early returns |
| `backends/cadence/hifi/operators/targets.bzl` | Updates build configuration to add dependencies for generic quantization operators supporting int16 |
| `backends/cadence/hifi/operators/tests/test_op_quantized_linear_out.cpp` | Adds new test case for int16 activations with int8 weights in quantized linear operations |
| `backends/cadence/hifi/operators/tests/test_op_quantized_conv2d_out.cpp` | Adds test cases for int16 activations with int8 weights in both NCHW and NHWC conv2d operations |
| `backends/cadence/vision/kernels/kernels.cpp` | Modifies memory allocation to add alignment parameter and error logging (includes debug code that should be removed) |


```cpp
    int64_t out_shift,
    int64_t out_zero_point,
    __ET_UNUSED const optional<Tensor>& offset,
    Tensor& out) {
```
Copilot AI Nov 20, 2025

Add a comment before this if-block to explain the purpose, similar to the conv2d implementations. For example: // Handle W8A16 heterogeneous type (int16_t activations, int8_t weights). This improves code consistency and readability.

Suggested change:

```diff
  Tensor& out) {
+ // Handle W8A16 heterogeneous type (int16_t activations, int8_t weights)
```

```cpp
void* allocate_temp_memory(KernelRuntimeContext& ctx, size_t size) {
  Result<void*> temp_mem_res = ctx.allocate_temp(size);
  return temp_mem_res.ok() ? temp_mem_res.get() : nullptr;
  ET_LOG(Info, "yo");
```
Copilot AI Nov 20, 2025

Debug log statement 'ET_LOG(Info, "yo");' should be removed before merging to production. This appears to be a leftover debugging statement.

Suggested change:

```diff
- ET_LOG(Info, "yo");
```

```cpp
  return temp_mem_res.ok() ? temp_mem_res.get() : nullptr;
  ET_LOG(Info, "yo");
  constexpr size_t kAlignment =
      8; // 16-byte alignment for vectorized operations
```
Copilot AI Nov 20, 2025

Comment says "16-byte alignment" but the code sets kAlignment to 8. This inconsistency should be corrected - either update the comment to say "8-byte alignment" or change kAlignment to 16 if 16-byte alignment is actually needed.

Suggested change:

```diff
-     8; // 16-byte alignment for vectorized operations
+     8; // 8-byte alignment for vectorized operations
```

```cpp
      out_zero_point,
      offset,
      out);
}
```
Copilot AI Nov 20, 2025

Missing return statement after handling the int16 case. Without a return, execution will fall through to the subsequent else if checks, potentially executing the wrong code path or triggering an incorrect error message. Add return; after line 295.

Suggested change:

```diff
  }
+ return;
```

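For context, the structure being flagged is roughly the following (a hypothetical sketch, not the actual operator code):

```cpp
// Sketch only: illustrates why the early return matters after the new W8A16 branch.
void quantized_op_out(const Tensor& input, const Tensor& weight, Tensor& out) {
  if (input.scalar_type() == ScalarType::Short &&
      weight.scalar_type() == ScalarType::Char) {
    // ... W8A16 path (int16_t activations, int8_t weights) ...
    return; // without this, control continues into the original dispatch below
  }
  // Original dispatch, which does not know about the W8A16 combination:
  if (input.scalar_type() == ScalarType::Char) {
    // ... homogeneous int8 path ...
  } else {
    ET_CHECK_MSG(false, "Unhandled dtype combination");
  }
}
```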
```diff
      int64_t out_zero_point,
-     __ET_UNUSED const optional<Tensor>& offset,
+     const optional<Tensor>& offset,
      Tensor& out) {
```
Copilot AI Nov 20, 2025

Add a comment before this if-block to explain the purpose, similar to the conv2d implementations. For example: // Handle W8A16 heterogeneous type (int16_t activations, int8_t weights). This improves code consistency and readability.

Suggested change:

```diff
  Tensor& out) {
+ // Handle W8A16 heterogeneous type (int16_t activations and output, int8_t weights)
```

Comment on lines 278 to 280
```cpp
    int64_t out_zero_point,
    __ET_UNUSED const optional<Tensor>& offset,
    Tensor& out) {
```
Copilot AI Nov 20, 2025

The __ET_UNUSED annotations on the ctx parameter (line 270) and offset parameter (line 279) are now incorrect since both are used in the int16 case (lines 285 and 294). Remove the __ET_UNUSED annotations from these parameters.

```cpp
      out_zero_point,
      offset,
      out);
}
```
Copilot AI Nov 20, 2025

Missing return statement after handling the int16 case. Without a return, execution will fall through to the subsequent else if checks, potentially executing the wrong code path or triggering an incorrect error message. Add return; after line 236.

Suggested change:

```diff
  }
+ return;
```
