[GAUDISW-245117] add b2b matmul #770
base: main
```diff
@@ -29,6 +29,17 @@ def __init__(self):
     def forward(self, x, y, **kwargs):
         return torch.matmul(x, y, **kwargs)
 
 
+class B2BMatmul(Matmul):
+    """Specialized alias for batch2block and block2batch matmul operations.
+
+    This class is intentionally kept functionally identical to ``Matmul``.
+    It exists to provide semantic distinction in the codebase (e.g., for
+    patterns that specifically require batch2block and block2batch matmul)
+    and to allow future customization without changing call sites.
+    """
+    def __init__(self):
+        super().__init__()
+
+
 class Softmax(torch.nn.Module):
```

Contributor

Maybe edit the docstring to be more specific: change "back-to-back" to batch2block/block2batch and explain the reasoning for it, namely that it is used by INC to adjust the scale to the needed values of the input tensor, since some of those values are discarded by the 2nd input, which acts as a kind of mask mapping.
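Since `B2BMatmul` is a pure alias of `Matmul`, its value is as a dispatch key: a quantization pass can match on the module type and treat the batch2block/block2batch matmuls differently from ordinary ones. Below is a minimal sketch of that idea. The two class definitions mirror the diff; `pick_scale`, its constants, and the per-type scale logic are hypothetical illustrations based on the review comment (INC adjusting scales because the second operand behaves like a mask mapping), not the actual INC implementation.

```python
import torch


class Matmul(torch.nn.Module):
    """Thin module wrapper around torch.matmul so the op can be patched."""

    def __init__(self):
        super().__init__()

    def forward(self, x, y, **kwargs):
        return torch.matmul(x, y, **kwargs)


class B2BMatmul(Matmul):
    """Alias for batch2block/block2batch matmuls; functionally identical to Matmul."""

    def __init__(self):
        super().__init__()


def pick_scale(mod: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Hypothetical scale selection keyed on module type. The constants are
    # placeholders, not values from the PR or from INC.
    if isinstance(mod, B2BMatmul):
        # For batch2block/block2batch, the second operand acts like a mask
        # mapping, so some elements of x are discarded and the scale may need
        # adjusting relative to a plain matmul (per the review comment).
        return x.abs().amax() / 240.0
    return x.abs().amax() / 448.0


# Usage: identical forward behavior, but distinguishable by type.
mm, b2b = Matmul(), B2BMatmul()
x = torch.randn(4, 8, 8)
assert torch.equal(mm(x, x), b2b(x, x))
print(pick_scale(mm, x), pick_scale(b2b, x))
```

Keeping the subclass empty means call sites that construct `B2BMatmul` today need no changes if behavior is later specialized; only the class body grows.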