⚡️ Speed up method GraphormerGraphAttnBias.forward by 5%
#93
📄 **5% (0.05x) speedup** for `GraphormerGraphAttnBias.forward` in `src/transformers/models/deprecated/graphormer/modeling_graphormer.py`

⏱️ **Runtime:** 10.4 milliseconds → 9.84 milliseconds (best of 60 runs)

📝 Explanation and details
The optimized code achieves a 5% speedup through several key memory and computation optimizations:
**Primary Optimization - Memory-Efficient Tensor Creation:**

- Replaces `attn_bias.clone().unsqueeze(1).repeat(1, self.num_heads, 1, 1)` with `attn_bias.unsqueeze(1).expand(-1, self.num_heads, -1, -1).clone()` (see the sketch below).
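A minimal sketch of the before/after pattern on standalone tensors rather than the actual Graphormer module; the shapes below are illustrative assumptions:

```python
import torch

batch_size, num_heads, n_nodes = 4, 8, 32
attn_bias = torch.randn(batch_size, n_nodes + 1, n_nodes + 1)

# Before: clone() copies the base tensor, then repeat() materializes
# num_heads full copies of it.
old = attn_bias.clone().unsqueeze(1).repeat(1, num_heads, 1, 1)

# After: expand() is a zero-copy view over the head dimension, and a single
# clone() materializes the broadcasted result in one allocation.
new = attn_bias.unsqueeze(1).expand(-1, num_heads, -1, -1).clone()

assert torch.equal(old, new)  # same values, one fewer intermediate copy
```

Both forms produce a `(batch_size, num_heads, N, N)` tensor; the saving comes from skipping the extra base-tensor copy that `clone().repeat()` creates.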
**In-Place Operations for Large Tensors:**

- `masked_fill_` instead of tensor indexing assignment for setting padding values
- `clamp_` for in-place clamping of spatial positions
- `div_` for in-place division in the multi-hop path
- `+=` operators throughout (a sketch of these substitutions follows this list)
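A hedged illustration of the in-place substitutions on toy tensors; the variable names (`graph_attn_bias`, `padding_mask`, `spatial_pos`, `edge_input`) mirror the Graphormer code, but the shapes and the exact masking/division logic here are simplified assumptions:

```python
import torch

batch_size, num_heads, n_nodes, max_dist = 4, 8, 32, 5
graph_attn_bias = torch.randn(batch_size, num_heads, n_nodes + 1, n_nodes + 1)

# masked_fill_: write the padding value directly into the existing buffer
# instead of using an out-of-place indexing assignment.
padding_mask = torch.zeros(batch_size, n_nodes + 1, dtype=torch.bool)
padding_mask[:, -5:] = True  # pretend the last 5 positions are padding
graph_attn_bias.masked_fill_(padding_mask[:, None, None, :], float("-inf"))

# clamp_: clamp spatial positions in place instead of allocating a new tensor.
spatial_pos = torch.randint(0, 40, (batch_size, n_nodes, n_nodes))
spatial_pos.clamp_(0, 20)

# div_: divide the multi-hop edge features in place by a path-length denominator.
edge_input = torch.randn(batch_size, n_nodes, n_nodes, max_dist, num_heads)
denom = spatial_pos.clamp(min=1).float()[..., None, None]
edge_input.div_(denom)
```

Each in-place call mutates its tensor rather than returning a fresh one, removing one allocation and one full-tensor copy per operation.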
**Memory Layout Optimizations:**

- Strategic `.contiguous()` calls to ensure optimal memory stride patterns for subsequent operations

**Fused Operations:**

- `torch.sum().div_()` instead of separate sum and division operations
- `+=` for graph attention bias updates instead of creating intermediate tensors (see the sketch after this list)
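A rough sketch of the layout and fusion ideas, again on standalone tensors; the particular permutation and the averaging of the multi-hop edge bias by path length are assumptions about what the fused `torch.sum().div_()` replaces, not a copy of the real forward pass:

```python
import torch

batch_size, n_nodes, max_dist, num_heads = 4, 32, 5, 8

# .contiguous(): permuting leaves a strided view; making it contiguous gives
# the following operations a dense memory layout to iterate over.
edge_bias = torch.randn(batch_size, n_nodes, n_nodes, num_heads)
edge_bias = edge_bias.permute(0, 3, 1, 2).contiguous()  # (B, H, N, N)

# Fused reduction: sum over the hop dimension and divide in place,
# instead of building a separate averaged tensor.
edge_input = torch.randn(max_dist, batch_size * n_nodes * n_nodes, num_heads)
path_len = torch.randint(1, max_dist + 1, (batch_size * n_nodes * n_nodes, 1)).float()
edge_mean = torch.sum(edge_input, dim=0).div_(path_len)  # (B*N*N, H)

# Accumulate into the attention bias with += on a slice view rather than
# allocating a new bias tensor for the result.
graph_attn_bias = torch.zeros(batch_size, num_heads, n_nodes + 1, n_nodes + 1)
edge_term = edge_mean.reshape(batch_size, n_nodes, n_nodes, num_heads).permute(0, 3, 1, 2)
graph_attn_bias[:, :, 1:, 1:] += edge_term
```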
These optimizations are most effective for larger graphs and batch sizes (as shown in test cases with 50+ nodes achieving 10%+ speedups), where memory allocation overhead and tensor copying costs become more significant. The improvements target the most computationally expensive parts: spatial position encoding, multi-hop edge processing, and attention bias construction.
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-GraphormerGraphAttnBias.forward-mhhaeuc9` and push.