⚡️ Speed up function repeat_kv by 7%
#96
Open
📄 7% (0.07x) speedup for `repeat_kv` in `src/transformers/models/hunyuan_v1_dense/modeling_hunyuan_v1_dense.py`
⏱️ Runtime: 3.59 milliseconds → 3.34 milliseconds (best of 121 runs)

📝 Explanation and details
The optimized version replaces the tensor slicing operation `hidden_states[:, :, None, :, :]` with `hidden_states.unsqueeze(2)` and splits the expand operation into a separate step.

Key optimizations:

- Replaced slicing with unsqueeze: the original code uses `[:, :, None, :, :]` slicing to add a dimension, which requires PyTorch to compute new strides and a new memory layout through its indexing machinery. The optimized version uses `unsqueeze(2)`, a more direct operation that PyTorch can handle more efficiently internally.
- Separated expand from the indexing chain: instead of chaining the slicing and expand operations on one line, the optimized version performs the unsqueeze first and the expand as a separate step, so PyTorch handles each view operation independently (see the sketch after this list).
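For context, here is a minimal sketch of the two patterns being compared, assuming the standard `repeat_kv` helper layout `(batch, num_key_value_heads, seq_len, head_dim)` used across transformers model files; the exact code in `modeling_hunyuan_v1_dense.py` may differ in details.

```python
import torch


def repeat_kv_original(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Original pattern: slice-based dimension insertion chained with expand."""
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(
        batch, num_key_value_heads, n_rep, slen, head_dim
    )
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)


def repeat_kv_optimized(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Optimized pattern: unsqueeze(2) first, then expand as a separate step."""
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states.unsqueeze(2)  # pure view op, no slice indexing
    hidden_states = hidden_states.expand(batch, num_key_value_heads, n_rep, slen, head_dim)
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
```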
Why this is faster: `unsqueeze()` is a cheaper tensor view operation than slice-based dimension insertion.

Performance characteristics:
The optimization shows consistent 7-44% speedups across test cases, with particularly strong gains for:
The optimization maintains identical functionality while leveraging PyTorch's internal optimizations for tensor view operations.
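A rough way to reproduce this kind of timing comparison, reusing the two sketch functions above; the shape, `n_rep`, and run counts here are illustrative, not the ones used to produce the numbers in this report.

```python
import timeit

import torch


def bench(fn, hidden_states, n_rep, number=50):
    fn(hidden_states, n_rep)  # warm-up call
    # Best-of-5 timing, averaged per call, mirroring a "best of N runs" style measurement.
    runs = timeit.repeat(lambda: fn(hidden_states, n_rep), number=number, repeat=5)
    return min(runs) / number


kv = torch.randn(2, 8, 512, 128)  # (batch, num_kv_heads, seq_len, head_dim), illustrative
for name, fn in (("original", repeat_kv_original), ("optimized", repeat_kv_optimized)):
    print(f"{name}: {bench(fn, kv, n_rep=4) * 1e6:.1f} µs per call")
```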
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
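The generated test suite itself is not reproduced here; an equivalence check of the kind such a suite would contain, again reusing the two sketch functions above, might look like this:

```python
import torch


def test_repeat_kv_variants_match():
    # Check shapes and exact values across a few shapes and repetition factors,
    # including the n_rep == 1 early-return path.
    for batch, kv_heads, slen, head_dim in [(1, 1, 1, 8), (2, 4, 16, 32), (3, 8, 7, 64)]:
        for n_rep in (1, 2, 4):
            hidden_states = torch.randn(batch, kv_heads, slen, head_dim)
            expected = repeat_kv_original(hidden_states, n_rep)
            actual = repeat_kv_optimized(hidden_states, n_rep)
            assert actual.shape == (batch, kv_heads * n_rep, slen, head_dim)
            assert torch.equal(actual, expected)
```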
To edit these changes, run `git checkout codeflash/optimize-repeat_kv-mhjpvyp1` and push.