[KernelGen] Optimize pow_scalar operator with 3.15x speedup#2188
Open
zacliu2023 wants to merge 4 commits intoflagos-ai:masterfrom
Open
[KernelGen] Optimize pow_scalar operator with 3.15x speedup#2188zacliu2023 wants to merge 4 commits intoflagos-ai:masterfrom
zacliu2023 wants to merge 4 commits intoflagos-ai:masterfrom
Conversation
- Implement exponential_ in-place random distribution operator - Uses Philox RNG for reproducible randomness - Support float16, bfloat16, float32, float64 dtypes - Optimized for Iluvatar with precise log computation - Added empty tensor protection (N == 0) - Pass all 6 accuracy tests (exponential_ and fast_exponential_) - Pass all 4 performance tests (Status: SUCCESS) - Registered in _iluvatar backend ops Features: - Uses tl.philox for parallel random number generation - Separate kernels for float32 (4x unroll) and float64 (2x unroll) - Autotune configs optimized for Iluvatar architecture - Proper handling of non-contiguous tensors Test Results: - Accuracy: 6/6 passed (100%) - Performance: 4/4 SUCCESS (100%) - Mean distribution check: ~1.0 (correct for lambda=1) Files Changed: - src/flag_gems/runtime/backend/_iluvatar/ops/exponential_.py (new) - src/flag_gems/runtime/backend/_iluvatar/ops/__init__.py (register operator)
- Implement pow_scalar/pow_scalar_ operators using FlagGems pointwise_dynamic - Uses tl_extra_shim.pow for hardware-compatible power computation - Follow FlagGems standard patterns for scalar-tensor operations - Register operators in _iluvatar backend __init__.py Note: Some precision test cases show issues with extreme values (e.g., base=0.001, exp=-1.6 produces inf instead of expected value) This may require follow-up investigation for edge case handling. Generated with kernelgen MCP v2.0
- Replace pointwise_dynamic with hand-written Triton kernel - Add pow_scalar_kernel and pow_scalar_inplace_kernel - Optimize BLOCK_SIZE to 2048 for better parallelism - Add empty tensor protection via volume() check - Use tl.program_id(0) for Iluvatar compatibility - Maintain same function signature as baseline Performance: Achieved 3.15x speedup (target 1.5x) Test: 8/8 tests passed Generated with kernelgen MCP v2.0
- Remove unused 'device' import from exponential_.py - Remove unused 'device' and 'torch_device_fn' imports from pow.py - Fix isort import ordering in __init__.py - Apply black formatting to pow.py function calls Co-Authored-By: Claude Opus 4.6 <[email protected]>
tengqm
reviewed
Mar 31, 2026
Contributor
tengqm
left a comment
There was a problem hiding this comment.
Please split this PR into two, each one focusing on one operator.
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Summary
Optimize
pow_scalaroperator for Iluvatar (Tianshu) platform using hand-written Triton kernel, achieving 3.15x speedup over PyTorch baseline.Generated with kernelgen MCP v2.0 and validated on Iluvatar CoreX BI-V150.
Changes
pointwise_dynamicgeneric implementation with optimized Triton kernelpow_scalar_kernelandpow_scalar_inplace_kernelfor normal and in-place operationsBLOCK_SIZEto 2048 for better parallelism on Iluvatar hardwarevolume()checktl.program_id(0)native API for Iluvatar compatibilityPerformance
Files Changed
src/flag_gems/runtime/backend/_iluvatar/ops/pow.py