
Conversation


ZT-AIA commented Dec 5, 2025

What this PR does / why we need it?

Does this PR introduce any user-facing change?

How was this patch tested?

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces new Triton-based operators, fused_qkvzba_split_reshape and rope_forward_triton, to optimize the qwen3_next model on Ascend hardware. The changes add a Triton kernel that fuses the QKVZBA split and reshape, plus a Triton-based RoPE implementation. The review identifies two critical issues: a removed return-value assignment in rotary_embedding.py that may break graph fusion, and an import typo in patch_qwen3_next.py that will cause an ImportError at runtime.
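
To make the fused-split idea concrete, here is a minimal Triton sketch that reads each token's packed projection row once and scatters it into separate contiguous outputs. All names, shapes, and the plain Q/K/V layout are assumptions for illustration; this is not the PR's kernel:

import torch
import triton
import triton.language as tl

@triton.jit
def _split_qkv_kernel(qkv_ptr, q_ptr, k_ptr, v_ptr,
                      q_dim, k_dim, v_dim, BLOCK: tl.constexpr):
    # One program instance copies one token's packed row into Q/K/V.
    row = tl.program_id(0)
    total = q_dim + k_dim + v_dim
    offs = tl.arange(0, BLOCK)
    base = qkv_ptr + row * total
    q = tl.load(base + offs, mask=offs < q_dim)
    tl.store(q_ptr + row * q_dim + offs, q, mask=offs < q_dim)
    k = tl.load(base + q_dim + offs, mask=offs < k_dim)
    tl.store(k_ptr + row * k_dim + offs, k, mask=offs < k_dim)
    v = tl.load(base + q_dim + k_dim + offs, mask=offs < v_dim)
    tl.store(v_ptr + row * v_dim + offs, v, mask=offs < v_dim)

def split_qkv(qkv: torch.Tensor, q_dim: int, k_dim: int, v_dim: int):
    # qkv: [num_tokens, q_dim + k_dim + v_dim], contiguous.
    num_tokens = qkv.shape[0]
    q = torch.empty(num_tokens, q_dim, device=qkv.device, dtype=qkv.dtype)
    k = torch.empty(num_tokens, k_dim, device=qkv.device, dtype=qkv.dtype)
    v = torch.empty(num_tokens, v_dim, device=qkv.device, dtype=qkv.dtype)
    BLOCK = triton.next_power_of_2(max(q_dim, k_dim, v_dim))
    _split_qkv_kernel[(num_tokens,)](qkv, q, k, v, q_dim, k_dim, v_dim, BLOCK=BLOCK)
    return q, k, v

Compared with chained torch.split/view/cat calls, a fused kernel makes a single pass over memory; judging by the operator name, the PR's fused_qkvzba_split_reshape_cat applies the same idea to qwen3_next's packed QKVZBA layout and fuses the reshape and concatenation as well.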

rotary_embedding.py:
key = key.contiguous().view(1, key.shape[0], -1, self.head_size)
# If cos and sin are generated outside, use npu_apply_rotary_pos_emb to avoid redundant calculation.
# This method requires head_size and rotary_dim equal 128 and neox_style is True
torch_npu.npu_apply_rotary_pos_emb(query, key, self.cos, self.sin)

critical

The previous implementation assigned the result of torch_npu.npu_apply_rotary_pos_emb back to query and key. A comment in the original code indicated this was crucial for graph fusion to work correctly, even though the function modifies the tensors in-place. That assignment has been removed. If it is no longer needed, please add a comment explaining why; otherwise this could be a critical bug, and the assignment should be restored.

Suggested change
torch_npu.npu_apply_rotary_pos_emb(query, key, self.cos, self.sin)
query, key = torch_npu.npu_apply_rotary_pos_emb(query, key, self.cos, self.sin)
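
For background on the fused call quoted above: npu_apply_rotary_pos_emb applies neox-style RoPE to query and key (the code comment notes it requires head_size and rotary_dim equal to 128 with neox_style=True). A plain-PyTorch reference for that rotation, as an illustrative sketch rather than vLLM's or torch_npu's implementation, looks like:

import torch

def rope_neox_reference(x: torch.Tensor, cos: torch.Tensor,
                        sin: torch.Tensor) -> torch.Tensor:
    # x: [..., head_size]. cos/sin must broadcast against x, with the
    # half-size frequency table repeated across both halves of the last
    # dim (neox style rotates half-pairs, not interleaved lanes).
    x1, x2 = x.chunk(2, dim=-1)
    rotate_half = torch.cat((-x2, x1), dim=-1)
    return x * cos + rotate_half * sin

The NPU op computes this in one fused call, in-place, for both query and key; per the review comment above, reassigning its return values is what reportedly kept the op wired into the fused graph.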

patch_qwen3_next.py:
from vllm.model_executor.models.qwen3_next import Qwen3NextGatedDeltaNet
from vllm_ascend.ops.triton.fla.fused_qkvzba_split_reshape import fused_qkvzba_split_reshape_cat
from vllm.triton_utils import tl, triton
from vll.config import (CUDAGraphMode, get_current_vllm_config)

critical

There is a typo in the import path. It should be vllm.config instead of vll.config. This will cause an ImportError at runtime.

Suggested change
from vll.config import (CUDAGraphMode, get_current_vllm_config)
from vllm.config import (CUDAGraphMode, get_current_vllm_config)


github-actions bot commented Dec 5, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message and fill in the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

ZT-AIA changed the title from "add qwen3_next ops: fused_qkvzba_split_reshape and rope_forward_triton" to "add qwen3_next ops: fused_qkvzba_split_reshape" Dec 8, 2025
ZT-AIA closed this Dec 8, 2025