Conversation

@raindaywhu raindaywhu commented Dec 6, 2025

What this PR does / why we need it?

Does this PR introduce any user-facing change?

How was this patch tested?

mercykid and others added 5 commits December 4, 2025 17:09
github-actions bot commented Dec 6, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a mix_placement feature for Mixture-of-Experts models, primarily targeting DeepseekV2/V3 on Ascend hardware. The changes involve modifications to configuration, expert selection logic, and weight loading patches. While the overall direction seems correct for enabling this new functionality, I've identified a critical bug in the expert group calculation within moe_mlp.py that will lead to incorrect behavior. I've also pointed out a high-severity maintainability issue related to a hardcoded magic number in the expert selection logic. Please address these points to ensure correctness and code quality.

Comment on lines 130 to 131
group_diff = torch.diff(group_list)
new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
Contributor

critical

There is a bug in the calculation of new_group. When group_list is a cumulative sum of token counts, this logic incorrectly calculates the token count for the first group. For example, if group_list is [10, 25, 30], the token counts per group are [10, 15, 5]. The current code produces [15, 15, 5], using the second group's count for the first group. Additionally, this will raise an IndexError if group_list has fewer than two elements.

The correct approach is to combine the first element of group_list with the differences of the rest of the list.

Suggested change
group_diff = torch.diff(group_list)
new_group = torch.cat([group_diff[0].unsqueeze(0), group_diff], dim=0)
group_diff = torch.diff(group_list)
new_group = torch.cat([group_list[:1], group_diff])
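
For reference, a minimal, self-contained sketch of the corrected computation, assuming group_list is a 1-D cumulative token-count tensor (the concrete values below are illustrative):

import torch

# group_list holds cumulative token counts per expert group, e.g. from torch.cumsum.
group_list = torch.tensor([10, 25, 30])

group_diff = torch.diff(group_list)                  # tensor([15, 5])
new_group = torch.cat([group_list[:1], group_diff])  # tensor([10, 15, 5])
assert new_group.tolist() == [10, 15, 5]

# A single-element group_list also works: torch.diff returns an empty tensor,
# so new_group is just the first (and only) count, with no IndexError.
single = torch.tensor([7])
assert torch.cat([single[:1], torch.diff(single)]).tolist() == [7]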

device=topk_ids.device)

pad_shared_expert_weights = torch.full((topk_weights.shape[0], 1),
                                       0.4,
Contributor

high

The value 0.4 is hardcoded as the weight for the padded shared expert. This is a magic number which harms readability and maintainability. It should be defined as a named constant with a descriptive name at the top of the file (e.g., SHARED_EXPERT_DEFAULT_WEIGHT), or passed as a parameter if it's meant to be configurable.
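
As an illustration only (the constant name and the dummy tensor shapes are assumptions, not the project's code), one possible shape of the fix:

import torch

# Module-level constant replacing the magic number; the name is illustrative.
SHARED_EXPERT_DEFAULT_WEIGHT = 0.4

# Dummy tensors standing in for the real top-k routing outputs.
topk_weights = torch.rand(8, 4)
topk_ids = torch.zeros(8, 4, dtype=torch.int64)

pad_shared_expert_weights = torch.full((topk_weights.shape[0], 1),
                                       SHARED_EXPERT_DEFAULT_WEIGHT,
                                       device=topk_ids.device)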

self.expert_load_balancer.check_expert_map_tensor()
self.global_redundant_expert_num = (
    self.expert_load_balancer.get_global_redundant_expert_num())
# self.global_redundant_expert_num = (
Contributor

Please remove this unused code instead of commenting it out.

Comment on lines +517 to +463
if self._shared_experts is None:
    shared_out = None
else:
    shared_out = self._shared_experts(hidden_states)
Contributor

This can be condensed into a one-liner: shared_out = self._shared_experts(hidden_states) if self._shared_experts is not None else None.

fused_moe_out = self.experts(
    hidden_states=hidden_states, router_logits=router_logits
)
ascend_config = get_ascend_config()
Contributor

ascend_config is assigned but never used here; please remove it.

Comment on lines +150 to +152
if self.shared_experts is None:
    assert shared_output is None
Contributor

Using a bare assert doesn't provide enough context for debugging if this fails. Please replace this with an explicit check that raises an exception with a descriptive error message.
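
As a sketch of what that could look like (the exception type and message below are illustrative, not a required form):

if self.shared_experts is None:
    if shared_output is not None:
        raise ValueError(
            "shared_output is expected to be None when self.shared_experts "
            f"is None, but got {type(shared_output).__name__}"
        )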

@github-actions

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@github-actions

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@github-actions

This pull request has conflicts, please resolve those before we can evaluate the pull request.
