
[Performance, Hardware] MoE weights padding to AMD MI300x GPUs #1836

Merged
5 commits merged into sgl-project:main on Oct 30, 2024

Conversation

@HaiShaw (Collaborator) commented on Oct 29, 2024

Motivation

Pad MoE weights along the last dimension to minimize memory channel contention (AMD Instinct GPUs only).
Tests show an approximate performance boost of +2.2% for prefill and +3.0% for decode with Grok-1 at the b32/i1024/o512 setting (batch size 32, input length 1024, output length 512).
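
For illustration only, here is a minimal sketch of the pad-and-slice trick this kind of change relies on (not the PR's exact code; the 128-element pad width and the helper name pad_last_dim are assumptions): the last dimension is zero-padded and then sliced back off, so the tensor keeps its logical shape while its storage row stride grows, spreading accesses across more memory channels.

# Hypothetical sketch, not the PR's code; the 128-element pad width is an assumption.
import torch
import torch.nn.functional as F


def pad_last_dim(w: torch.Tensor, pad: int = 128) -> torch.Tensor:
    # Zero-pad the last dim, then slice the padding off again: the logical
    # shape is unchanged, but the underlying row stride becomes in_dim + pad,
    # which spreads consecutive rows across more memory channels.
    return F.pad(w, (0, pad), mode="constant", value=0.0)[..., :-pad]


# Example with a fused (num_experts, 2 * intermediate_dim, hidden_dim) w13 weight.
w13 = torch.randn(8, 4096, 2048, dtype=torch.float16)
w13_padded = pad_last_dim(w13)
assert w13_padded.shape == w13.shape
assert w13_padded.stride(1) == w13.stride(1) + 128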

Modifications

As mentioned above, the changes are in fused_moe.py and layer.py.
To enable this feature, set the binary flag MOE_PADDING=1 on the command line, or export MOE_PADDING=1 in the shell.

Checklist

  • [x] Format your code according to the Contributor Guide.
  • [x] Add unit tests as outlined in the Contributor Guide.
  • [x] Update documentation as needed, including docstrings or example tutorials.

python/sglang/srt/layers/fused_moe/__init__.py (outdated review thread, resolved)
python/sglang/srt/layers/fused_moe/layer.py (outdated review thread, resolved)
@merrymercy (Contributor) commented on Oct 30, 2024

Please fix the CI.

@HaiShaw (Collaborator, Author) commented on Oct 30, 2024

@merrymercy Fixed the CI just now. Thanks!

@HaiShaw requested a review from merrymercy on October 30, 2024 at 05:30
@@ -572,6 +588,18 @@ def process_weights_after_loading(self, layer: Module) -> None:
start += shard_size

layer.w13_scale = torch.nn.Parameter(max_w13_scales, requires_grad=False)
# On ROCm, apply weight padding (to reduce memory channel contention) only if MOE_PADDING is set
if is_hip() and bool(int(os.getenv("MOE_PADDING", "0"))):
@merrymercy (Contributor) commented:
Move all is_hip checks under a single branch, e.g., L555.

@HaiShaw (Collaborator, Author) replied on Oct 30, 2024:
@merrymercy Understood. Given the order of the data processing, I intend to keep the dummy padding at the very end, to avoid error-prone interactions with the intervening normalize_, _dequantize, _fp8_quant, etc., and to keep the code easier to read.
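
For readers following along, a rough sketch of what the body of the guarded branch above could look like (an assumption, not the PR's code: the helper name _maybe_pad_moe_weights, the w13_weight/w2_weight attribute names, and the 128-element pad width are all hypothetical):

import torch
import torch.nn.functional as F


def _maybe_pad_moe_weights(layer: torch.nn.Module, pad: int = 128) -> None:
    # Hypothetical helper: re-register the fused expert weights with a padded
    # storage stride while keeping their logical shapes intact.
    for name in ("w13_weight", "w2_weight"):
        w = getattr(layer, name).data
        padded = F.pad(w, (0, pad), mode="constant", value=0.0)[..., :-pad]
        setattr(layer, name, torch.nn.Parameter(padded, requires_grad=False))
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # release the unpadded copies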

@merrymercy merged commit 5f65e2b into sgl-project:main on Oct 30, 2024
11 of 13 checks passed