Conversation

@Edwardf0t1 Contributor

What does this PR do?

Type of change: Bug fix

Overview: The Llama-4-Scout-17B-16E-Instruct model uses Llama4TextExperts, which stores expert weights in a BMM (batch matrix multiply) layout: (num_experts, input_dim, output_dim). This differs from standard MoE models, whose experts are separate linear layers with (output_dim, input_dim) weights. The FP8_PC_PT (FP8 per-channel per-token) quantization path did not handle this layout, causing shape mismatches during export.
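
To make the mismatch concrete, here is a minimal sketch of how per-channel FP8 scales differ between the two layouts (illustrative only; the function name and reduction dims are assumptions for this example, not the actual hf_ptq.py code):

    import torch

    def per_channel_fp8_scale(weight: torch.Tensor) -> torch.Tensor:
        # Hypothetical helper: compute one FP8 scale per output channel.
        fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn
        if weight.dim() == 2:
            # Standard linear weight (output_dim, input_dim): reduce over the
            # input dim -> one scale per output channel, shape (output_dim,).
            amax = weight.abs().amax(dim=-1)
        else:
            # BMM expert weight (num_experts, input_dim, output_dim): output
            # channels sit on the last dim, so the reduction runs over dim=1
            # and the scales come out 2-D, shape (num_experts, output_dim).
            amax = weight.abs().amax(dim=1)
        return amax.float() / fp8_max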

Usage

python3 hf_ptq.py --pyt_ckpt_path /home/scratch.omniml_data_2/models/Llama-4-Scout-17B-16E-Instruct --qformat fp8_pc_pt --export_path /home/scratch.omniml_data_2/zhiyuc/checkpoints/llama4-scout-fp8_pc_pt --trust_remote_code

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: No
  • Did you add or update any necessary documentation?: No
  • Did you update Changelog?: No

Additional Information

@Edwardf0t1 requested a review from a team as a code owner on November 5, 2025 23:09
@Edwardf0t1 requested a review from sugunav14 on November 5, 2025 23:09
@copy-pr-bot bot commented on November 5, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

@codecov bot commented on November 5, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.45%. Comparing base (a703e22) to head (5078573).
⚠️ Report is 5 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #515   +/-   ##
=======================================
  Coverage   74.45%   74.45%           
=======================================
  Files         182      182           
  Lines       18250    18250           
=======================================
  Hits        13588    13588           
  Misses       4662     4662           

@Edwardf0t1 force-pushed the zhiyu/fix-bmm-moe-fp8-pc-pt-export branch from 4bb85d1 to fbd5417 on November 6, 2025 07:54
@cjluo-nv Collaborator

Could you share which models you have tested?

@Edwardf0t1 force-pushed the zhiyu/fix-bmm-moe-fp8-pc-pt-export branch from 8d7fe0b to b4b6d9c on November 20, 2025 19:00
@Edwardf0t1 Contributor (Author)

> Could you share which models you have tested?

Llama-4 Scout has been tested.

@meenchen Contributor left a review comment

LGTM. Will defer to @cjluo-nv for approval.

        return (weight / weights_scaling_factor[:, None, None]).to(torch.float8_e4m3fn)
    elif weights_scaling_factor.dim() == 2:
        # Per-channel scaling: check which dimension matches
        if weights_scaling_factor.shape[0] != weight.shape[0]:
Collaborator

Can we just do an assert here instead, for simplification? Same for line 794.
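
For reference, the assert-based simplification being suggested might look something like this (a sketch of the idea, not the merged code; the exact expected orientation depends on the surrounding function):

    # Fail fast on an unexpected scale orientation instead of branching on it.
    assert weights_scaling_factor.shape[0] == weight.shape[0], (
        f"Expected scaling factors with leading dim {weight.shape[0]}, "
        f"got {tuple(weights_scaling_factor.shape)}"
    )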

@Edwardf0t1 merged commit a5025a2 into main on November 22, 2025
27 checks passed
@Edwardf0t1 deleted the zhiyu/fix-bmm-moe-fp8-pc-pt-export branch on November 22, 2025 00:33