
Conversation

@intervitens (Contributor) commented Jun 12, 2025

This PR adds support for Qwen3 MoE (30B-A3B and 235B-A22B) models. Loss looked reasonable from a simple test with 30B-A3B on the Alpaca dataset.

TODO:

  • Tensor/Expert parallel
  • Test 235B model
  • Verify loss curves against HF implementation
  • LoRA support
  • Documentation
  • Tests

@pytorch-bot commented Jun 12, 2025

🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2820

Comment on lines +218 to +227
if "experts.0" in key:
new_key = get_mapped_key_moe(key, _FROM_HF)
converted_state_dict[new_key] = torch.stack(
[
state_dict[str(i).join(key.rsplit("0", 1))].T
for i in range(num_experts)
]
)
elif "experts" in key:
continue
@intervitens (Contributor, Author) commented:

This is kinda hacky and slow, but stacking experts into a single tensor lets us use the existing MoE implementation and _grouped_mm for a very significant speed boost.
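
For intuition, a minimal standalone sketch of why the stacked layout pays off (shapes and names here are illustrative, not the PR's actual code): once every expert's weight lives in one [num_experts, dim_in, dim_out] tensor, the per-expert matmuls collapse into a single batched GEMM instead of a Python loop over experts.

    import torch

    num_experts, dim_in, dim_out, tokens_per_expert = 8, 64, 128, 16

    # Stacked layout: every expert's weight in a single tensor.
    w = torch.randn(num_experts, dim_in, dim_out)
    # Tokens already routed and grouped per expert.
    x = torch.randn(num_experts, tokens_per_expert, dim_in)

    # Loop version: one small matmul per expert (many kernel launches).
    out_loop = torch.stack([x[e] @ w[e] for e in range(num_experts)])

    # Stacked version: one batched GEMM over all experts.
    out_batched = torch.bmm(x, w)

    torch.testing.assert_close(out_loop, out_batched)

A grouped GEMM such as _grouped_mm generalizes this further by allowing a different number of tokens per expert, which is what makes the stacked checkpoint layout worth the conversion cost.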

Contributor commented:

I found this to be prohibitively slow for larger models when testing with DeepSeek. I ended up following the torchtitan approach which uses a nn.ModuleDict for storing expert weights and grouping activated experts on-the-fly for grouped gemm.
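
Roughly, that approach might look like the following sketch (names and signatures are assumed for illustration, not torchtitan's actual API): experts stay as separate modules, and only the weights of the experts activated for the current batch are stacked right before the batched/grouped GEMM.

    import torch
    from torch import nn

    class OnTheFlyExperts(nn.Module):
        # Hypothetical sketch: keep experts separate, stack activated ones at runtime.
        def __init__(self, num_experts: int, dim: int, hidden_dim: int):
            super().__init__()
            self.experts = nn.ModuleDict(
                {str(i): nn.Linear(dim, hidden_dim, bias=False) for i in range(num_experts)}
            )

        def forward(self, x: torch.Tensor, active_ids: list[int]) -> torch.Tensor:
            # x: [num_active, tokens_per_expert, dim], already grouped by expert.
            # Stack only the activated experts' weights for one batched GEMM.
            w = torch.stack([self.experts[str(i)].weight.t() for i in active_ids])
            return torch.bmm(x, w)  # [num_active, tokens_per_expert, hidden_dim]

The trade-off: no expensive stacking at checkpoint-conversion time, at the cost of a small gather/stack on every forward pass.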

@intervitens (Contributor, Author) commented Jun 12, 2025:

As far as I can see, torchtitan also needs to stack expert weights to use grouped gemm; they just do it at a slightly later point. They also don't yet support grouped gemm for training.

Contributor commented:

Oh you're totally right - I was thinking of EP (expert parallel).
If this doesn't take ages to convert the state dict it may be fine; otherwise only stacking the activated experts at runtime makes more sense to me.

self.selected_experts_indices = selected_experts_indices
# top_scores /= top_scores.sum(dim=-1, keep_dim=True).to(x.dtype)
if self.norm_topk_prob:
    top_scores /= top_scores.sum(dim=-1, keepdim=True).to(x.dtype)
@SalmanMohammadi (Contributor) commented Jun 12, 2025:
YMMV but I ended up having to do something like

            denominator = top_scores.sum(dim=-1, keepdim=True) + 1e-20
            top_scores /= denominator
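
For reference, the failure mode the epsilon guards against is easy to reproduce in isolation (illustrative values, not actual router outputs): if all top-k scores underflow to zero, the unguarded normalization divides zero by zero and produces NaNs.

    import torch

    top_scores = torch.zeros(2)  # e.g. scores that underflowed upstream

    # Unguarded: 0 / 0 -> NaN, which then poisons the rest of the forward pass.
    print(top_scores / top_scores.sum(dim=-1, keepdim=True))            # tensor([nan, nan])

    # With the epsilon: degrades gracefully to zeros instead.
    print(top_scores / (top_scores.sum(dim=-1, keepdim=True) + 1e-20))  # tensor([0., 0.])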

@dz1iang commented Aug 6, 2025

@intervitens Hi, does it support training with fp8-compatible checkpoints?

@dz1iang commented Aug 13, 2025

Hello, I see the training process prompts: "Saving Qwen3 MoE adapter weights to PEFT format is not supported, saving to torchtune format instead."
May I ask how to obtain the Hugging Face (HF) checkpoint? Is there any code example for reference?
Additionally, I only set lora_attn_modules: ['q_proj', 'v_proj', 'output_proj'], apply_lora_to_mlp: False, and apply_lora_to_output: False. Can the tune_to_peft_adapter_weights logic be used in this case? Thank you.
