Training ops kernels: Speeding up the Llama-based MoE architectures #6734
base: master
Conversation
This PR adds several training ops kernels to speed up the training of Llama-based MoE architectures.
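
The PR description does not enumerate the individual kernels in this excerpt. As a rough illustration of the kind of fusion such kernels typically perform, the sketch below shows a hypothetical fused SwiGLU kernel (the gated activation used in Llama-style MLP/expert blocks), which combines the SiLU activation and the elementwise gate product into a single pass. All names here (`fused_swiglu`, `launch_fused_swiglu`) are illustrative assumptions, not the actual kernels in this PR.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hypothetical fused SwiGLU kernel: out[i] = SiLU(gate[i]) * up[i].
// Fusing the activation with the elementwise product saves one full
// read/write pass over the intermediate activations of each expert MLP.
__global__ void fused_swiglu(const float* __restrict__ gate,
                             const float* __restrict__ up,
                             float* __restrict__ out,
                             int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float g    = gate[i];
        float silu = g / (1.0f + expf(-g));  // SiLU(x) = x * sigmoid(x)
        out[i]     = silu * up[i];
    }
}

// Hypothetical launch helper: one thread per element.
void launch_fused_swiglu(const float* gate, const float* up, float* out,
                         int n, cudaStream_t stream)
{
    const int block = 256;
    const int grid  = (n + block - 1) / block;
    fused_swiglu<<<grid, block, 0, stream>>>(gate, up, out, n);
}
```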