
Benchmark HF optimum-executorch #11450


Open: guangy10 wants to merge 7 commits into main from optimum_et_benchmark
Conversation


@guangy10 guangy10 commented Jun 6, 2025

Benchmark LLMs from optimum-executorch. With all the work recently happening in optimum-executorch, we are able to boost the out-of-the-box performance. Putting these models on the benchmark infra lets us gather perf numbers and understand the remaining perf gaps relative to the in-house models generated via export_llama.

We are able to do an apples-to-apples comparison for the CPU backend by introducing quantization, custom SDPA, and custom KV cache to native Hugging Face models in optimum-executorch: hf_xnnpack_custom_spda_kv_cache_8da4w represents the recipe used by optimum-et, and et_xnnpack_custom_spda_kv_cache_8da4w is the counterpart for etLLM.
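For context, here is a rough sketch of how a .pte in the hf_xnnpack_custom_spda_kv_cache_8da4w style might be produced with the optimum-executorch CLI. The model id and the custom-op/quantization flags below are my assumptions, not taken from this PR; check the optimum-executorch README for the exact option names in the version you install.

```bash
# Sketch only: export a Hugging Face LLM to a .pte with the XNNPACK recipe.
# The flags --use_custom_sdpa, --use_custom_kv_cache, and --qlinear are
# assumptions about the optimum-executorch CLI; verify them in its README.
pip install optimum-executorch

optimum-cli export executorch \
  --model "Qwen/Qwen3-0.6B" \
  --task "text-generation" \
  --recipe "xnnpack" \
  --use_custom_sdpa \
  --use_custom_kv_cache \
  --qlinear \
  --output_dir ./qwen3_hf_xnnpack_8da4w
```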

Here are the benchmark jobs in our infra:

Note there may be failures when running optimum-et models on-device due to lack of support for HF tokenizers in the benchmark apps. I had to stop packing tokenizer.json into the .zip in order to unblock collecting raw latency on the forward() call.
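For reference, dropping the tokenizer from an already-packed bundle can be done in place; the archive name below is a placeholder, not the actual artifact name produced by the benchmark workflow.

```bash
# Illustrative only: remove tokenizer.json from the packed benchmark bundle so
# the app falls back to collecting raw forward() latency. "model_bundle.zip"
# is a placeholder name.
zip -d model_bundle.zip tokenizer.json
```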


pytorch-bot bot commented Jun 6, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11450

Note: Links to docs will display an error until the docs builds have been completed.

❌ 6 New Failures, 1 Pending

As of commit b25c0d2 with merge base cbd3874:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


guangy10 commented Jun 7, 2025

@huydhn In the Apple workflow, even though I specified the Python version as "3.11", it still installs Python 3.13. Then, when trying to pip install executorch, it fails with no package found, because we only publish wheels for Python 3.10, 3.11, and 3.12. https://github.com/pytorch/executorch/actions/runs/15500604843/job/43647388676#step:9:13372

Okay, it turns out that I need to run the install with ${CONDA_RUN}.
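Roughly, the workflow step ends up looking like this (a sketch, not the exact diff in this PR):

```bash
# Run the install through ${CONDA_RUN} so pip resolves inside the conda env
# pinned to Python 3.11 rather than the runner's default Python 3.13.
${CONDA_RUN} python --version        # expect 3.11.x
${CONDA_RUN} pip install executorch
```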

guangy10 (Contributor Author) commented:

@guangy10 for the reported 5x higher numbers compared to etLLM, could this be comparing results for the HF .pte running the non-LLM benchmark tests vs. the etLLM .pte running the LLM benchmark test? Just a guess.

Now I can see the TPS reported from the Android app. I manually pulled the results, and it seems the TPS is almost the same between the etLLM-generated Qwen3 and the optimum-et-generated Qwen3:
[Screenshot 2025-06-11 at 1:52:50 PM]
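The manual pull was roughly along these lines; the on-device path is a placeholder and depends on the benchmark app, so don't treat it as the real location.

```bash
# Hypothetical path: where the benchmark app writes its results depends on the
# app itself; adjust before use.
adb pull /data/local/tmp/benchmark_results.json .
```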

guangy10 (Contributor Author) commented:

For some reason I don't see the results from the hf recipe uploaded here. Upon manually checking the results, the perf is the same on iOS devices as well.
[Screenshot 2025-06-11 at 5:41:52 PM]
