Qualcomm AI Engine Direct - Llama Inference Refactor & Support SQNR Evaluation #16506
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16506
Note: Links to docs will display an error until the docs builds have been completed. ❌ 6 new failures as of commit 7b64dff with merge base 7492d0d.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 277794d to 44aaad8.
Hi @cccclai,
@YIWENX14 can you take a look at the PR and see if it meets the need?
@winskuo-quic Could you clarify whether the comparison is being made using nn.Module rather than the exported model? IIUC, nn.Module is the floating-point model before quantization; is that the case?
Hi @YIWENX14,
Thank you for clarifying. As a follow-up, is it possible to obtain the exported-program (quantized) results? Essentially, the process involves two steps: fp32 → quantized → delegated. It would be helpful if we could get the SNR between the quantized and delegated results in addition to the end-to-end SNR, so we can better understand the delegation gap.
Sure, that is possible. However, please note that one of the reasons I did not use the quantized model is that a user can quickly compare the nn.Module's result with a pre-generated delegated model's result, since it takes only a couple of seconds to retrieve an nn.Module graph. On the other hand, if a user wants to compare the quantized CPU result with a pre-generated delegated model's result, they will first need to perform quantization, which takes considerably longer.
Thanks for the explanation. Yeah, there's a way to save and load the quantized exported program (https://docs.pytorch.org/docs/stable/export.html#serialization). This can be done as a follow-up, which can help to better understand the gap between eager and delegation. Thank you so much for the support!
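For context, the save/load flow from the linked serialization docs looks roughly like the sketch below; `TinyModel`, the input shape, and the file name are illustrative stand-ins for the actual quantized Llama module, not the PR's code:

```python
import torch
from torch.export import export, save, load

class TinyModel(torch.nn.Module):
    # Stand-in for the actual quantized Llama module.
    def forward(self, x):
        return torch.nn.functional.relu(x)

example_input = torch.randn(1, 8)
ep = export(TinyModel(), (example_input,))

# Save the exported program so it can be reloaded later without
# re-running the (slow) quantization flow.
save(ep, "quantized_ep.pt2")

# Reload and run the CPU reference to compare against the
# pre-generated delegated model's output.
ep = load("quantized_ep.pt2")
reference_output = ep.module()(example_input)
```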
Refactor, Extract nn.Module Static Llama, Improve CI coverage by evaluating Static LLM SQNR
Force-pushed from 44aaad8 to 7b64dff.
Thanks for the suggestion and the help. I have pushed a new commit that should support:
Please have a look. Thanks!
Thank you for adding the detailed comparison on the quantized model. The changes LGTM!
Summary
This PR supports the following:
- Unify the inference flow to allow users to add their own inference method. Currently supports:
- Allow mainline CI to check SQNR scores. Previously, we evaluated PPL only during the nightly run, since it took too long. With SQNR evaluation, we can run a CI test for every pushed commit, since it is much faster (it only runs a couple of tokens). We will still keep the PPL eval in nightly and introduce a new test for SQNR evaluation.
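For reference, SQNR between a floating-point reference output and the quantized/delegated output is commonly computed as the ratio of signal power to error power in decibels. A minimal sketch (the helper name `compute_sqnr` is illustrative, not necessarily this PR's implementation):

```python
import torch

def compute_sqnr(reference: torch.Tensor, test: torch.Tensor) -> float:
    # Signal-to-quantization-noise ratio in dB; higher means the
    # two outputs are closer.
    signal_power = reference.pow(2).mean()
    noise_power = (reference - test).pow(2).mean()
    return (10 * torch.log10(signal_power / noise_power)).item()
```

Because only a few tokens are needed for a meaningful score, this check runs in seconds, which is what makes per-commit CI feasible.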
Example Script:

```bash
python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s DEVICE -m SM8750 --temperature 0 --model_mode kv --max_seq_len 1024 --decoder_model smollm2_135m --prompt "I would like to learn python, could you teach me with a simple example?" --artifact ./smollm2_135m/ --eval_methods tasks_eval sqnr_eval --tasks wikitext --limit 1
```

Test plan
For x86 external CI usage:

```bash
python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleLLMScript.test_static_llm_model --model_name smollm2_135m --device DEVICE --model SM8750 --build_folder build-x86/ --executorch_root . --artifact_dir . --error_only --static_llm_eval_method sqnr --enable_x86_64
```

For Android Internal CI usage:

```bash
python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleLLMScript.test_static_llm_model --model_name smollm2_135m --device DEVICE --model SM8750 --build_folder build-android/ --executorch_root . --artifact_dir . --error_only --static_llm_eval_method sqnr
```