
Conversation

@winskuo-quic (Collaborator) commented Jan 8, 2026

Summary

This PR supports the following:

  1. Unify the inference flow so users can add their own inference methods.
    Currently supports:

    • Output Prompt: Given a prompt, return the output.
    • Tasks Evaluation: Evaluate tasks such as perplexity.
    • SQNR Evaluation: Evaluate the SQNR score based on the prompt's logits.
  2. Allow mainline CI to check SQNR scores. Previously, PPL was evaluated only in the nightly run because it takes too long. SQNR evaluation only runs a couple of tokens, so it is fast enough to test every pushed commit. We will keep the PPL eval in nightly and introduce a new test for SQNR evaluation.

Example Script:
python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s DEVICE -m SM8750 --temperature 0 --model_mode kv --max_seq_len 1024 --decoder_model smollm2_135m --prompt "I would like to learn python, could you teach me with a simple example?" --artifact ./smollm2_135m/ --eval_methods tasks_eval sqnr_eval --tasks wikitext --limit 1
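
For context, SQNR here measures how closely a backend's logits track a golden reference. The helper below is a minimal, hedged sketch of that metric; the function name, tensor shapes, and placeholder tensors are illustrative and not the exact helper used in llama.py.

import torch

def sqnr(golden: torch.Tensor, actual: torch.Tensor) -> float:
    """Signal-to-quantization-noise ratio (dB) of `actual` against the `golden` logits."""
    golden = golden.float()
    noise = golden - actual.float()
    return (10 * torch.log10(golden.pow(2).mean() / noise.pow(2).mean())).item()

# Illustrative comparison of FP32 nn.Module logits against QNN-delegated logits,
# using placeholder tensors of shape [batch, tokens, vocab].
golden_logits = torch.randn(1, 8, 32000)
qnn_logits = golden_logits + 0.01 * torch.randn_like(golden_logits)
print(f"SQNR: {sqnr(golden_logits, qnn_logits):.2f} dB")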

Test plan

For x86 external CI usage:
python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleLLMScript.test_static_llm_model --model_name smollm2_135m --device DEVICE --model SM8750 --build_folder build-x86/ --executorch_root . --artifact_dir . --error_only --static_llm_eval_method sqnr --enable_x86_64

For Android internal CI usage:
python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleLLMScript.test_static_llm_model --model_name smollm2_135m --device DEVICE --model SM8750 --build_folder build-android/ --executorch_root . --artifact_dir . --error_only --static_llm_eval_method sqnr

@pytorch-bot commented Jan 8, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16506

Note: Links to docs will display an error until the docs builds have been completed.

❌ 6 New Failures

As of commit 7b64dff with merge base 7492d0d:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Jan 8, 2026
@github-actions bot commented Jan 8, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

winskuo-quic marked this pull request as draft on January 9, 2026 01:09
winskuo-quic force-pushed the dev1/winskuo/sqnr_decoder_evaluation branch from 277794d to 44aaad8 on January 9, 2026 04:37
winskuo-quic marked this pull request as ready for review on January 14, 2026 01:49
@winskuo-quic (Collaborator, Author) commented Jan 14, 2026

Hi @cccclai,
This PR does the following:

  1. Enable SQNR evaluation for Static LLM models. Please refer to test_qnn_delegate.py for the SQNR scores for each model.
  2. Add SQNR eval to mainline CI. It takes only about 30 minutes to lower and evaluate SQNR for smollm2_135m on the x86 emulator, so we can put this under pull.yml, making the CI more robust.
  3. Refactor the inference flow in llama.py, making it easier for users to add new eval methods in the future.

Please have a look.
Thanks.

@cccclai (Contributor) commented Jan 14, 2026

@YIWENX14 can you take a look at the PR and see if it meets the need?

@YIWENX14 (Contributor) commented Jan 14, 2026

@winskuo-quic Could you clarify whether the comparison is being made using nn.Module rather than the exported model? IIUC, nn.Module is the floating-point model before quantization; is that the case?

@winskuo-quic (Collaborator, Author)

@winskuo-quic Could you clarify whether the comparison is being made using nn.Module rather than the exported model? IIUC, nn.Module is the floating-point model before quantization; is that the case?

Hi @YIWENX14,
Your understanding is correct. We are using the FP32 nn.Module as the golden reference to compare against QNN.

@YIWENX14 (Contributor)

@winskuo-quic Could you clarify whether the comparison is being made using nn.Module rather than the exported model? IIUC, nn.Module is the floating-point model before quantization; is that the case?

Hi @YIWENX14, Your understanding is correct. We are using the FP32 nn.Module as the golden reference to compare against QNN.

Thank you for clarifying. As a follow-up, is it possible to obtain the exported program (quantized) results? Essentially, the process involves two steps: fp32 → quantized → delegated. It would be helpful if we could get the SNR (quantized, delegated) in addition to the end-to-end SNR, so we can better understand the delegation gap.

@winskuo-quic (Collaborator, Author)

@winskuo-quic Could you clarify whether the comparison is being made using nn.Module rather than the exported model? IIUC, nn.Module is the floating-point model before quantization; is that the case?

Hi @YIWENX14, Your understanding is correct. We are using the FP32 nn.Module as the golden reference to compare against QNN.

Thank you for clarifying. As a follow-up, is it possible to obtain the exported program (quantized) results? Essentially, the process involves two steps: fp32 → quantized → delegated. It would be helpful if we could get the SNR (quantized, delegated) in addition to the end-to-end SNR, so we can better understand the delegation gap.

Sure, that is possible. However, please note that one of the reasons I did not use the quantized model is that a user can quickly compare the nn.Module's results with a pre-generated delegated model's results, since it takes only a couple of seconds to retrieve an nn.Module graph. On the other hand, if a user wants to compare the quantized CPU results with a pre-generated delegated model's results, they need to perform export -> prepare_pt2e -> calibration -> convert_pt2e every time before comparing, which can take a while.
If this seems fine to you, I can add support for the feature.
If you have recommendations on an official flow for saving the quantized model (so we don't have to re-quantize every time we compare SQNR), that would also be appreciated.
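
For illustration, the export -> prepare_pt2e -> calibration -> convert_pt2e round trip mentioned above would look roughly like the sketch below. This assumes the standard PT2E quantization APIs and a QnnQuantizer from the Qualcomm backend; the import path, model, inputs, and helper name are placeholders rather than the exact llama.py flow.

import torch
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.export import export_for_training

# Assumed import path for the Qualcomm quantizer; adjust to the actual backend layout.
from executorch.backends.qualcomm.quantizer.quantizer import QnnQuantizer

def quantize_for_comparison(model: torch.nn.Module, calibration_inputs):
    # export: capture the FP32 graph for quantization
    graph = export_for_training(model, calibration_inputs[0]).module()
    # prepare_pt2e: insert observers according to the quantizer's annotations
    prepared = prepare_pt2e(graph, QnnQuantizer())
    # calibration: run representative inputs to collect activation ranges
    for inputs in calibration_inputs:
        prepared(*inputs)
    # convert_pt2e: materialize the QDQ graph used as the quantized CPU reference
    return convert_pt2e(prepared)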

@YIWENX14 (Contributor)

Thanks for the explanation. Yeah there's a way to save and load the quantized exported program (https://docs.pytorch.org/docs/stable/export.html#serialization). We can do:

torch.export.save(exported_program, 'exported_program.pt2')
saved_exported_program = torch.export.load('exported_program.pt2')
res = saved_exported_program.module()(*example_inputs)

This can be done as a follow up, which can help to better understand the gap between eager and delegation. Thank you so much for the support!

Refactor, Extract nn.Module Static Llama, Improve CI coverage with evaluating Static LLM SQNR
winskuo-quic force-pushed the dev1/winskuo/sqnr_decoder_evaluation branch from 44aaad8 to 7b64dff on January 20, 2026 05:51
@winskuo-quic (Collaborator, Author) commented Jan 20, 2026

Thanks for the explanation. Yeah there's a way to save and load the quantized exported program (https://docs.pytorch.org/docs/stable/export.html#serialization). We can do:

torch.export.save(exported_program, 'exported_program.pt2')
saved_exported_program = torch.export.load('exported_program.pt2')
res = saved_exported_program.module()(*example_inputs)

This can be done as a follow up, which can help to better understand the gap between eager and delegation. Thank you so much for the support!

Thanks for the suggestion and the help. I have pushed a new commit that should support:

  • FP32 nn.Module vs. QNN
  • FP32 nn.Module vs. CPU QDQ
  • CPU QDQ vs. QNN

Please have a look. Thanks.

@YIWENX14 (Contributor)

Thank you for adding the detailed comparison on the quantized model. The changes LGTM!
