
Conversation


@LHXuuu LHXuuu commented Nov 20, 2025

SUMMARY:
Support NPU platform

Signed-off-by: LHXuuu <[email protected]>
@gemini-code-assist
Contributor

Summary of Changes

Hello @LHXuuu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the llmcompressor library by integrating support for Neural Processing Unit (NPU) platforms. The change allows the system to detect and utilize NPU devices for model execution, expanding the range of hardware environments where models can be efficiently deployed and processed. This broadens the compatibility and potential performance benefits for users with NPU-equipped systems.

Highlights

  • NPU Platform Support: This pull request introduces support for NPU (Neural Processing Unit) platforms within the llmcompressor library, enabling models to leverage NPU devices for execution.
  • Device Dispatch Logic: A new conditional check for torch.npu.is_available() has been added to the dispatch_for_sequential helper function, allowing models to be offloaded to npu:0 if an NPU device is detected.
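For readers without the diff open, here is a minimal sketch of the dispatch pattern described above, assuming the priority order CUDA, then XPU, then NPU. The helper name and logging are illustrative; only the `torch.npu.is_available()` branch comes from this PR, and the surrounding offloading machinery in llmcompressor is omitted.

```python
import logging

import torch

logger = logging.getLogger(__name__)


def pick_execution_device() -> torch.device:
    """Choose the device a sequential dispatch helper would offload to.

    Hypothetical helper: the real dispatch_for_sequential wraps this
    choice in llmcompressor's offloading logic, which is omitted here.
    """
    if torch.cuda.is_available():
        return torch.device("cuda:0")
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu:0")
    # New in this PR: offload to the first NPU when one is detected.
    # Caution: torch.npu only exists once a vendor extension such as
    # torch_npu (Ascend) has been imported, so this bare call can raise
    # AttributeError on stock builds -- see the review comment below.
    if torch.npu.is_available():
        return torch.device("npu:0")
    logger.warning("CUDA/XPU/NPU not available! Compressing model on CPU...")
    return torch.device("cpu")
```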

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for NPU devices in the sequential pipeline's dispatch logic. The implementation is a good step, but I've identified a potential runtime error if the environment's PyTorch build doesn't include NPU support, and I've provided a suggestion to make the check safer. For completeness, it would be beneficial to also update the function's docstring and the warning log message to include 'NPU' alongside 'CUDA' and 'XPU', although these are outside the current diff.
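For illustration, the safer check could take the following shape; the helper name is hypothetical, but the `hasattr` probe is a common way to avoid the AttributeError, since `torch.npu` is only attached to the `torch` module after a vendor extension such as torch_npu (Ascend) has been imported:

```python
import torch


def _npu_available() -> bool:
    """Return True only when an NPU backend is present and usable.

    Hypothetical guard: probing with hasattr() keeps this safe on
    PyTorch builds that ship without a torch.npu attribute.
    """
    return hasattr(torch, "npu") and torch.npu.is_available()
```

With a guard like this, the new branch in dispatch_for_sequential degrades gracefully instead of crashing on PyTorch builds without NPU support.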

@HDCharles
Collaborator

Does this just work? Were you able to test this?

Collaborator

@HDCharles HDCharles left a comment


Can you add test details to show how/that this works?

@LHXuuu
Author

LHXuuu commented Nov 21, 2025

Can you add test details to show how/that this works?

Hi. The following is a screenshot of the Qwen3-32B GPTQ quantization process on the NPU platform.
[screenshot: Qwen3-32B GPTQ quantization running on the NPU platform]

@LHXuuu LHXuuu requested a review from HDCharles November 21, 2025 08:46
@HDCharles
Collaborator

Did the run complete and give reasonable output? Can you provide the full test script?
