
Add Remote LLM Support for Perturbation-Based Attribution via RemoteLLMAttribution and VLLMProvider #1544


Open · wants to merge 2 commits into master
Conversation

@saichandrapandraju commented Apr 15, 2025

This PR introduces support for applying Captum's perturbation-based attribution algorithms to remotely hosted large language models (LLMs). It enables users to perform interpretability analyses on models served via APIs, such as those using vLLM, without requiring access to model internals.

Motivation:

Captum’s current LLM attribution framework requires access to local models, limiting its usability in production and hosted environments. With the rise of scalable remote inference backends and OpenAI-compatible APIs, this PR allows Captum to be used for black-box interpretability with hosted models, as long as they return token-level log probabilities.
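For concreteness, the snippet below sketches the kind of OpenAI-compatible completions request that yields such token-level log probabilities. The endpoint URL, API key, model name, and exact parameter combination are placeholders for illustration, not part of this PR.

```python
# Sketch: obtaining token-level logprobs from an OpenAI-compatible endpoint,
# such as a local vLLM server. All values below are placeholder assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    prompt="The capital of France is Paris",
    max_tokens=0,  # score the prompt only (some servers may require >= 1)
    echo=True,     # return the prompt tokens in the response
    logprobs=0,    # attach each returned token's log probability
)

choice = resp.choices[0]
# Note: the first token typically has no logprob, since nothing precedes it.
for token, logprob in zip(choice.logprobs.tokens, choice.logprobs.token_logprobs):
    print(f"{token!r}: {logprob}")
```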

This integration also aligns with ongoing efforts like llama-stack, which aims to provide a unified API layer for inference (and also for RAG, Agents, Tools, Safety, Evals, and Telemetry) across multiple backends—further expanding Captum’s reach for model explainability.

Key Additions:

  • RemoteLLMProvider Interface:
    A generic interface for fetching log probabilities from remote LLMs, making it easy to plug in various inference backends.
  • VLLMProvider Implementation:
    A concrete subclass of RemoteLLMProvider tailored for models served using vLLM, handling the specifics of communicating with vLLM endpoints to retrieve necessary data for attribution.
  • RemoteLLMAttribution class:
    A subclass of LLMAttribution that overrides internal methods to work with remote providers. It enables all perturbation-based algorithms (e.g., Feature Ablation, Shapley Values, KernelSHAP) using only the output logprobs from a remote LLM (see the usage sketch after this list).
  • OpenAI-Compatible API Support:
    Uses the openai client under the hood for querying remote models, since many LLM serving solutions now support the OpenAI-compatible API format (e.g., vLLM's OpenAI server and projects like llama-stack; see here for related ongoing work).
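Putting the pieces together, here is a minimal usage sketch based on the components described above. Import paths, constructor signatures, and the placeholder_model attribute are assumptions drawn from this description and may differ from the final implementation; the endpoint URL and model name are placeholders.

```python
from transformers import AutoTokenizer

from captum.attr import FeatureAblation, TextTokenInput
# Classes added by this PR; the import path is assumed here.
from captum.attr import RemoteLLMAttribution, VLLMProvider

# Provider that fetches logprobs from an OpenAI-compatible vLLM endpoint.
provider = VLLMProvider(api_url="http://localhost:8000/v1")

# A local tokenizer is still needed to split the text into features,
# even though the model itself runs remotely.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Perturbation-based attribution needs only the remote logprobs, so the
# attribution method wraps a placeholder instead of a local model.
attr_method = FeatureAblation(RemoteLLMAttribution.placeholder_model)

llm_attr = RemoteLLMAttribution(
    attr_method=attr_method,
    tokenizer=tokenizer,
    provider=provider,
)

inp = TextTokenInput("The capital of France is", tokenizer)
result = llm_attr.attribute(inp, target="Paris")
print(result.seq_attr)  # one attribution score per input token
```

The same pattern should extend to other perturbation-based methods (e.g., ShapleyValueSampling or KernelShap) by swapping attr_method, since RemoteLLMAttribution restricts itself to algorithms that need only output logprobs.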

Issue(s) related to this:

… hosted models that provide logprobs (like vLLM)
@facebook-github-bot (Contributor)

Hi @saichandrapandraju!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot (Contributor)

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@saichandrapandraju (Author)

Hi @vivekmig @yucu @aobo-y, could you review this PR please? Let me know if I need to change anything.

@aobo-y (Contributor) commented Apr 18, 2025

Thank you @saichandrapandraju for the great effort! Generally, I agree this idea makes a lot of sense, but our team may need some time to look into the code changes and get back to you.

@craymichael, can you take a look at it, since you have studied the integration with llama-stack before?

@saichandrapandraju (Author) commented Apr 18, 2025

Thank you for the positive feedback, @aobo-y! Happy to hear the direction makes sense. Please take your time reviewing; I'll be around to clarify or iterate on anything as needed. Looking forward to it!
