feat(benchmark): Create mock LLM server for use in benchmarks #1403
Open · tgasser-nv wants to merge 15 commits into develop from feat/mock-llm-server
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

```
@@             Coverage Diff             @@
##           develop    #1403      +/-   ##
===========================================
+ Coverage    71.66%   71.88%   +0.22%
===========================================
  Files          171      174       +3
  Lines        17020    17154     +134
===========================================
+ Hits         12198    12332     +134
  Misses        4822     4822
```

Flags with carried forward coverage won't be shown.
Summary
This PR adds a Mock LLM server and an example Content Safety configuration to use it end-to-end with Guardrails. I have a follow-on PR that uses Locust to run performance benchmarks against Guardrails on a laptop, without any NVCF function calls, local GPUs, or modifications to the Guardrails code.
Description
This PR includes an OpenAI-compatible Mock LLM FastAPI app, intended to stand in for production LLMs during performance testing. Configuration is read from a .env file, such as the one below for the Content Safety mock.
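For illustration, a Content Safety mock .env could look like the following; the variable names are those described below, but the values here are placeholders rather than the PR's actual defaults:

```
SAFE_TEXT="safe"
UNSAFE_TEXT="unsafe"
UNSAFE_PROBABILITY=0.05
LATENCY_MEAN_SECONDS=0.5
LATENCY_STD_SECONDS=0.1
LATENCY_MIN_SECONDS=0.2
LATENCY_MAX_SECONDS=2.0
```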
The Mock LLM first decides randomly whether to return a safe or an unsafe response, using the `UNSAFE_PROBABILITY` probability; this determines whether `SAFE_TEXT` or `UNSAFE_TEXT` is returned when the model responds. It then samples the response latency from a normal distribution (parameterized by `LATENCY_MEAN_SECONDS` and `LATENCY_STD_SECONDS`) and clips it to lie between `LATENCY_MIN_SECONDS` and `LATENCY_MAX_SECONDS`. After waiting for the sampled latency, it responds with the chosen text.
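The sketch below illustrates that decision-and-delay logic. It is not the PR's implementation, and the constant values stand in for whatever is loaded from the .env file:

```python
import asyncio
import random

# Placeholder values; the real app loads these from the .env file shown above.
UNSAFE_PROBABILITY = 0.05
SAFE_TEXT = "safe"
UNSAFE_TEXT = "unsafe"
LATENCY_MEAN_SECONDS = 0.5
LATENCY_STD_SECONDS = 0.1
LATENCY_MIN_SECONDS = 0.2
LATENCY_MAX_SECONDS = 2.0


async def mock_response_text() -> str:
    """Choose safe/unsafe text, wait a sampled latency, then return the text."""
    # Randomly decide whether this response should be the unsafe one.
    text = UNSAFE_TEXT if random.random() < UNSAFE_PROBABILITY else SAFE_TEXT

    # Sample latency from a normal distribution and clip it to [min, max].
    latency = random.gauss(LATENCY_MEAN_SECONDS, LATENCY_STD_SECONDS)
    latency = min(max(latency, LATENCY_MIN_SECONDS), LATENCY_MAX_SECONDS)

    await asyncio.sleep(latency)
    return text
```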
Test Plan
This test plan shows how the Mock LLM integrates seamlessly with Guardrails. As long as we characterize our NemoGuard and application LLM latencies correctly and can represent them with a distribution, we can use this setup for performance testing.
Terminal 1 (Content Safety Mock)
Terminal 2 (Content Safety Mock)
Terminal 3 (Guardrails production code)
Terminal 4 (Client issuing request)
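As an example of the Terminal 4 step, a request like the one below exercises the OpenAI-compatible chat completions endpoint; the host, port, and model name are assumptions and should match however the servers in Terminals 1-3 were started:

```python
import requests

# Hypothetical endpoint and model name; point this at the mock (or at Guardrails) as needed.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "mock-content-safety",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```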
Related Issue(s)
Checklist