Load model proto with secrets once at server initialization #837
Merged: wemoveon2 merged 4 commits into master from copilot/fix-8550654-20333051-52e6908a-f5da-4a7d-a127-14e93f6da774 on Nov 4, 2025
Conversation
Copilot (AI) changed the title from "[EAGLE-5341] Load model proto once to stop expecting it from predict request" / "the server start servicer path is missing the model proto since it's only added to the ModelRunner. should we GetModel in the self._current_model so it's proto and model cla..." to "Load model proto with secrets once at server initialization" on Nov 3, 2025
Copilot finished work on behalf of wemoveon2 on November 3, 2025 at 15:59
@copilot I don't think we need the test
Copilot finished work on behalf of wemoveon2 on November 3, 2025 at 17:45
The test file shown in the diff was already removed in commit bbb4ea9 per your earlier feedback. The implementation files (model_builder.py, model_servicer.py, server.py) are properly formatted, verified with …
Copilot finished work on behalf of wemoveon2 on November 3, 2025 at 18:08
wemoveon2 approved these changes on Nov 3, 2025
zeiler approved these changes on Nov 3, 2025
Summary
This PR fixes EAGLE-5341 by ensuring the model proto with secrets is loaded once during server initialization and is available for all predict requests.
Problem Statement
The server's start_servicer path was missing the model proto, since it was only added to the ModelRunner, and request-type secrets were not being fetched in the GetModel call.
Solution
- Updated ModelBuilder.get_model_proto() to include additional_fields=["secrets"] in the GetModelRequest (sketched below)
- Updated ModelServer.start_servicer() to pass model_proto to ModelServicer
- Updated ModelServicer.__init__() to accept and store a model_proto parameter
- Updated the ModelServicer methods to inject model_proto when the request doesn't carry it
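A minimal sketch of loading the model proto with secrets once at startup, assuming the clarifai_grpc service_pb2.GetModelRequest message and the V2 stub's GetModel call; the helper name and its stub/user_app_id/model_id parameters are illustrative, not the actual ModelBuilder.get_model_proto() code:

```python
# Illustrative sketch, not the actual ModelBuilder implementation.
from clarifai_grpc.grpc.api import service_pb2


def fetch_model_proto_with_secrets(stub, user_app_id, model_id):
    """Fetch the model proto once, asking the API to include request-type secrets."""
    request = service_pb2.GetModelRequest(
        user_app_id=user_app_id,
        model_id=model_id,
    )
    # additional_fields is a repeated protobuf field, so values are appended
    # rather than assigned.
    request.additional_fields.append("secrets")
    response = stub.GetModel(request)
    return response.model
```

The returned proto can then be handed to the servicer at startup instead of being re-fetched on every predict request.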
Changes Made
1. ModelBuilder.get_model_proto() (clarifai/runners/models/model_builder.py)
   - Adds "secrets" with .append(), since additional_fields is a repeated field in protobuf
2. ModelServicer (clarifai/runners/models/model_servicer.py); see the sketch after the Benefits list below
   - Optional model_proto parameter for backward compatibility
   - PostModelOutputs: single request injection
   - GenerateModelOutputs: single request injection
   - StreamModelOutputs: multiple request injection for streaming
3. ModelServer.start_servicer() (clarifai/runners/server.py)
Benefits
✅ Model proto with secrets loaded once during server initialization
✅ Eliminates need to send model proto with every predict request
✅ Consistent behavior between ModelRunner and ModelServicer paths
✅ Better performance by avoiding redundant model proto fetches
✅ Request-type secrets properly included in model proto
✅ Maintains backward compatibility
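A minimal sketch of the injection pattern described in change 2 above, under these assumptions: each request proto carries an optional model message field, and predict/generate/stream are hypothetical stand-ins for the real model invocation. It is not the actual clarifai ModelServicer code:

```python
# Illustrative sketch, not the actual clarifai ModelServicer implementation.
class ModelServicer:
    def __init__(self, model, model_proto=None):
        self.model = model
        # Proto fetched once at server initialization; optional so existing
        # callers that don't pass it keep working (backward compatibility).
        self.model_proto = model_proto

    def _inject_model_proto(self, request):
        # Only inject when the request didn't already carry a model proto.
        if self.model_proto is not None and not request.HasField("model"):
            request.model.CopyFrom(self.model_proto)
        return request

    def PostModelOutputs(self, request, context=None):
        # Unary predict: single request injection.
        return self.model.predict(self._inject_model_proto(request))

    def GenerateModelOutputs(self, request, context=None):
        # Server-streaming generate: still a single incoming request.
        yield from self.model.generate(self._inject_model_proto(request))

    def StreamModelOutputs(self, request_iterator, context=None):
        # Bidirectional streaming: inject into every request in the stream.
        injected = (self._inject_model_proto(r) for r in request_iterator)
        yield from self.model.stream(injected)
```

Keeping model_proto as an optional constructor argument preserves the existing ModelServicer signature, which is what the PR means by maintaining backward compatibility.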