feature: Kthena runtime adaptation for sglang #812
katara-Jayprakash wants to merge 3 commits into volcano-sh:main
Conversation
…ccording to its type Signed-off-by: katara-Jayprakash <[email protected]>
Signed-off-by: katara-Jayprakash <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing `/approve` in a comment.
Summary of Changes: This pull request enhances the Kthena runtime adaptation for SGLang by adding model discovery capabilities and extending its observability with a new metric. These changes allow the system to dynamically identify the models served by SGLang instances and provide more comprehensive insight into the engine's operational state, improving resource management and monitoring.
Activity
Code Review
This pull request adds support for sglang inference engines by implementing getPodModels and adding the RequestRunningNum metric. The changes are generally in the right direction. However, the new implementation of GetPodModels and its associated data structures introduces significant code duplication with the existing vllm engine implementation. I've provided suggestions to refactor this duplicated code into shared components to improve maintainability. I also pointed out a potential issue with an HTTP request not having a timeout, which could lead to hanging processes. Finally, a minor comment on code documentation was added.
```diff
 func (engine *sglangEngine) GetPodModels(pod *corev1.Pod) ([]string, error) {
-	return nil, nil
+	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
+	resp, err := http.Get(url)
```
The call to http.Get does not have a timeout. This could cause the request to hang indefinitely if the server is unresponsive. It's recommended to use an http.Client with a configured timeout to prevent this. Note that this change will require importing the time package.
```diff
-	resp, err := http.Get(url)
+	resp, err := (&http.Client{Timeout: 5 * time.Second}).Get(url)
```
```go
type Model struct {
	ID string `json:"id"`
}

type ModelList struct {
	Data []Model `json:"data"`
}
```
These Model and ModelList structs are duplicates of the ones defined in pkg/kthena-router/backend/vllm/models.go. To improve maintainability and avoid code duplication, these structs should be moved to a shared package, for example, a new pkg/kthena-router/backend/types.go file, and reused in both vllm and sglang packages.
```go
}

// TODO: Methods to get Models from sglang
// implemented the get models function for sglang engine,
```
```diff
 func (engine *sglangEngine) GetPodModels(pod *corev1.Pod) ([]string, error) {
-	return nil, nil
+	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
+	resp, err := http.Get(url)
+	if err != nil {
+		return nil, err
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode != http.StatusOK {
+		return nil, fmt.Errorf("failed to get models from pod %s/%s: HTTP %d", pod.GetNamespace(), pod.GetName(), resp.StatusCode)
+	}
+
+	body, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return nil, err
+	}
+
+	var modelList ModelList
+	err = json.Unmarshal(body, &modelList)
+	if err != nil {
+		return nil, err
+	}
+
+	models := make([]string, 0, len(modelList.Data))
+	for _, model := range modelList.Data {
+		models = append(models, model.ID)
+	}
+	return models, nil
 }
```
This function GetPodModels is identical to the implementation for vllm in pkg/kthena-router/backend/vllm/models.go. To avoid code duplication, this logic should be extracted into a shared helper function. This function could be placed in a common backend package and called by both sglang and vllm implementations. Additionally, for consistency with the vllm backend, consider moving model-related logic into a new pkg/kthena-router/backend/sglang/models.go file.
Pull request overview
Adds SGLang parity with existing vLLM runtime integration by exposing pod model discovery and extending SGLang engine metrics.
Changes:
- Add `RequestRunningNum` to SGLang metrics and include it in the metric name mapping.
- Update SGLang count-metric extraction to support both Prometheus GAUGE and COUNTER metric families.
- Implement `GetPodModels` for SGLang by calling the pod's `/v1/models` endpoint and parsing the response.
```go
url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
resp, err := http.Get(url)
if err != nil {
	return nil, err
}
```
```go
}

// TODO: Methods to get Models from sglang
// implemented the get models function for sglang engine,
```
Fixed in the next commit.
Signed-off-by: katara-Jayprakash <[email protected]>
```diff
 for _, metric := range metricInfo.Metric {
-	metricValue := metric.GetCounter().GetValue()
+	var metricValue float64
+	switch metricInfo.GetType() {
```
The reason for adding this switch statement is that I'm not actually sure what kind of metric type I'm going to get from the sglang engine (gauge or counter), so I ended up using a switch case to handle both conditions. I'd love to know exactly what kind of data we are going to get. @YaoZengzeng @LiZhenCheng9527!
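The type-dispatch pattern can be shown in isolation. The sketch below uses simplified stand-ins for the Prometheus `client_model` types (`dto.MetricFamily` / `dto.Metric`), so the field names here are illustrative, not the real library API:

```go
package main

import "fmt"

// MetricType is a simplified stand-in for dto.MetricType.
type MetricType int

const (
	Counter MetricType = iota
	Gauge
)

// Metric is a simplified stand-in for one sample plus its family type.
type Metric struct {
	Type         MetricType
	CounterValue float64
	GaugeValue   float64
}

// extractValue mirrors the idea of the switch in the PR: read the
// value according to the metric family's type, since sglang may
// expose the running-request count as either a GAUGE or a COUNTER.
func extractValue(m Metric) float64 {
	switch m.Type {
	case Gauge:
		return m.GaugeValue
	case Counter:
		return m.CounterValue
	default:
		return 0
	}
}

func main() {
	fmt.Println(extractValue(Metric{Type: Gauge, GaugeValue: 3}))     // 3
	fmt.Println(extractValue(Metric{Type: Counter, CounterValue: 7})) // 7
}
```

Handling both families this way is a safe default until the exported type is confirmed from a live sglang `/metrics` scrape.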
hzxuzhonghu
left a comment
One thing I would like to clarify before this merges: what happens when /v1/models is temporarily unavailable on a pod?
The new GetPodModels implementation returns an error on any non-200, and in the router store updatePodModels still calls podInfo.UpdateModels(models) even when GetPodModels failed. Since models is then nil/empty, that effectively clears the pod's model set.
If every SGLang pod that the router watches always exposes /v1/models, then this is fine. But if there are SGLang topologies where some pods only expose metrics/health and not model listing, a transient failure here would silently drop model associations for that pod. It may be worth either preserving the previous model set on error or adding a test that proves this endpoint is guaranteed for the pods we register.
Makes sense. I think we can't test these PRs due to resource constraints. I'll still try to add helpful reviews and suggestions.
@katara-Jayprakash Is there anything else that needs to be revised in this PR?
hzxuzhonghu
left a comment
Sorry, I forgot to submit my previous review.
```diff
 // GetPodModels retrieves the list of models from a pod running the sglang engine.
 func (engine *sglangEngine) GetPodModels(pod *corev1.Pod) ([]string, error) {
-	return nil, nil
+	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
```
Can we abstract a common helper function out of the vllm backend?
```diff
 func (engine *sglangEngine) GetPodModels(pod *corev1.Pod) ([]string, error) {
-	return nil, nil
+	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
+	resp, err := http.Get(url)
```
```go
GPUCacheUsage     = "sglang:token_usage"
RequestWaitingNum = "sglang:num_queue_reqs"
RequestRunningNum = "sglang:num_running_reqs"
TPOT              = "sglang:time_per_output_token_seconds"
```
this metric has been replaced by sglang:inter_token_latency_seconds https://github.com/sgl-project/sglang/blob/385a35bd118d2e420be1fd6978c703d22a3bcc77/python/sglang/srt/observability/metrics_collector.py#L188
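One way to stay compatible with both old and new sglang versions is to look up the new name first and fall back to the old one. This is only a sketch of the idea (the `tpotFamily` helper and the map-of-values shape are assumptions, not the router's real metric-family representation):

```go
package main

import "fmt"

// tpotFamily returns the TPOT value, preferring the metric name used
// by newer sglang releases (sglang:inter_token_latency_seconds) and
// falling back to the older sglang:time_per_output_token_seconds.
func tpotFamily(families map[string]float64) (float64, bool) {
	for _, name := range []string{
		"sglang:inter_token_latency_seconds",
		"sglang:time_per_output_token_seconds",
	} {
		if v, ok := families[name]; ok {
			return v, true
		}
	}
	return 0, false
}

func main() {
	old := map[string]float64{"sglang:time_per_output_token_seconds": 0.02}
	v, ok := tpotFamily(old)
	fmt.Println(v, ok) // 0.02 true
}
```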
@hzxuzhonghu @LiZhenCheng9527 I think we can work on this. I'm currently working on the e2e test, but I have this on my list as well. Sorry for the delayed responses! I'll make sure to finish it ASAP.
What type of PR is this?
This is a feature PR that adds the getPodModels functionality for SGLang inference engines, similar to the existing implementation for vLLM engines. It also adds a new metrics field, RequestRunningNum, to the SGLang engine metrics.
What are the learnings I got from this issue:
Special thanks to @YaoZengzeng and @LiZhenCheng9527 for discussing the problem with me, answering my questions, and helping me understand the system better.
Which issue(s) this PR fixes:
Fixes #788
Special notes for your reviewer:
Does this PR introduce a user-facing change?: No