
feature: Kthena runtime adaptation for sglang#812

Open
katara-Jayprakash wants to merge 3 commits into volcano-sh:main from
katara-Jayprakash:Kthena-runtime-adaptation-for-sglang

Conversation

@katara-Jayprakash
Member

@katara-Jayprakash katara-Jayprakash commented Mar 13, 2026

What type of PR is this?

This is a feature PR that adds the getPodModels functionality for SGLang inference engines, mirroring the existing implementation for vLLM engines. It also adds a new metric, RequestRunningNum, to the SGLang engine metrics.

What I learned from this issue:

  • Thanks to this issue, I had the opportunity to explore the Kthena runtime, sidecar, router, and autoscaler codebases more deeply. I learned how Kthena collects metrics from inference engines and how those metrics are used differently for autoscaling and intelligent routing.
    Special thanks to @YaoZengzeng and @LiZhenCheng9527 for discussing the problem with me, answering my questions, and helping me understand the system better.

Which issue(s) this PR fixes:
Fixes #788

Special notes for your reviewer:

Does this PR introduce a user-facing change?: No

NONE

Copilot AI review requested due to automatic review settings March 13, 2026 20:19
@volcano-sh-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign hzxuzhonghu for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the Kthena runtime adaptation for SGLang by adding crucial model discovery capabilities and extending its observability with a new metric. These changes allow the system to dynamically identify models served by SGLang instances and provide more comprehensive insights into the engine's operational state, improving resource management and monitoring.

Highlights

  • SGLang Model Discovery: Implemented getPodModels functionality for SGLang inference engines, enabling the discovery of available models, similar to the existing vLLM engine support.
  • New SGLang Metric: Introduced a new metric, RequestRunningNum, to track the number of currently running requests in the SGLang engine.


Changelog
  • pkg/kthena-router/backend/sglang/metrics.go
    • Added RequestRunningNum metric constant and included it in metric mappings.
    • Introduced Model and ModelList structs for JSON unmarshalling of model data.
    • Updated metric collection logic to correctly handle both gauge and counter metric types.
    • Implemented the GetPodModels function to retrieve model IDs from SGLang pods via HTTP.

@katara-Jayprakash katara-Jayprakash changed the title feat: Kthena runtime adaptation for sglang feature: Kthena runtime adaptation for sglang Mar 13, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for sglang inference engines by implementing getPodModels and adding the RequestRunningNum metric. The changes are generally in the right direction. However, the new implementation of GetPodModels and its associated data structures introduces significant code duplication with the existing vllm engine implementation. I've provided suggestions to refactor this duplicated code into shared components to improve maintainability. I also pointed out a potential issue with an HTTP request not having a timeout, which could lead to hanging processes. Finally, a minor comment on code documentation was added.

 func (engine *sglangEngine) GetPodModels(pod *corev1.Pod) ([]string, error) {
-	return nil, nil
+	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
+	resp, err := http.Get(url)

high

The call to http.Get does not have a timeout. This could cause the request to hang indefinitely if the server is unresponsive. It's recommended to use an http.Client with a configured timeout to prevent this. Note that this change will require importing the time package.

Suggested change:
-	resp, err := http.Get(url)
+	resp, err := (&http.Client{Timeout: 5 * time.Second}).Get(url)

Member


+1

Comment on lines +61 to +67
type Model struct {
	ID string `json:"id"`
}

type ModelList struct {
	Data []Model `json:"data"`
}

medium

These Model and ModelList structs are duplicates of the ones defined in pkg/kthena-router/backend/vllm/models.go. To improve maintainability and avoid code duplication, these structs should be moved to a shared package, for example, a new pkg/kthena-router/backend/types.go file, and reused in both vllm and sglang packages.

}

// TODO: Methods to get Models from sglang
// implemented the get models function for sglang engine,

medium

This comment is informal and grammatically incorrect. Please replace it with a standard Go doc comment explaining the function's purpose.

Suggested change:
-// implemented the get models function for sglang engine,
+// GetPodModels retrieves the list of models from a pod running the sglang engine.

Comment on lines 141 to 169
 func (engine *sglangEngine) GetPodModels(pod *corev1.Pod) ([]string, error) {
-	return nil, nil
+	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
+	resp, err := http.Get(url)
+	if err != nil {
+		return nil, err
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode != http.StatusOK {
+		return nil, fmt.Errorf("failed to get models from pod %s/%s: HTTP %d", pod.GetNamespace(), pod.GetName(), resp.StatusCode)
+	}
+
+	body, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return nil, err
+	}
+
+	var modelList ModelList
+	err = json.Unmarshal(body, &modelList)
+	if err != nil {
+		return nil, err
+	}
+
+	models := make([]string, 0, len(modelList.Data))
+	for _, model := range modelList.Data {
+		models = append(models, model.ID)
+	}
+	return models, nil
 }

medium

This function GetPodModels is identical to the implementation for vllm in pkg/kthena-router/backend/vllm/models.go. To avoid code duplication, this logic should be extracted into a shared helper function. This function could be placed in a common backend package and called by both sglang and vllm implementations. Additionally, for consistency with the vllm backend, consider moving model-related logic into a new pkg/kthena-router/backend/sglang/models.go file.

Contributor

Copilot AI left a comment


Pull request overview

Adds SGLang parity with existing vLLM runtime integration by exposing pod model discovery and extending SGLang engine metrics.

Changes:

  • Add RequestRunningNum to SGLang metrics and include it in metric name mapping.
  • Update SGLang count-metric extraction to support both Prometheus GAUGE and COUNTER metric families.
  • Implement GetPodModels for SGLang by calling the pod’s /v1/models endpoint and parsing the response.


Comment on lines +142 to +146
	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
}

// TODO: Methods to get Models from sglang
// implemented the get models function for sglang engine,
Member Author


Fixed in the next commit.

 for _, metric := range metricInfo.Metric {
-	metricValue := metric.GetCounter().GetValue()
+	var metricValue float64
+	switch metricInfo.GetType() {
Member Author


The reason for adding this switch statement is that I'm not actually sure what metric type I'm going to get from the sglang engine, gauge or counter, so I ended up handling both cases in a switch. I'd love to know exactly what kind of data we are going to get. @YaoZengzeng @LiZhenCheng9527!

Member

@hzxuzhonghu hzxuzhonghu left a comment


One thing I would like to clarify before this merges: what happens when /v1/models is temporarily unavailable on a pod?

The new GetPodModels implementation returns an error on any non-200, and in the router store updatePodModels still calls podInfo.UpdateModels(models) even when GetPodModels failed. Since models is then nil/empty, that effectively clears the pod's model set.

If every SGLang pod that the router watches always exposes /v1/models, then this is fine. But if there are SGLang topologies where some pods only expose metrics/health and not model listing, a transient failure here would silently drop model associations for that pod. It may be worth either preserving the previous model set on error or adding a test that proves this endpoint is guaranteed for the pods we register.

@katara-Jayprakash
Member Author

> One thing I would like to clarify before this merges: what happens when /v1/models is temporarily unavailable on a pod?
>
> The new GetPodModels implementation returns an error on any non-200, and in the router store updatePodModels still calls podInfo.UpdateModels(models) even when GetPodModels failed. Since models is then nil/empty, that effectively clears the pod's model set.
>
> If every SGLang pod that the router watches always exposes /v1/models, then this is fine. But if there are SGLang topologies where some pods only expose metrics/health and not model listing, a transient failure here would silently drop model associations for that pod. It may be worth either preserving the previous model set on error or adding a test that proves this endpoint is guaranteed for the pods we register.

Makes sense. I think we can’t test these PRs due to resource constraints. I’ll still try to add helpful reviews and suggestions.

@LiZhenCheng9527
Contributor

LiZhenCheng9527 commented Apr 7, 2026

@katara-Jayprakash Is there anything else that needs to be revised in this PR?
If this PR is no longer needed, can I close this PR?

Member

@hzxuzhonghu hzxuzhonghu left a comment


Sorry, I forgot to submit my previous review.

 // GetPodModels retrieves the list of models from a pod running the sglang engine.
 func (engine *sglangEngine) GetPodModels(pod *corev1.Pod) ([]string, error) {
-	return nil, nil
+	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
Member


Can we abstract a common helper function from the vllm backend?

 func (engine *sglangEngine) GetPodModels(pod *corev1.Pod) ([]string, error) {
-	return nil, nil
+	url := fmt.Sprintf("http://%s:%d/v1/models", pod.Status.PodIP, engine.MetricPort)
+	resp, err := http.Get(url)
Member


+1

	GPUCacheUsage     = "sglang:token_usage"
	RequestWaitingNum = "sglang:num_queue_reqs"
	RequestRunningNum = "sglang:num_running_reqs"
	TPOT              = "sglang:time_per_output_token_seconds"
Member


@katara-Jayprakash
Member Author

katara-Jayprakash commented Apr 8, 2026

> Sorry, I forgot to submit my previous review.

@hzxuzhonghu @LiZhenCheng9527 I think we can work on this. I'm currently working on e2e tests, but this is also on my list; sorry for the delayed responses! I'll make sure to finish it ASAP.

Development

Successfully merging this pull request may close these issues.

Kthena runtime adaptation for sglang

5 participants