
[Feature] More configability for ingresses #3871


Open · wants to merge 9 commits into master

Conversation

@Tom-Newton commented Jul 15, 2025

Why are these changes needed?

There are scenarios where having the KubeRay operator automatically create ingresses is very useful. Currently the configuration options available are quite limited, so this PR adds some extra configurability. For me it's important to be able to configure this at the Helm level, so that I can set it in one place and it will work for all RayClusters in our Kubernetes cluster.
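For context, the kind of operator-level defaults described above might look roughly like the sketch below in the operator's Helm values. The field names here are illustrative only, not the exact options added by this PR:

```yaml
# Hypothetical operator-level Helm values (field names are illustrative,
# not the actual options introduced by this PR).
# Defaults applied to every ingress the operator creates:
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  # e.g. host / path / TLS defaults could also live here
```

Setting these once at the operator level means every RayCluster in the Kubernetes cluster picks them up without per-cluster configuration.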

Related issue number

This PR does not directly address any particular issue, but there are a few related issues:

  1. [Feature] Support networking.k8s.io.IngressSpec as ingress config for ray head #1475
  2. [Feature] Allow for specifying options when creating ingresses #1436
  3. [Feature] custom annotation support for built-in ingress #648

I've implemented this PR to solve my variant of this problem. I can't say for certain if it solves the problems of the authors of these other issues, but I hope it does.

Checks

  • I've made sure the tests are passing - at least the unit tests are passing; I had some e2e test failures when I tried to run on master
  • Testing Strategy
    • Unit tests
    • Manual tests - currently using this change internally

Hopefully this approach makes sense. If not I'm happy to discuss.

@win5923 (Contributor) commented Jul 16, 2025

I'm curious — with this approach, does it mean all RayClusters will share the same Ingress configuration? If I want to use a different host, path or TLS for a specific RayCluster, would that no longer be possible?

@Tom-Newton (Author) commented Jul 16, 2025

> I'm curious — with this approach, does it mean all RayClusters will share the same Ingress configuration? If I want to use a different host or path for a specific RayCluster, would that no longer be possible?

Well, currently the host and path are not configurable at all, so we don't lose anything with this change. Annotations were the only part that was already configurable. In that case the new configuration acts as the default, and any per-RayCluster annotations get merged in and override it.
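The merge-and-override behaviour described above can be sketched as follows (an illustrative Go snippet, not KubeRay's actual implementation):

```go
package main

import "fmt"

// mergeAnnotations overlays per-RayCluster annotations on top of the
// operator-level defaults; per-cluster keys win on conflict.
// Sketch of the described behaviour, not KubeRay's actual code.
func mergeAnnotations(defaults, perCluster map[string]string) map[string]string {
	merged := make(map[string]string, len(defaults)+len(perCluster))
	for k, v := range defaults {
		merged[k] = v
	}
	for k, v := range perCluster {
		merged[k] = v
	}
	return merged
}

func main() {
	defaults := map[string]string{
		"kubernetes.io/ingress.class":              "nginx",
		"nginx.ingress.kubernetes.io/ssl-redirect": "true",
	}
	perCluster := map[string]string{
		"nginx.ingress.kubernetes.io/ssl-redirect": "false",
	}
	// The per-cluster ssl-redirect value overrides the default;
	// the ingress.class default is kept.
	fmt.Println(mergeAnnotations(defaults, perCluster))
}
```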

This just reminded me that I forgot to fill in comments explaining what the new options do, so I will explain this there.

@win5923 (Contributor) commented Jul 16, 2025

Sorry, my previous comment might have been misleading or unclear. I realize that it introduces new configuration variables to the operator for customizing the built-in ingress behavior, but these are global options—it's either enabled for all RayClusters or none at all.

I'm wondering whether it would cause any issues if a user manually creates their own ingress for a RayCluster while the built-in ingress is also enabled. For example, could it lead to conflicts (e.g., duplicate paths, annotations, or TLS secrets) or unexpected behavior due to overlapping resources?

Example with NGINX Ingress: https://github.com/ray-project/kuberay/blob/master/docs/guidance/rayclient-nginx-ingress.md

@Tom-Newton (Author) commented Jul 16, 2025

> I'm wondering whether it would cause any issues if a user manually creates their own ingress for a RayCluster while the built-in ingress is also enabled. For example, could it lead to conflicts (e.g., duplicate paths, annotations, or TLS secrets) or unexpected behavior due to overlapping resources?

There are ways the user could configure things such that the built-in ingress and their manually added ingress conflict with each other, for example if they use the same TLS secret name for different hosts. It's entirely possible, though, to have multiple functioning ingresses for the same service.
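As an illustration of multiple ingresses coexisting for one service, two Ingress objects with distinct hosts can both route to the same head service (the names and hosts below are hypothetical):

```yaml
# Two Ingress objects pointing at the same (hypothetical) head Service;
# distinct hosts and TLS secrets mean they do not conflict.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: raycluster-head-builtin
spec:
  rules:
    - host: ray-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: raycluster-head-svc
                port:
                  number: 8265
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: raycluster-head-manual
spec:
  rules:
    - host: ray-b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: raycluster-head-svc
                port:
                  number: 8265
```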

For this PR to cause problems, though, all of the following must be true:

  1. The user has enabled the built-in ingress on their RayCluster even though they don't want to use it (https://github.com/ray-project/kuberay/blob/master/docs/reference/api.md#headgroupspec)
  2. The user has chosen to use the config options added in this PR
  3. The config options added in this PR and the manually created ingress are configured in one of the relatively few ways in which they conflict.

So personally I'm not really worried about this.

Perhaps some users would prefer the new configuration on a per-RayCluster basis. If such per-cluster options are ever added, they can override the global configs that this PR adds.

@win5923 (Contributor) commented Jul 16, 2025

Thank you for the clear explanation!

@Tom-Newton (Author) commented

I will also point out that the built-in ingresses today can easily cause problems. If using an NGINX ingress controller with externalTrafficPolicy: Cluster (the default when installing with Helm), then a single KubeRay built-in ingress breaks the entire ingress controller.
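For reference, the ingress-nginx Helm chart does let you switch the controller service's traffic policy, which is one way to avoid the failure mode described above. A minimal values snippet, assuming the standard ingress-nginx chart layout:

```yaml
# ingress-nginx Helm values: use Local instead of the default Cluster
# policy so that traffic is only routed to nodes running the controller
# and client source IPs are preserved.
controller:
  service:
    externalTrafficPolicy: Local
```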

@win5923 (Contributor) left a comment


LGTM, thanks for the great work!
