[stable/locust]: Updates the service and ingress to accept more port configurability #631
Conversation
@max-rocket-internet I created this PR out of needs that came up while working with this chart in GKE. Container-native load balancing in GKE required a few service/ingress-specific settings, which this change would help me with greatly. The readiness/liveness changes apparently control the health checks. I also need to expose it under a different port. I hope these changes are acceptable. Since this was a somewhat larger change, I hope I made the right judgment call on versioning. I did my part in local testing, though I don't have a personal GKE or AWS account to actually run it through a more enterprise-like setting. Let me know what you think.
stable/locust/README.md
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| master.livenessProbe.enabled | bool | `false` | |
| master.livenessProbe.failureThreshold | int | `2` | |
| master.livenessProbe.initialDelaySeconds | int | `60` | |
| master.livenessProbe.path | string | `"/"` | |
| master.livenessProbe.periodSeconds | int | `30` | |
| master.livenessProbe.port | int | `8089` | |
| master.livenessProbe.scheme | string | `"HTTP"` | |
| master.livenessProbe.successThreshold | int | `1` | |
| master.livenessProbe.timeoutSeconds | int | `30` | |
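For reference, these flat keys map onto nested values. An override file under this scheme might look like the following sketch (values copied from the documented defaults above, with `enabled` flipped on; this is illustrative, not the chart's shipped values.yaml):

```yaml
# Hypothetical values override using the flat probe keys documented above.
master:
  livenessProbe:
    enabled: true          # defaults to false per the table
    path: /
    port: 8089
    scheme: HTTP
    initialDelaySeconds: 60
    periodSeconds: 30
    timeoutSeconds: 30
    failureThreshold: 2
    successThreshold: 1
```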
These probe values are quite complicated now.

Do you think it would be simpler to just make master.livenessProbe an object, and put some commented examples in values.yaml for a user to see?

For example, like what is commonly done with securityContext. Then if someone wants to deviate from the defaults, like changing the container port, it's on them to update the probe config. And if someone wants to disable a probe, they need to set master.livenessProbe: {} in values.
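A minimal sketch of what that could look like in values.yaml (illustrative only; the key names mirror the standard Kubernetes probe spec, not necessarily the chart's final defaults):

```yaml
master:
  # Probe config is passed through verbatim to the container spec.
  # An empty object ({}) disables the probe entirely.
  livenessProbe: {}
  # livenessProbe:
  #   httpGet:
  #     path: /
  #     port: 8089
  #   initialDelaySeconds: 60
  #   periodSeconds: 30
  #   timeoutSeconds: 30
  #   failureThreshold: 2
```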
Excellent! Yes, the thought had also crossed my mind to do it this way; I've seen it done in other public charts too. I'll refactor this later to fit your suggestion. Thanks for your time in reviewing this and making solid suggestions. I know doing this for all the charts you have in here is probably no trivial task.

One last question before I make this change later: do you want me to make master.readinessProbe an object as well? I'm OK with it either way, but I'd say keep them similar.
> Do you want me to make master.readinessProbe an object as well?

I would say yes, for consistency 🙂
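On the template side, this kind of pattern is commonly written with `with`/`toYaml`, so that an empty object renders nothing at all. A sketch for both probes (not the chart's actual template; the container name and indentation depths are assumptions):

```yaml
containers:
  - name: locust
    {{- with .Values.master.livenessProbe }}
    livenessProbe:
      {{- toYaml . | nindent 6 }}
    {{- end }}
    {{- with .Values.master.readinessProbe }}
    readinessProbe:
      {{- toYaml . | nindent 6 }}
    {{- end }}
```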
Thanks for the effort and explanation @byronmansfield! I think it's mergeable as is, but I raised one point here.
Current CI error:
🤔
@max-rocket-internet thanks. Either I can't see the CI output because I'm not part of the group/org, which is understandable, or I've been in the GitLab/Jenkins CI space too long and can't figure out where to look. Either way, here is what I can give you from my knowledge (I don't claim to be a k8s ninja) and local testing, plus some good ol' fashioned googling.

Many times when working on helm charts, I lean on large, feature-rich, complex projects such as Vault's or Jenkins's helm charts, just to see how they handle it. They have years of development behind them and people doing all kinds of wild configurations. I cannot see anything immediately in their variations of how they set up their probes; Jenkins does theirs in more of this style.

Is it possible to share a bit more about what the CI jobs are testing with, so that I can try to troubleshoot? I tried looking at ci/ct.yaml (I'm assuming this is the failing test), although I'm confused as to what it actually does. Sorry I can't be more help.
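For what it's worth, ci/ct.yaml is typically a config file for helm/chart-testing (ct), which lints changed charts and installs them against a throwaway cluster. A typical config might look like this sketch (keys are from chart-testing's documented options; the exact contents of this repo's file are an assumption):

```yaml
# Hypothetical chart-testing (ct) configuration; not this repo's actual file.
remote: origin
target-branch: master
chart-dirs:
  - stable
helm-extra-args: "--timeout 600s"
check-version-increment: true
```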
Oh, one last thing: once we figure this out, how would you feel about me adding a startupProbe, just to cover all bases? I don't particularly need it for my project, but it seems similar enough and a trivial feature to add once we get past the current error. A rough sketch follows below.
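Following the same object convention, a startupProbe might look like this in values.yaml (purely illustrative; the probe target and thresholds are assumptions, and this is not part of the PR as reviewed):

```yaml
master:
  # Empty by default; uncomment to gate the other probes until startup completes.
  startupProbe: {}
  # startupProbe:
  #   httpGet:
  #     path: /
  #     port: 8089
  #   periodSeconds: 10
  #   failureThreshold: 30
```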
Any logged in user can see it: https://github.com/deliveryhero/helm-charts/actions/runs/11905140650/job/33175111907#step:9:23
Do it 🚀
I'll take a look 🕒
Yes, it's quite annoying that this error isn't catchable until it gets sent to a real k8s API. Well, I guess that's why we have the "chart testing" CI test. I've suggested a change on your PR that should fix it 🙂
OK nice, thanks @byronmansfield
Description
Updates the service and ingress to accept more port configurability
Checklist
- Title of the PR starts with chart name (e.g. [stable/mychartname])