
Add resources requests and limits management #127

Open · wants to merge 2 commits into main
Conversation

jusabatier
Contributor

This PR adds the ability to set CPU and RAM requests and limits for each app.

Related to this issue : #102
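For illustration, a values.yaml excerpt showing the kind of per-app setting this PR enables (the key names here are illustrative, not necessarily the chart's actual schema):

```yaml
# Illustrative only: actual key names depend on the chart's values schema.
geoserver:
  resources:
    requests:
      cpu: "500m"
      memory: "1Gi"
    limits:
      cpu: "2"
      memory: "2Gi"
```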

@edevosc2c
Member

edevosc2c commented Jan 21, 2025

Thank you Julien; my points at the end of my last comment in issue #102 about Java and memory limits still apply before this PR can be merged.

There is a real need to investigate; the last thing I want is apps that restart every hour because they don't limit their own memory, resulting in an OOMKill.


The tests are failing, probably because you need to run helm lint to fix the linting issues: https://github.com/georchestra/helm-georchestra/actions/runs/12882804969/job/35915756058?pr=127

@jusabatier
Contributor Author

Thank you Julien; my points at the end of my last comment in issue #102 about Java and memory limits still apply before this PR can be merged.

I'm not sure I understand how to test this. Does it imply modifying the Docker images used?
Is it a test you will perform on your instance, or do I have to test it myself?

As explained in the README update, the default values are provided for testing purposes; for prod, they have to be tuned depending on platform usage and traffic.

The tests are failing, probably because you need to run helm lint to fix the linting issues

I tried, but nothing seems wrong:

$ helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended
[WARNING] /home/jusabatier/Workspace/git/helm-georchestra: chart directory is missing these dependencies: postgresql,rabbitmq

1 chart(s) linted, 0 chart(s) failed

@edevosc2c
Member

edevosc2c commented Jan 21, 2025

I'm not sure I understand how to test this. Does it imply modifying the Docker images used? Is it a test you will perform on your instance, or do I have to test it myself?

As explained in the README update, the default values are provided for testing purposes; for prod, they have to be tuned depending on platform usage and traffic.

You have multiple ways to test that.

The major outcome is to see whether Java is going to limit its memory usage. So you can either set a low memory limit and/or stress test the application so that it uses as much memory as possible, then see if it reacts correctly without any OOMKill.

I tried, but nothing seems wrong:

Ha, yes indeed, it's not helm lint but ct lint. You have the output here: https://github.com/georchestra/helm-georchestra/actions/runs/12882804969/job/35915756058?pr=127

[screenshot: ct lint output]

@jusabatier
Contributor Author

I investigated the JVM parameters to use in containers a little; here is what I found:

  • Java 10 introduced -XX:+UseContainerSupport (enabled by default), which makes the JVM use sane defaults in a container environment. This feature has been backported to Java 8 since 8u191
  • -XX:+UseContainerSupport allows the JVM to read cgroup limits like available CPUs and RAM from the host machine and configure itself accordingly. Doing so allows the JVM to die with an OutOfMemoryError instead of the container being killed
  • The old (and somewhat broken) flags -XX:{Min|Max}RAMFraction are now deprecated. There is a new flag -XX:MaxRAMPercentage, that takes a value between 0.0 and 100.0 and defaults to 25.0
  • Setting -Xmx and -Xms disables the automatic heap sizing

Sources:

Summary:

We should remove -Xmx and -Xms and replace them with -XX:+UseContainerSupport combined with -XX:MaxRAMPercentage=75.0
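As a sketch, this is the kind of option string that change would produce (the JAVA_OPTS variable name is an assumption about how the images pass flags to the JVM; the actual entrypoints may differ):

```shell
# Sketch: container-aware heap sizing instead of fixed -Xmx/-Xms.
# JAVA_OPTS is an assumed variable name; the actual images may use another.
JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
echo "$JAVA_OPTS"
```

With these flags the JVM reads the cgroup memory limit and sizes its heap to 75% of it, leaving headroom for metaspace and native memory.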

@edevosc2c
Member

Thank you for the details @jusabatier. I wasn't aware that setting -Xmx and -Xms disables the automatic heap sizing, which is probably what caused some issues with the geo2france GeoServer when I tried to limit the memory.

Actually, we have an open issue for removing -Xmx and -Xms here: georchestra/georchestra#4354

We should remove -Xmx and -Xms and replace them with -XX:+UseContainerSupport combined with -XX:MaxRAMPercentage=75.0

Yes, very good idea, this will be a requirement for merging the PR. We can continue this discussion at georchestra/georchestra#4354

Is -XX:+UseContainerSupport really important to add if it's enabled by default?

@jusabatier
Contributor Author

Is -XX:+UseContainerSupport really important to add if it's enabled by default?

I don't think so, but if for whatever reason the default changes, it's safer to add it explicitly.
Also, it can prompt new devops to look up what it implies and document themselves.

In the article, they also suggest it can be beneficial to:

  • set the visible processor count to 2x the container CPU limit (so it should be customisable with an ENV var or entrypoint.sh)
    => -XX:ActiveProcessorCount=<2x yourCpuLimit>
  • set up the garbage collector with -XX:+UseParallelGC

Depending on how the app uses threading.
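A minimal sketch of how an entrypoint could derive that flag, assuming the chart injects the CPU limit as an integer env var (CPU_LIMIT is a hypothetical name, not something the chart currently provides):

```shell
# Sketch: derive -XX:ActiveProcessorCount from the container CPU limit.
# CPU_LIMIT is a hypothetical env var assumed to be injected by the chart.
CPU_LIMIT="${CPU_LIMIT:-2}"
PROC_COUNT=$(( CPU_LIMIT * 2 ))   # 2x the container CPU limit, per the article
JAVA_OPTS="-XX:ActiveProcessorCount=${PROC_COUNT} -XX:+UseParallelGC"
echo "$JAVA_OPTS"
```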

@jusabatier
Contributor Author

I read an article @pmauduit shared with me on IRC, and they suggest it's better to set:

  • No CPU limit, only a request: Pods always get the CPU requested by their CPU request. They can also take advantage of excess CPU if they have no limit.

Source : https://home.robusta.dev/blog/stop-using-cpu-limits

Do you think we should remove the CPU limits?
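For illustration, a resources block along those lines (values are placeholders): memory keeps both a request and a limit, while CPU gets only a request, per the article's advice.

```yaml
# Illustrative: CPU request but no CPU limit; memory keeps a limit
# so a leaking app is still contained.
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    memory: "1Gi"
```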

@edevosc2c
Member

I don't think so, but if for whatever reason the default changes, it's safer to add it explicitly.

If it's the default, we shouldn't override it with the same value. That would be confusing.

In the article, they also suggest it can be beneficial to:

* set the visible processor count to 2x the container CPU limit (so it should be customisable with an ENV var or entrypoint.sh)
  => `-XX:ActiveProcessorCount=<2x yourCpuLimit>`

* set up the garbage collector with `-XX:+UseParallelGC`

After reading paketo-buildpacks/libjvm#136 (comment), we can add ActiveProcessorCount but only for the apps still on java 8/11.

  • No CPU limit, only a request: Pods always get the CPU requested by their CPU request. They can also take advantage of excess CPU if they have no limit.

As of right now, we recommend the helm chart to only be deployed on a single node in a Kubernetes cluster.

Requests are useful when you have multiple nodes in the cluster, so that Kubernetes understands how to spread the different apps across the nodes. So in our case, requests are of little use (even though we are still required to set them in order to limit the resources).

I'm still in favor of having limits, because some apps like GeoServer are very resource hungry, and across all of our clients' infrastructures, when these apps consume too many resources it slows down the other apps, which is not ideal. So for users on the default settings, we establish a good practice of restricting apps from consuming all the resources and causing a general slow-down of the geOrchestra platform.

  • set up the garbage collector with -XX:+UseParallelGC

No idea whether we should add this one or not. Any ideas @pmauduit?

@jusabatier
Contributor Author

As of right now, we recommend the helm chart to only be deployed on a single node in a Kubernetes cluster.

I didn't know about that. Is there a specific reason that forces deployment on a single node?

Also, I wonder if the current values aren't too high for testing purposes?
I think we should set low default values (allowing tests on small clusters like minikube or k3s), and perhaps provide a table like the one above for different usages (medium prod, high prod, ...).
Because IMHO, a prod instance must have customized values adapted to the traffic it handles.

@edevosc2c
Member

Also, I wonder if the current values aren't too high for testing purposes?

Well, if someone is using the Helm chart in Kubernetes, it's to deploy geOrchestra in production. If they want to test geOrchestra in a test environment, we have the Docker composition: https://github.com/georchestra/docker.

So I would be inclined to keep building this Helm chart for production purposes; at least, that's why Camptocamp develops it. In my opinion, the default values should reflect what an average production environment might be.

But we can for sure guide people to adapt these limits.

I didn't know about that. Is there a specific reason that forces deployment on a single node?

We just haven't tested it. At Camptocamp we rely on the fact that it's a single node because it makes managing the data much easier: we have a central "SFTP" server to access all the various PVCs created by the modules.

And usually in a multi-node scenario, you need to use NFS to make the data accessible from any server in the cluster, and NFS is slower than accessing the data from the drive itself because it goes over the network. While you can use Longhorn to move the data from one server to another, there is still a performance hit.

Since the modules were designed with neither Kubernetes nor multi-server setups in mind, it's possible you could get worse performance than with a single server. Finally, the majority of the modules can't be made highly available, so you won't benefit from multiple servers compared to a single one.
