sig-node e2e tests machine hardware requirements #7339
tagging some relevant sig-node people: @kannon92 @PiotrProkop @klueska
slack thread for context: https://kubernetes.slack.com/archives/CCK68P2Q2/p1727202732284529
We have EC2 and GCE pretty well set up at the moment in particular; do any of the machine types available there meet your requirements? Please make sure any new resources you use on any platform are handled by the kubernetes-sigs/boskos cleanup scripts.
@catblade kindly pointed out that Equinix donated cloud credits, and their offering also seems interesting; maybe we can use it. Some CNCF TAGs already make use of it. This is the reference I got: https://github.com/cncf-tags/green-reviews-tooling/
@ffromani why can't we use AWS EC2 instances to run those tests?
I think we totally can; I'm not aware of any blocker. The efforts in this area have been somewhat sparse, so we're using the sig-node 1.32 planning as a chance to re-evaluate and improve the current state. Will review the GCP/AWS offerings and comment.
Yes, however we generally run critical infra on Kubernetes-specific resource allocations, and we don't currently have a lot set up to manage this. For Equinix, SIG K8s Infra doesn't currently have observability into the amount of resources available or the spending trends, which has bitten us in the past (see reports like https://kubernetes.slack.com/archives/CCK68P2Q2/p1727127173398879 for some of the others). (@dims does have cs.k8s.io running on Equinix currently; we also have some presence in DO and Azure, but not as mature yet, and Fastly for CDN.) It would be easier if we could use one of the vendors for which we already have tooling set up (like https://github.com/kubernetes-sigs/boskos) to avoid resource leaks etc. Otherwise we need help to invest in and onboard new resource types, observability into utilization and remaining credits, etc.
review of the GCP offering:
Considering the SMT support status, c2d seems better for our purposes. Intel: https://cloud.google.com/compute/docs/cpu-platforms#intel_processors
AI: review current usage of Intel n2 instances and comment here
Can we also get the same analysis for AWS, and possibly for Azure?
yes, ongoing
review of the AWS offering: compute intensive: memory intensive: Intel: likewise, with the usual caveat,
So I need to figure out what the multi-NUMA and multi-socket situation is on both AWS and GCP. Chances are high that if we require a "high enough" core count, we exceed the silicon limits and the cloud provider is forced to give us multi-socket instances (a single physical socket can only have so many cores), but this is, and likely will remain, us second-guessing them. In addition, I need to review the current usage for the periodic non-presubmit jobs. IIRC we have n1 or n2 instances; will check, comment here, and crosslink the old conversations. Coming up next: Azure.
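To make the multi-socket/multi-NUMA check above concrete, here is a minimal sketch of how one could summarize the topology an instance actually exposes, by parsing the parseable output of `lscpu -p=CPU,SOCKET,NODE`. The sample layout below (8 CPUs across 2 sockets and 2 NUMA nodes) is purely illustrative and does not describe any specific AWS or GCP machine type.

```python
#!/usr/bin/env python3
"""Sketch: count CPUs, sockets, and NUMA nodes from `lscpu -p=CPU,SOCKET,NODE`
output, to check whether a candidate instance type is multi-socket/multi-NUMA."""


def topology_summary(lscpu_parseable: str) -> tuple[int, int, int]:
    """Return (cpu_count, socket_count, numa_node_count).

    Lines starting with '#' are comments in lscpu's parseable format.
    """
    cpus, sockets, nodes = set(), set(), set()
    for line in lscpu_parseable.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        cpu, socket, node = line.split(",")[:3]
        cpus.add(cpu)
        sockets.add(socket)
        nodes.add(node)
    return len(cpus), len(sockets), len(nodes)


# Hypothetical 8-CPU, 2-socket, 2-NUMA-node layout, for illustration only.
SAMPLE = """\
# CPU,SOCKET,NODE
0,0,0
1,0,0
2,0,0
3,0,0
4,1,1
5,1,1
6,1,1
7,1,1
"""

if __name__ == "__main__":
    print(topology_summary(SAMPLE))  # (8, 2, 2)
```

On a real candidate instance the input would come from running `lscpu -p=CPU,SOCKET,NODE` rather than the embedded sample.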
/sig node |
sig-node owns a set of features related to exposing and using hardware details, which require specific hardware features to exercise the code. Examples are exclusive CPU allocation (cpumanager), device allocation (device manager), NUMA alignment (topology manager), and NUMA alignment considering distances between NUMA zones (topology manager).
Note: some requirements overlap. Easy example: a powerful, high-end (at the time of writing) server CPU can at the same time have a high core count, expose multiple NUMA nodes, and have a split L3 cache, satisfying all cpumanager requirements in one go.
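The per-feature requirements and their overlap can be sketched as a simple matching check: given a machine spec, list which feature requirements it does not meet. The field names and the thresholds below are illustrative assumptions for the sketch, not agreed-upon sig-node numbers.

```python
"""Sketch: match a candidate machine type against per-feature hardware
requirements. Thresholds are placeholder assumptions, not sig-node policy."""

from dataclasses import dataclass


@dataclass
class MachineSpec:
    cores: int        # physical core count
    numa_nodes: int   # NUMA nodes exposed to the guest
    split_l3: bool    # whether the L3 cache is split into multiple domains


def unmet_requirements(spec: MachineSpec) -> list[str]:
    """Return the feature requirements this machine does NOT satisfy."""
    unmet = []
    if spec.cores < 16:  # placeholder threshold
        unmet.append("cpumanager: high core count for exclusive CPU allocation")
    if spec.numa_nodes < 2:
        unmet.append("topology manager: multiple NUMA nodes for alignment tests")
    if not spec.split_l3:
        unmet.append("cpumanager: split L3 cache")
    return unmet


# A high-end server CPU can satisfy everything at once, as noted above.
big = MachineSpec(cores=64, numa_nodes=4, split_l3=True)
small = MachineSpec(cores=4, numa_nodes=1, split_l3=False)

if __name__ == "__main__":
    print(unmet_requirements(big))    # []
    print(len(unmet_requirements(small)))  # 3
```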
Hardware requirements, driven by feature, with rationale:
This list will be updated after more review of the ongoing sig-node features.