
sig-node e2e tests machine hardware requirements #7339

Open
ffromani opened this issue Sep 25, 2024 · 15 comments
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
  • sig/k8s-infra: Categorizes an issue or PR as relevant to SIG K8s Infra.
  • sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments


ffromani commented Sep 25, 2024

sig-node owns a set of features related to exposing and using hardware details, which require specific hardware features to exercise the code. Examples are exclusive CPU allocation (cpumanager), device allocation (device manager), NUMA alignment (topology manager), and NUMA alignment considering distances between NUMA zones (topology manager).

Note: some requirements overlap. Easy example: a powerful, high-end (at the time of writing) server CPU can at the same time have a high core count, expose multiple NUMA nodes, and have a split L3 cache, satisfying all cpumanager requirements in one go.

Hardware requirements, driven by feature, with rationale:

(this list will be updated after further review of the ongoing sig-node features)
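
For reference, a minimal sketch of how a candidate machine could be checked against these requirements from inside a running instance, assuming a Linux guest with the standard sysfs layout (the exact paths and the SMT flag depend on the kernel version):

```python
#!/usr/bin/env python3
"""Rough sysfs-based check of the topology properties the sig-node e2e
lanes care about: NUMA node count, socket count, split L3, SMT state.
Run on the candidate instance itself; paths assume a standard Linux kernel."""
import glob
import os

def read(path):
    with open(path) as f:
        return f.read().strip()

# NUMA nodes: one node<N> directory per NUMA node.
numa_nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
print(f"NUMA nodes: {len(numa_nodes)}")

# Physical sockets: distinct physical_package_id values across CPUs.
packages = set()
for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
    pkg = os.path.join(cpu, "topology", "physical_package_id")
    if os.path.exists(pkg):
        packages.add(read(pkg))
print(f"Physical sockets: {len(packages)}")

# L3 cache domains: distinct shared_cpu_list values among level-3 caches.
# More than one L3 per socket indicates split L3 (e.g. AMD EPYC CCXs).
l3_domains = set()
for index in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index[0-9]*"):
    if read(os.path.join(index, "level")) == "3":
        l3_domains.add(read(os.path.join(index, "shared_cpu_list")))
print(f"L3 cache domains: {len(l3_domains)}")

# SMT: /sys/devices/system/cpu/smt/active is 1 when hyperthreading is on.
smt_path = "/sys/devices/system/cpu/smt/active"
if os.path.exists(smt_path):
    print(f"SMT active: {read(smt_path) == '1'}")
```

Running this on a candidate instance should be enough to tell whether it covers the multi-NUMA, multi-socket, split-L3, and SMT cases.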

@ffromani added the sig/k8s-infra label Sep 25, 2024
@ffromani

tagging some relevant sig-node people: @kannon92 @PiotrProkop @klueska


ameukam commented Sep 25, 2024

cc @dims @upodroid @BenTheElder


BenTheElder commented Sep 25, 2024

We have EC2 and GCE set up particularly well at the moment; do any of the machine types available there meet your requirements?

Please make sure any new resources you use on any platform are handled by the kubernetes-sigs/boskos cleanup scripts.
If you're using GCP projects / AWS accounts with VMs, that should already work.

@ffromani

@catblade kindly pointed out that Equinix donated cloud credits; their offering also seems interesting and maybe we can use it. Some CNCF TAGs already make use of it. This is the reference I got: https://github.com/cncf-tags/green-reviews-tooling/


ameukam commented Sep 25, 2024

@ffromani why can't we use AWS EC2 instances to run those tests?

@ffromani

@ffromani why can't we use AWS EC2 instances to run those tests?

I think we totally can; I'm not aware of any blocker. The efforts in this area have been somewhat sparse, so we're taking the chance of the sig-node 1.32 planning to re-evaluate and improve the current state. Will review the GCP/AWS offerings and comment.

@BenTheElder

@catblade kindly pointed out that Equinix donated cloud credits; their offering also seems interesting and maybe we can use it. Some CNCF TAGs already make use of it. This is the reference I got: https://github.com/cncf-tags/green-reviews-tooling/

Yes, however we generally run critical infra on Kubernetes-specific resource allocations, and we don't currently have a lot set up to manage this. For Equinix, SIG K8s Infra doesn't currently have observability into the amount of resources available or the spending trends, which has bitten us in the past (see reports like https://kubernetes.slack.com/archives/CCK68P2Q2/p1727127173398879 for some of the others).

(@dims does have cs.k8s.io running on Equinix currently; we also have some presence in DO and Azure, though not as mature yet, and Fastly for CDN.)

It would be easier if we could use one of the vendors for which we already have tooling (like https://github.com/kubernetes-sigs/boskos) set up to avoid resource leaks, etc.

Otherwise we need help to invest in and onboard new resource types, observability into utilization and remaining credits, etc


ffromani commented Oct 21, 2024

Review of the GCP offering:
AMD: https://cloud.google.com/compute/docs/cpu-platforms#amd_processors
We do have some EPYCs which seem to fit all the requirements. We perhaps need to double-check the > 2 NUMA nodes requirement, which brings us to C2D or Tau T2D instances, which are surely good:

  • c2d >= c2d-standard-32 <- note SMT is enabled here
  • tau-t2d >= t2d-standard-32 <- note SMT is disabled here

Considering the SMT support status, c2d seems better for our purposes.

Intel: https://cloud.google.com/compute/docs/cpu-platforms#intel_processors
IIRC we already have lanes running on n1 or n2 instances, which should be good for everything except the split-L3 and > 2 NUMA node requirements:

  • n2 >= n2-standard-32

Action item: review the current usage of Intel n2 instances and comment here.
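
For comparing the machine types above, a minimal sketch assuming the gcloud CLI is installed and authenticated (the zone is only an example); note that machine-type metadata does not expose NUMA or L3 topology, so that still has to be verified on a running instance (e.g. with the sysfs check earlier in this thread):

```python
#!/usr/bin/env python3
"""Compare vCPU/memory specs of the GCE machine types discussed above.
Assumes the gcloud CLI is installed and authenticated; the zone is only
an example. NUMA and L3 layout are not exposed by this API."""
import json
import subprocess

ZONE = "us-central1-a"  # example zone, pick whatever the CI jobs use
CANDIDATES = ["c2d-standard-32", "t2d-standard-32", "n2-standard-32"]

for machine_type in CANDIDATES:
    out = subprocess.run(
        ["gcloud", "compute", "machine-types", "describe", machine_type,
         "--zone", ZONE, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    info = json.loads(out)
    print(f"{machine_type}: {info['guestCpus']} vCPUs, {info['memoryMb']} MB")
```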


ameukam commented Oct 21, 2024

Can we also get the same analysis for AWS? And possibly for Azure?

@ffromani

Can we also get the same analysis for AWS? And possibly for Azure?

yes, ongoing


ffromani commented Oct 21, 2024

Review of the AWS offering:
Looking at the available instance types, the best candidates (all factors considered, including budget-aware selection) seem to be the "compute intensive" or "memory intensive" families.

compute intensive:
Intel: c5 >= c5.12xlarge seems OK, but it is unlikely (the docs seem more opaque) to be multi-NUMA and multi-socket. It is also unclear whether it has split L3.
AMD: I managed to find some deep-dive information, and c5a >= c5a.8xlarge seems OK. Noteworthy that these instances DO seem to have split L3, but whether they are multi-NUMA needs to be verified. They are unlikely to be multi-socket.

memory intensive:
AMD: we have interesting CPUs, EPYC 7571, which according to chatter on the internet seem to be AWS-specific versions of the AMD 7601 SKUs, which in turn do have split L3; the multi-NUMA and multi-socket status is still unclear.
r5a >= r5a.16xlarge seems to be OK, with the usual caveats.

Intel: likewise, with the usual caveats,
r5 >= r5.16xlarge seems to be OK.
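
A similar minimal sketch for the AWS candidates, assuming boto3 and configured credentials (the region is only an example); the EC2 API exposes core and thread counts but not NUMA node count or L3 layout, so those still need to be verified on a running instance:

```python
#!/usr/bin/env python3
"""Compare core/thread counts for the EC2 candidates above via boto3.
Assumes AWS credentials are configured; the region is only an example.
NUMA and L3 layout are not exposed by this API."""
import boto3

CANDIDATES = ["c5.12xlarge", "c5a.8xlarge", "r5a.16xlarge", "r5.16xlarge"]

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
resp = ec2.describe_instance_types(InstanceTypes=CANDIDATES)
for it in resp["InstanceTypes"]:
    vcpu = it["VCpuInfo"]
    print(f"{it['InstanceType']}: {vcpu['DefaultCores']} cores x "
          f"{vcpu['DefaultThreadsPerCore']} threads/core "
          f"= {vcpu['DefaultVCpus']} vCPUs")
```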

@ffromani

So I need to figure out what the multi-NUMA and multi-socket situation is on both AWS and GCP.

Chances are high that if we require "high enough" core counts, we exceed the silicon limits and the cloud provider is forced to give us multi-socket instances (a single physical socket can only have so many cores), but this is, and likely will remain, us second-guessing them.
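
One way to settle the multi-NUMA question (and the NUMA distances relevant for the topology manager) empirically is to dump the distance matrix from sysfs on a running instance; a minimal sketch assuming a Linux guest with the standard sysfs layout:

```python
#!/usr/bin/env python3
"""Print the NUMA distance matrix from sysfs (run on the instance).
Multiple rows confirm multi-NUMA; non-uniform distances are what the
distance-aware topology manager alignment tests need."""
import glob
import os

nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"),
               key=lambda p: int(p.rsplit("node", 1)[-1]))
print("node  " + "  ".join(os.path.basename(n) for n in nodes))
for n in nodes:
    with open(os.path.join(n, "distance")) as f:
        print(f"{os.path.basename(n)}  {f.read().strip()}")
```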

In addition, I need to review the current usage for periodic non-presubmit jobs. IIRC we have n1 or n2 instances; will check and comment here and crosslink old conversations.

Coming up next: Azure.


ameukam commented Dec 7, 2024

/sig node
/kind feature
/priority backlog

@k8s-ci-robot added the sig/node, kind/feature, and priority/backlog labels Dec 7, 2024
@ameukam moved this to Backlog in SIG K8S Infra Dec 7, 2024