[RayCluster] Add multi-host indexing labels #3998
Conversation
Force-pushed from 8a74046 to a6b94b3.
@ryanaoleary PTAL when you get the chance.
It'd be good to make clear the value of this PR. Currently, host and replica indexing for multi-host workers happens in a separate GKE webhook that injects these values as env vars and a k8s label. This PR moves the logic for indexing KubeRay worker Pods that request TPU from that webhook into KubeRay itself.

By assigning indices as k8s Pod labels directly from KubeRay when the Pods are created, we avoid the need for complicated logic in the TPU webhook that tracks the state of multi-host replicas in a RayCluster using a PodInformer. Since these variables are already used in Ray core and libraries like Train to handle the multi-host case, it makes sense to consolidate the logic in KubeRay. Additionally, since KubeRay is aware of when Pods are deleted, it becomes easier to scale down multi-host replicas atomically.

Overall, this PR consolidates logic that is currently spread across the TPU webhook, KubeRay, and Ray core. The next step after this PR would be to move the environment variable injection that currently happens in the TPU webhook into Ray core, when the Raylet is started on a node. The worker lifecycle would then look as follows for multi-host workers:
Force-pushed from a6b94b3 to 6935b9e.
```go
	}

	// Check if RayTpuMulithostIndexing feature is enabled
	// Currently multihostIndexing won't work with Autoscaler v2 since autoscaler delete doesn't follow replica groups
```
> // Currently multihostIndexing won't work with Autoscaler v2 since autoscaler delete doesn't follow replica groups
Can you explain more why this wouldn't work with the v2 autoscaler? Since it currently scales by replicas, my initial thinking was that there shouldn't be an incompatibility.
I see. If that's the case then it's a misunderstanding on my side, based on our prior discussion, where my interpretation was that there was an incompatibility due to how it scaled the replicas. Will remove the autoscaling v2 check.
I could just be forgetting what the incompatibility is. If I'm remembering correctly, the v2 autoscaler determines the number of replicas of a group to scale, and then submits a scale request by patching both the replica count and workersToDelete of that group here.
There could be an issue with how the v2 autoscaler scales down here, since it doesn't consider whether to_delete_id is part of a multi-host group and will cause the entire group to scale down. I think this might be fine, though, since we consider NumOfHosts in the desired number of workers of a type here.
We could add an e2e autoscaler test here https://github.com/ray-project/kuberay/tree/master/ray-operator/test/e2eautoscaler to verify the scale up/down behavior for both the V1 and V2 autoscaler in either this PR or a follow-up. That should probably be one of the requirements for moving this feature from alpha to beta.
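As a rough starting point for that test, an assertion like the one below could check that every multi-host replica group stays complete after a scale up/down. This is only a sketch: it uses controller-runtime's client directly rather than the existing e2eautoscaler helpers, and the `ray.io/group` label key used to select the group's Pods is an assumption.

```go
// Sketch of an e2e-style assertion (not using the repo's existing e2e helpers):
// every multi-host replica group of a worker group should contain exactly
// NumOfHosts Pods after scaling. The "ray.io/group" key is assumed here.
package e2e

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func assertCompleteReplicaGroups(ctx context.Context, c client.Client, namespace, groupName string, numOfHosts int) error {
	var pods corev1.PodList
	if err := c.List(ctx, &pods, client.InNamespace(namespace),
		client.MatchingLabels{"ray.io/group": groupName}); err != nil {
		return err
	}
	// Count Pods per replica group using the label added by this PR.
	groupSizes := map[string]int{}
	for _, p := range pods.Items {
		groupSizes[p.Labels["ray.io/worker-group-replica-index"]]++
	}
	for replica, count := range groupSizes {
		if count != numOfHosts {
			return fmt.Errorf("replica group %q has %d Pods, expected %d", replica, count, numOfHosts)
		}
	}
	return nil
}
```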
Force-pushed from 45d53ae to bb602d5.
Once this is passing CI, I think we can mark this as ready for review and ask other KubeRay contributors to review the feature.
Force-pushed from 5bebd86 to 1f21e83.
Force-pushed from 1f21e83 to 7419b56.
```go
		return errstd.Join(utils.ErrFailedCreateWorkerPod, err)
	}

	// Worker creation path for multi-host indexing
	if multihostIndexingEnabled {
```
The logic below seems pretty complicated; should we abstract it away into a util package?
I don't think we need to, since all it's really doing is going through and creating the remaining workers in groups. Abstracting it away into a util package would introduce another layer of indirection, which I thought might be unnecessary.
Yeah, I don't think a util package is necessary, but I moved it to its own reconciliation function in e9a8b23 because reconcilePods was getting really long and I wanted it to be very clear what's behind the feature gate.
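For readers following along, the creation path being discussed is roughly the following shape. This is a minimal sketch under assumed names rather than the PR's actual implementation: the replica-index label key matches the one referenced in the diff, but the host-index key and the helper are illustrative.

```go
// Sketch of creating one multi-host replica group at a time: every replica gets
// NumOfHosts Pods, each labelled with the replica it belongs to and its host
// index within that replica. The host-index key and helper name are assumptions.
package sketch

import (
	"fmt"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

const (
	replicaIndexLabel = "ray.io/worker-group-replica-index" // key referenced in the diff
	hostIndexLabel    = "ray.io/worker-host-index"          // assumed name for illustration
)

// buildReplicaGroup returns the Pods that make up a single multi-host replica.
func buildReplicaGroup(template corev1.Pod, groupName string, replicaIndex int, numOfHosts int32) []corev1.Pod {
	replicaName := fmt.Sprintf("%s-%d", groupName, replicaIndex)
	pods := make([]corev1.Pod, 0, numOfHosts)
	for hostIndex := int32(0); hostIndex < numOfHosts; hostIndex++ {
		pod := *template.DeepCopy()
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels[replicaIndexLabel] = replicaName
		pod.Labels[hostIndexLabel] = strconv.Itoa(int(hostIndex))
		pods = append(pods, pod)
	}
	return pods
}
```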
I removed the "[TPU]" from the title; we should ensure this implementation is generic enough to also be used for GPUs, e.g. labelling worker Pods in an NVLink domain using GB200s.
Hi @ryanaoleary, could you help to merge this with the master branch again? There is a fix for tests.
Force-pushed from 88577eb to 12829cf.
@rueian Done, should be up to date now.
```go
logger := ctrl.LoggerFrom(ctx)

// 1. Group existing pods by ray.io/worker-group-replica-index.
replicaMap := make(map[string][]corev1.Pod)
for _, pod := range workerPods {
	if replicaName, ok := pod.Labels[utils.RayWorkerReplicaIndexKey]; ok {
		replicaMap[replicaName] = append(replicaMap[replicaName], pod)
	}
}

// 2. Clean up incomplete replica groups with deleted Pods caused by external deletion.
for replicaName, podList := range replicaMap {
	if len(podList) < int(worker.NumOfHosts) {
		logger.Info("Found incomplete multi-host replica group, deleting all remaining pods to maintain atomicity.", "group", worker.GroupName, "replica", replicaName, "found", len(podList), "expected", worker.NumOfHosts)
		if err := r.deletePods(ctx, instance, podList, worker.GroupName, "cleanup of incomplete multi-host group"); err != nil {
			return err
		}
		// Requeue to avoid creating new Pods on this reconciliation.
		return fmt.Errorf("cleaned up incomplete replica group %s, requeueing", replicaName)
	}
}

// 3. Delete unhealthy replica groups.
deletedPods := make(map[string]struct{})
for _, pod := range workerPods {
	if _, alreadyDeleted := deletedPods[pod.Name]; alreadyDeleted {
		continue
	}
	if shouldDelete, reason := shouldDeletePod(pod, rayv1.WorkerNode); shouldDelete {
		replicaName := pod.Labels[utils.RayWorkerReplicaIndexKey]
		podsToDelete, ok := replicaMap[replicaName]
		if !ok {
			continue
		}
		logger.Info("Deleting unhealthy replica group.", "group", worker.GroupName, "replica", replicaName, "reason", reason)
		if err := r.deletePods(ctx, instance, podsToDelete, worker.GroupName, reason); err != nil {
			return err
		}
		// All Pods in the group have been deleted.
		for _, p := range podsToDelete {
			deletedPods[p.Name] = struct{}{}
		}
	}
}
if len(deletedPods) > 0 {
	return fmt.Errorf("deleted %d unhealthy worker Pods in multi-host groups, requeueing", len(deletedPods))
}

// 4. Handle explicit deletions from the autoscaler.
if len(worker.ScaleStrategy.WorkersToDelete) > 0 {
	podsToDeleteFromStrategy := make(map[string]corev1.Pod)
	for _, podName := range worker.ScaleStrategy.WorkersToDelete {
		for _, pod := range workerPods {
			if pod.Name == podName {
				replicaName := pod.Labels[utils.RayWorkerReplicaIndexKey]
				for _, p := range replicaMap[replicaName] {
					podsToDeleteFromStrategy[p.Name] = p
				}
				break
			}
		}
	}

	if len(podsToDeleteFromStrategy) > 0 {
		logger.Info("removing the pods in the scaleStrategy of", "group", worker.GroupName, "podsToDelete", len(podsToDeleteFromStrategy))
		var podsToDel []corev1.Pod
		for _, p := range podsToDeleteFromStrategy {
			podsToDel = append(podsToDel, p)
		}
		if err := r.deletePods(ctx, instance, podsToDel, worker.GroupName, "autoscaler scale-down request"); err != nil {
			return err
		}
		worker.ScaleStrategy.WorkersToDelete = []string{}
		return fmt.Errorf("deleted %d worker Pods based on ScaleStrategy, requeueing", len(podsToDel))
	}
	// Clear WorkersToDelete after deletion.
	worker.ScaleStrategy.WorkersToDelete = []string{}
}
```
Suggested change:

```go
logger := ctrl.LoggerFrom(ctx)
deletedPods := make(map[string]struct{})

// 1. Group existing pods by ray.io/worker-group-replica-index.
replicaMap := make(map[string][]corev1.Pod)
for _, pod := range workerPods {
	if replicaName, ok := pod.Labels[utils.RayWorkerReplicaIndexKey]; ok {
		replicaMap[replicaName] = append(replicaMap[replicaName], pod)
	}
}

// 2. Clean up incomplete replica groups with deleted Pods caused by external deletion.
for replicaName, podList := range replicaMap {
	if len(podList) < int(worker.NumOfHosts) {
		logger.Info("Found incomplete multi-host replica group, deleting all remaining pods to maintain atomicity.", "group", worker.GroupName, "replica", replicaName, "found", len(podList), "expected", worker.NumOfHosts)
		if err := r.deletePods(ctx, instance, podList, worker.GroupName, "cleanup of incomplete multi-host group"); err != nil {
			return err
		}
		for _, p := range podList {
			deletedPods[p.Name] = struct{}{}
		}
	}
}

// 3. Delete unhealthy replica groups.
for _, pod := range workerPods {
	if _, alreadyDeleted := deletedPods[pod.Name]; alreadyDeleted {
		continue
	}
	if shouldDelete, reason := shouldDeletePod(pod, rayv1.WorkerNode); shouldDelete {
		replicaName := pod.Labels[utils.RayWorkerReplicaIndexKey]
		podsToDelete, ok := replicaMap[replicaName]
		if !ok {
			continue
		}
		logger.Info("Deleting unhealthy replica group.", "group", worker.GroupName, "replica", replicaName, "reason", reason)
		if err := r.deletePods(ctx, instance, podsToDelete, worker.GroupName, reason); err != nil {
			return err
		}
		// All Pods in the group have been deleted.
		for _, p := range podsToDelete {
			deletedPods[p.Name] = struct{}{}
		}
	}
}

// 4. Handle explicit deletions from the autoscaler.
if len(worker.ScaleStrategy.WorkersToDelete) > 0 {
	podsToDeleteFromStrategy := make(map[string]corev1.Pod)
	for _, podName := range worker.ScaleStrategy.WorkersToDelete {
		for _, pod := range workerPods {
			if pod.Name == podName {
				replicaName := pod.Labels[utils.RayWorkerReplicaIndexKey]
				for _, p := range replicaMap[replicaName] {
					podsToDeleteFromStrategy[p.Name] = p
				}
				break
			}
		}
	}
	if len(podsToDeleteFromStrategy) > 0 {
		logger.Info("removing the pods in the scaleStrategy of", "group", worker.GroupName, "podsToDelete", len(podsToDeleteFromStrategy))
		var podsToDel []corev1.Pod
		for _, p := range podsToDeleteFromStrategy {
			if _, ok := deletedPods[p.Name]; ok {
				continue
			}
			podsToDel = append(podsToDel, p)
		}
		if err := r.deletePods(ctx, instance, podsToDel, worker.GroupName, "autoscaler scale-down request"); err != nil {
			return err
		}
		for _, p := range podsToDel {
			deletedPods[p.Name] = struct{}{}
		}
	}
	// Clear WorkersToDelete after deletion.
	worker.ScaleStrategy.WorkersToDelete = []string{}
}
```
Could we avoid requeue on deletion?
I think that should work fine too; I only added:

```go
if len(deletedPods) > 0 {
	return fmt.Errorf("deleted %d unhealthy worker Pods in multi-host groups, requeueing", len(deletedPods))
}
```

because that's the same way it's handled in reconcilePods in the existing deletion code. I'll accept the suggestion though, because I think it should work fine to process all deletions in one iteration.
I think it should work fine to process all deletions in one iteration. Requeueing by returning an error causes a stack trace to be logged, which is quite annoying. But I saw your latest change didn't do everything in one iteration and CI failed.
Oh sorry, I thought I'd removed it but must not have committed the change; done in 29924f1.
I believe the CI was failing because the feature was not enabled in the Buildkite config.
```go
	isRayMultiHostIndexing := worker.NumOfHosts > 1 && features.Enabled(features.RayMultiHostIndexing)
	if isRayMultiHostIndexing {
		if err := r.reconcileMultiHostWorkerGroup(ctx, instance, &worker, workerPods.Items); err != nil {
```
Just for my own understanding, was it easier to separate the single-host and multi-host reconciliation into its own function, as opposed to trying to have a single reconcile with conditionals for multi-host?
My concern with separate functions is that in the future it will be easy to forget to update reconcileMultiHostWorkerGroup, but it seems fine if the complexity of merging both is too high.
Keeping it separate for alpha is probably fine because we don't want to accidentally introduce changes in the default single-host code path, but for Beta (on by default), we may want to merge the code paths
Initially it was a single reconcile with conditionals for multi-host, and yeah, I changed it to a separate reconcile function because reconcilePods had become very long with several conditionals and I wanted the two code paths to be very clear, to avoid introducing unintended changes to the regular path.
I'm good with changing it to follow the previous pattern, whatever makes it easiest to merge. I agree that merging might be better when we turn this feature on by default.
Force-pushed from 1f2047e to c12c1d8.
Hi @ryanaoleary, is there a simple script I can try to test this PR on my kind Kubernetes cluster? I can have multiple nodes in my kind cluster.
@Future-Outlier None of the changes in this PR actually rely on the Pods being scheduled/running, so you can test it with the existing multi-host TPU RayCluster sample in the KubeRay repo: https://github.com/ray-project/kuberay/blob/master/ray-operator/config/samples/ray-cluster.tpu-v6e-16-multihost.yaml. The steps to manually test would be:
Signed-off-by: Aaron Liang <[email protected]>
Signed-off-by: Ryan O'Leary <[email protected]>
Force-pushed from c12c1d8 to 29924f1.
Alternatively, this should work with any RayCluster that has a worker group with `numOfHosts > 1`; we'd expect to see a replica index label on each of its worker Pods.
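As a quick way to eyeball the result, the labels can be listed with client-go. This is a sketch that assumes a kubeconfig at the default location, KubeRay's usual `ray.io/node-type=worker` selector, and an assumed name for the host-index key:

```go
// Sketch: list the worker Pods and print the indexing labels this PR adds.
// The host-index key name is an assumption; the replica-index key matches the
// one used in the reconciler code above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{
		LabelSelector: "ray.io/node-type=worker", // select Ray worker Pods
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s replica=%s host=%s\n",
			p.Name,
			p.Labels["ray.io/worker-group-replica-index"], // replica index label from this PR
			p.Labels["ray.io/worker-host-index"],          // assumed host-index label name
		)
	}
}
```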
Why are these changes needed?

Part of #3902. POC: adds a group (replica) index and a host index to multi-host workers.

These labels are useful for running workloads when `numOfHosts > 1` and/or `replicas > 1` for a worker group, where it's important that the workload runs on a specific worker or on `numOfHosts` workers scaled as part of the same replica. This is the case for TPU or GPU workloads with topology or worker-index requirements.

Additionally, this PR adds logic to atomically delete a worker group replica where `numOfHosts > 1`. This logic is necessary because most multi-host workloads will hang and fail when a single worker in the group fails, so we should delete or restart these Pods together.

Related issue number

For: #3902

Checks