MachinePool: avoid SetNotReady during normal processing #5537
```diff
@@ -615,19 +615,20 @@ func (m *MachinePoolScope) setProvisioningStateAndConditions(v infrav1.ProvisioningState) {
 	} else {
 		conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetDesiredReplicasCondition, infrav1.ScaleSetScaleDownReason, clusterv1.ConditionSeverityInfo, "")
 	}
-	m.SetNotReady()
+	m.SetReady()
 case v == infrav1.Updating:
 	conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetModelUpdatedCondition, infrav1.ScaleSetModelOutOfDateReason, clusterv1.ConditionSeverityInfo, "")
-	m.SetNotReady()
+	m.SetReady()
 case v == infrav1.Creating:
 	conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetRunningCondition, infrav1.ScaleSetCreatingReason, clusterv1.ConditionSeverityInfo, "")
 	m.SetNotReady()
 case v == infrav1.Deleting:
 	conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetRunningCondition, infrav1.ScaleSetDeletingReason, clusterv1.ConditionSeverityInfo, "")
 	m.SetNotReady()
 case v == infrav1.Failed:
 	conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetRunningCondition, infrav1.ScaleSetProvisionFailedReason, clusterv1.ConditionSeverityInfo, "")
 default:
 	conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetRunningCondition, string(v), clusterv1.ConditionSeverityInfo, "")
 	m.SetNotReady()
 }
}
```

Review comment on the `m.SetReady()` change in the `Updating` case:

Could you please add a comment on the rationale behind changing this behavior?

Comment on lines +628 to +629 (the `Failed` case):

Shouldn't we be marking the resource as …

From what I gathered in my experiments, the VMSS marks itself as failed if one VM has a failed provisioning state. If we mark the VMSS as failed in this case, further reconciliation is prevented until that one VM is either removed from the VMSS or goes back into the succeeded state. Because a VMSS can still scale up and down even if its provisioningState is failed, it's wrong to mark the whole AMP as failed and prevent reconciling. I explicitly didn't add …

Self notes: …

Also, in the scenario …
This change is probably evolving from the observation listed in #5515, i.e. …

Shouldn't the right approach to addressing this be something along the lines of the below?

Reference: #5515
I saw the referenced issue but didn't investigate whether the underlying issue is similar. From my understanding (what I wrote in the other comment reply), a VMSS might be different in that it makes its provisioningState dependent on the provisioningStates of the running VMs/instances in it.

Marking the VMSS as not ready has many implications, most notably that the providerIDList is not processed anymore, which leaves dangling VMs lying around that don't make it into CAPI/CAPZ at all.
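To illustrate the consequence described above, here is a minimal, self-contained sketch; the types and the `reconcileMachinePool` function are illustrative stand-ins, not the real CAPI/CAPZ structs. When the infra resource reports not ready, the provider ID list is never consumed, so instances that already exist in the VMSS stay invisible to CAPI.

```go
package main

import "fmt"

// Illustrative stand-ins for the real CAPI/CAPZ status types.
type AzureMachinePoolStatus struct {
	Ready          bool     // toggled by SetReady()/SetNotReady() in the scope
	ProviderIDList []string // instances the VMSS actually has
}

type MachinePoolStatus struct {
	Replicas int32 // what CAPI records once it trusts the infra resource
}

// reconcileMachinePool mimics the gating described in the thread: CAPI only
// copies the provider ID list after the infra resource reports Ready.
func reconcileMachinePool(infra AzureMachinePoolStatus, mp *MachinePoolStatus) {
	if !infra.Ready {
		// Not ready: provider IDs are ignored, so instances that already exist
		// in the VMSS never surface as CAPI machines ("dangling VMs").
		return
	}
	mp.Replicas = int32(len(infra.ProviderIDList))
}

func main() {
	infra := AzureMachinePoolStatus{
		Ready:          false, // e.g. SetNotReady() was called while Updating
		ProviderIDList: []string{"azure:///instance-0", "azure:///instance-1"},
	}
	var mp MachinePoolStatus
	reconcileMachinePool(infra, &mp)
	fmt.Println(mp.Replicas) // 0 — the two existing instances are not reflected
}
```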
Thank you for providing context on this.

As a user, it seems quite odd to me that the CAPZ controller would mark the AzureMachinePool's status as Ready when:

- `*m.MachinePool.Spec.Replicas != m.AzureMachinePool.Status.Replicas`
- `infrav1.ProvisioningState == infrav1.Updating`

If I were to stretch my understanding, it is sort of acceptable for the AzureMachinePool to mark itself Ready when `infrav1.ProvisioningState == infrav1.Succeeded` and `*m.MachinePool.Spec.Replicas != m.AzureMachinePool.Status.Replicas`. This implies to me that Azure has acknowledged the AzureMachinePool's request (hence `infrav1.ProvisioningState == infrav1.Succeeded`) and is working to get the desired replicas.

However, when `infrav1.ProvisioningState == infrav1.Updating`, it does not seem right to broadcast that the AzureMachinePool is Ready when Azure is clearly still working to get the VMs assigned and added to the VMSS. Is my understanding wrong here?
I need to probe this further, but relying on its own (AMP) status to progress in its reconciliation logic is not a good pattern. It will only lead to more dependence on its own status.
I think I outlined what happens when the AMP is not ready in the issue here: #4982 (comment)

See also the code from CAPI: https://github.com/kubernetes-sigs/cluster-api/blob/8d639f1fad564eecf5bda0a2ee03c8a38896a184/exp/internal/controllers/machinepool_controller_phases.go#L290-L319

From what I understand, `.Status.Ready` is used by CAPI to determine if the resource should be reconciled. The new v1beta2 status makes this much clearer (Initialization status). This change here ensures that we don't stop processing the AMP/MP with the current API version.

I always read the AMP Ready status as: it's possible to work with the AMP, i.e. scaling up and down is possible. Which is the case, even with a replica mismatch or an updating provisioningState.

The AMP Ready status is read by the CAPI MachinePool controller, so it has broader implications than just for the CAPZ controller. I think the replica difference or the provisioningState difference should not be reflected in the Ready flag but instead in conditions.

Does that make sense, or am I missing something?
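A minimal sketch of the pattern argued for here: keep the Ready flag true so CAPI keeps reconciling, and express model or replica drift through conditions instead. The condition names and the `markFalse` helper are simplified stand-ins, not the actual CAPZ or clusterv1 helpers.

```go
package main

import "fmt"

// Simplified condition model; the real code uses clusterv1 conditions.
type Condition struct {
	Type   string
	Status bool
	Reason string
}

type InfraStatus struct {
	Ready      bool
	Conditions []Condition
}

func markFalse(s *InfraStatus, condType, reason string) {
	s.Conditions = append(s.Conditions, Condition{Type: condType, Status: false, Reason: reason})
}

// setProvisioningState mirrors the shape of the diff above: transient states
// like Updating surface as conditions while Ready stays true, so the CAPI
// MachinePool controller keeps processing replicas and provider IDs.
func setProvisioningState(s *InfraStatus, state string) {
	s.Conditions = nil
	switch state {
	case "Succeeded":
		s.Ready = true
	case "Updating":
		markFalse(s, "ScaleSetModelUpdated", "ScaleSetModelOutOfDate")
		s.Ready = true // pool is still operational; scaling up/down works
	case "Creating", "Deleting":
		markFalse(s, "ScaleSetRunning", state)
		s.Ready = false // not usable yet, or on its way out
	default:
		markFalse(s, "ScaleSetRunning", state)
		s.Ready = false
	}
}

func main() {
	var s InfraStatus
	setProvisioningState(&s, "Updating")
	fmt.Printf("ready=%v conditions=%+v\n", s.Ready, s.Conditions)
}
```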
Sorry for the delay in getting back on this.

That makes sense. It is not fault-tolerant for a resource controller to determine its next steps based on its own status rather than its spec. I agree with this.
In my opinion, the logic that updates the status of a MachinePool needs to be revisited. It needs to be better. Sorry to push back on this so much.

The area of controversy for me is the way we are updating the status of the MachinePool even when it is in the Updating state, or when the total replicas requested is more than the current count.

For instance, is it valid to update the status of the AzureMachinePool to Ready when the actual number of replicas is 0 but the desired replica count is greater than 1? Meaning, when the Azure Machine Pool is still spinning up?
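If that concern were to be encoded, one hypothetical refinement (not part of this PR; the function name and parameters are made up for illustration) could gate readiness on at least one replica actually existing:

```go
package main

import "fmt"

// shouldMarkReady is a hypothetical guard expressing the question above:
// only report Ready once the pool has at least one running replica, even if
// Azure has already accepted the request.
func shouldMarkReady(provisioningState string, currentReplicas int32) bool {
	if provisioningState != "Succeeded" && provisioningState != "Updating" {
		return false
	}
	// A pool still spinning up from zero is not yet usable.
	return currentReplicas > 0
}

func main() {
	fmt.Println(shouldMarkReady("Updating", 0)) // false — still spinning up
	fmt.Println(shouldMarkReady("Updating", 2)) // true — usable, even if out of date
}
```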