MachinePool: avoid SetNotReady during normal processing #5537
Conversation
Hi @mweibel. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #5537      +/-   ##
==========================================
+ Coverage   52.94%   53.05%   +0.11%
==========================================
  Files         272      272
  Lines       29525    29526       +1
==========================================
+ Hits        15631    15665      +34
+ Misses      13087    13054      -33
  Partials      807      807
Force-pushed from 861d905 to f424888 (compare).
FYI I removed the draft state. We've been running this in production for a couple of days and it works well so far.
/assign
This looks like a great improvement to me, apologies for the delays here!
/lgtm
/assign @nawazkh
LGTM label has been added. Git tree hash: 6b8935af5490192b4ecb7bb6d8ebf881247a7fb5
/test pull-cluster-api-provider-azure-e2e-optional
case v == infrav1.Failed:
	conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetRunningCondition, infrav1.ScaleSetProvisionFailedReason, clusterv1.ConditionSeverityInfo, "")
Shouldn't we be marking the resource as NotReady when the resource's ProvisioningState is infrav1.Failed?
From what I gathered in my experiments, the VMSS marks itself as failed if one VM has a failed provisioning state. If we mark the VMSS as failed in this case, further reconciliation is prevented until that one VM is either removed from the VMSS or goes back into the succeeded state.
Because a VMSS can still scale up and down even if its provisioningState is failed, it's wrong to mark the whole AMP as failed and prevent reconciling.
I explicitly didn't add m.SetReady() here because, in case the VMSS is already in the NotReady state due to something else, we shouldn't reset that flag when the provisioningState is failed.
Self notes:
I agree that a VMSS can still scale up and down even if provisioningState is failed, and that marking it as failed is incorrect.
But is setting an AMP's status to NotReady equivalent to marking it as failed?
Will probe.
Also, in the scenario AzureMachinePool.Status.ProvisioningState == infrav1.Failed, shouldn't we be setting the ready status via m.SetReady() so that the reconciler does not carry over the earlier state?
@@ -615,19 +615,20 @@ func (m *MachinePoolScope) setProvisioningStateAndConditions(v infrav1.ProvisioningState) {
 	} else {
 		conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetDesiredReplicasCondition, infrav1.ScaleSetScaleDownReason, clusterv1.ConditionSeverityInfo, "")
 	}
-	m.SetNotReady()
+	m.SetReady()
This change is probably evolving from the observation listed in #5515, i.e.

> We are observing issues with a customer that sees NICs sometimes enter a ProvisioningFailed state yet continue operating, which then cascades to prevent any further action on the dependent resources, such as the virtual machines.

Shouldn't the right approach to addressing this be something along the lines of the below?

> These states are metadata properties of the resource. They're independent from the functionality of the resource itself. Being in the failed state doesn't necessarily mean that the resource isn't functional. In most cases, it can continue operating and serving traffic without issues.
> In several scenarios, if the resource is in the failed state, further operations on the resource or on other resources that depend on it might fail. You need to revert the state back to succeeded before running other operations.
> ...
> To restore succeeded state, run another write (PUT) operation on the resource. The issue that caused the previous operation might no longer be current. The newer write operation should be successful and restore the provisioning state.
Reference: #5515
I saw the referenced issue but didn't investigate whether the underlying issue is similar. From my understanding (what I wrote in the other comment reply), a VMSS might be different in that it makes its provisioningState dependent on the provisioningStates of the running VMs/instances in it.
Marking the VMSS as not ready has many implications, most notably that the providerIDList is not processed anymore, leaving dangling VMs lying around which don't get into CAPI/CAPZ at all.
> a VMSS might be different in that it makes its provisioningState dependent on the provisioningStates of the running VMs/instances in it.
> Marking the VMSS as not ready has many implications, most notably that the providerIDList is not processed anymore, leaving dangling VMs lying around which don't get into CAPI/CAPZ at all.
Thank you for providing context on this.
As a user, it seems quite odd to me that the CAPZ controller would mark the AzureMachinePool's Status as Ready when:

- *m.MachinePool.Spec.Replicas != m.AzureMachinePool.Status.Replicas
- infrav1.ProvisioningState == infrav1.Updating

If I were to stretch my understanding, it is sort of acceptable for the AzureMachinePool to mark itself ready when infrav1.ProvisioningState == infrav1.Succeeded and *m.MachinePool.Spec.Replicas != m.AzureMachinePool.Status.Replicas. This implies to me that Azure has acknowledged the AzureMachinePool's request (hence infrav1.ProvisioningState == infrav1.Succeeded) and is working to get the desired replicas.
However, when infrav1.ProvisioningState == infrav1.Updating, it does not seem right to be broadcasting that the AzureMachinePool is Ready when Azure is clearly working to get the VMs assigned and added to the VMSS.
Is my understanding wrong here?
> most notably that providerIdList is not processed anymore and therefore having dangling VMs lying around which don't get into CAPI/CAPZ at all.

I need to probe this further, but relying on its own (AMP) status to progress in its reconciliation logic is not a good pattern. It will only lead to more dependence on its own status.
I think I outlined what happens when the AMP is not ready in the issue here: #4982 (comment)
see also the code from CAPI: https://github.com/kubernetes-sigs/cluster-api/blob/8d639f1fad564eecf5bda0a2ee03c8a38896a184/exp/internal/controllers/machinepool_controller_phases.go#L290-L319
From what I understand:

- .Status.Ready is used by CAPI to determine if it should be reconciled.
- If not ready, the whole MP does not get reconciled anymore.

The new v1beta2 status makes this much clearer (Initialization status).
This change here ensures that we don't stop processing the AMP/MP with the current API version.
> As a user, it seems quite odd to me that the CAPZ controller would mark the AzureMachinePool's Status as Ready when

I always read the AMP Ready status as: it's possible to work with the AMP, i.e. scale up and down is possible. Which is the case even with a replica mismatch or an updating provisioningState.
The AMP Ready status is read by the CAPI MP controller, so it has implications broader than just the CAPZ controller. I think the replicas difference or the provisioningState difference should not be reflected in the Ready flag but instead in conditions.
Does that make sense, or am I missing something?
Sorry for the delay in getting back on this.

> The AMP Ready status is read by the CAPI MP controller. Therefore it has implications broader than just for the CAPZ controller.

That makes sense. It is not fault-tolerant for a resource controller to determine next steps based on its own status rather than spec.

> I think the Replicas difference or the ProvisioningState difference should not be reflected in the Ready flag but instead with conditions.

I agree with this.
The logic to update the status of a MachinePool needs to be revisited in my opinion. It needs to be better. Sorry to push back on this so much.
The area of controversy to me is the way we are updating the status of the MachinePool even when it is in the Updating state, or when the total replicas requested is more than the current count.
For instance, is it valid to update the status of the AzureMachinePool to Ready when the actual number of replicas is 0 but the desired replica count is greater than 1? Meaning, when the AzureMachinePool is still spinning up?
New changes are detected. LGTM label has been removed.
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
@nawazkh yeah, I thought about that too. I do wonder how we could e2e test that, however. We'd need a way to simulate a failing provisioning state to fully e2e test this, or did you have another idea?
/retest
Mocking would be a great alternative to a full-blown e2e run simulating a failing VMSS, especially since Azure doesn't really let users change the status of a resource at will. And I think your unit tests achieve the required test scenarios. Thank you for adding those tests 😃
/test pull-cluster-api-provider-azure-e2e-optional
/test pull-cluster-api-provider-azure-apiversion-upgrade
/test pull-cluster-api-provider-azure-e2e-optional
Is there anything else I can do to make this move along?
 case v == infrav1.Updating:
 	conditions.MarkFalse(m.AzureMachinePool, infrav1.ScaleSetModelUpdatedCondition, infrav1.ScaleSetModelOutOfDateReason, clusterv1.ConditionSeverityInfo, "")
-	m.SetNotReady()
+	m.SetReady()
Could you please add a comment on the rationale behind changing this behavior?
Sorry for the delay in getting back to this PR. I added some final questions on this PR. Please take a look!
What type of PR is this?
/kind bug
What this PR does / why we need it:
Adjusts the setting of the Ready state of the AzureMachinePool based on the provisioningState. Most provisioningStates do not prohibit scaling of the VMSS. When Ready is set to false, however, the CAPI MachinePool does not get reconciled anymore and e.g. the providerIDList is not updated. This has the effect of skipping processing of existing/new Machines.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #4982
TODOs:
Release note: