🐛Remove waitingForServer loop to allow controller to move forward and do further checks #2726
base: main
Conversation
Signed-off-by: smoshiur1237 <[email protected]>
/cc @lentzi90
/ok-to-test
Adding some notes from our code dive. The early return when waiting for the OSS to become ready (controllers/openstackmachine_controller.go, lines 364 to 367 at 6d42cbb) means that we never reach the code that reconciles the OSM state, so we never update the status of the OSM while waiting on the OSS to become ready.
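For readers less familiar with reconcile loops, here is a minimal, self-contained toy model of the problem described above (hypothetical names, not the actual CAPO code): an early return while waiting for a dependent resource means the status update further down in the same function never runs.

package main

import "fmt"

// machineStatus stands in for the OpenStackMachine status discussed above.
type machineStatus struct {
	Conditions []string
}

// reconcile models the shape of reconcileNormal: if the dependent server is
// not ready we return early, so the status update below is never reached.
func reconcile(serverReady bool, status *machineStatus) {
	if !serverReady {
		return // early return: status/conditions are never refreshed
	}
	status.Conditions = append(status.Conditions, "InstanceReady")
}

func main() {
	s := machineStatus{}
	reconcile(false, &s)
	fmt.Printf("conditions while waiting: %v\n", s.Conditions) // empty, so nothing user-visible
	reconcile(true, &s)
	fmt.Printf("conditions once ready:    %v\n", s.Conditions)
}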
This does look good to me!
Can I get a second opinion, @mdbooth? Is there a reason why we would need to return early before the OSS becomes ready?
It would be nice to have some unit tests for the machine controller, but it seems we don't have any at the moment, so I am hesitant to require that as part of fixing this bug.
/cc @EmilienM
- machineServer, waitingForServer, err := r.reconcileMachineServer(ctx, scope, openStackMachine, openStackCluster, machine)
- if err != nil || waitingForServer {
+ machineServer, err := r.reconcileMachineServer(ctx, scope, openStackMachine, openStackCluster, machine)
+ if err != nil {
Line 377: if the server isn't ready, InstanceID might be nil, causing a panic.
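To illustrate the concern, here is a minimal, self-contained sketch with stand-in types (not the actual CAPO code): a pointer field like InstanceID can still be nil while the server is not ready, so it needs a guard before dereferencing, with the caller requeueing instead.

package main

import "fmt"

// serverStatus is a stand-in for the OpenStackServer status; InstanceID stays
// nil until the OpenStack instance has actually been created.
type serverStatus struct {
	InstanceID *string
}

// instanceID returns the ID only when it is set; callers should requeue
// rather than dereference a nil pointer.
func instanceID(s serverStatus) (string, bool) {
	if s.InstanceID == nil {
		return "", false
	}
	return *s.InstanceID, true
}

func main() {
	var notReady serverStatus
	if id, ok := instanceID(notReady); ok {
		fmt.Println("instance ID:", id)
	} else {
		fmt.Println("server not ready yet, requeue instead of dereferencing")
	}
}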
The real problem isn't the waiting loop - it's that when OpenStack returns errors about missing images, those errors aren't properly captured, converted to meaningful conditions, or propagated to user-visible status.
Removing the waiting mechanism doesn't improve error visibility and may introduce new issues:
- Controller proceeds with invalid/missing images without proper validation
- Potential runtime panics when accessing non-existent instance details
- No improvement in user-facing error messages
A better approach would focus on the OpenStackServer controller to:
- Add image validation before instance creation. ORC should already be doing that; maybe CAPO needs to signal this better to the users.
- Implement specific error conditions (ImageNotFound, ImageValidationFailed)
- Improve status reporting with meaningful error messages
- Ensure proper error propagation from server to machine status
The waiting logic serves a valid purpose and should remain until we have proper error detection and reporting mechanisms in place.
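As a rough sketch of what the "specific error conditions" idea above could look like, the server controller could mark a user-visible condition when the image lookup fails. This is hedged: the condition type and reason names below are hypothetical, not existing CAPO API; only the Cluster API conditions helpers are real.

package controllers

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// Hypothetical condition type and reason; CAPO would define its own names.
const (
	ImageReadyCondition clusterv1.ConditionType = "ImageReady"
	ImageNotFoundReason string                  = "ImageNotFound"
)

// markImageNotFound records a meaningful, user-visible condition instead of
// only returning the raw OpenStack error. The object must implement
// conditions.Setter (i.e. expose Get/SetConditions), which the CAPO status
// types do for their v1beta1-style conditions.
func markImageNotFound(obj conditions.Setter, imageName string, err error) {
	conditions.MarkFalse(obj, ImageReadyCondition, ImageNotFoundReason,
		clusterv1.ConditionSeverityError, "image %q not found: %v", imageName, err)
}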
On the instance ID that might be nil:

They do propagate to the OpenStackServer, where the ERROR condition is seen along with the explanation. We are waiting here for the OpenStackServer to become ready. We definitely need to handle the case when the instance ID is missing, though. From what I understand, we only need the instance ID for control plane nodes. What if we move this block (controllers/openstackmachine_controller.go, lines 412 to 415 at 77c09c2) up to happen immediately after reconcileMachineServer? It even has a requeue to wait for the server to become ACTIVE, very similar to the waitingForServer check that is there now (controllers/openstackmachine_controller.go, lines 458 to 463 at 77c09c2).

To me that would make sense because the OpenStackServer state is fundamentally "lower level" (it happens before and is common to all nodes) than the control-plane logic with the API loadbalancer.
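A minimal, self-contained sketch of that "requeue until ACTIVE" idea (stand-in types and names, not the actual CAPO code): the machine controller would check the server state right after reconcileMachineServer and requeue while the server is still building, instead of relying on the waitingForServer early return.

package controllers

import (
	"fmt"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// ServerState is a stand-in for the OpenStack server status string.
type ServerState string

const (
	ServerStateBuild  ServerState = "BUILD"
	ServerStateActive ServerState = "ACTIVE"
	ServerStateError  ServerState = "ERROR"
)

// requeueUntilActive proceeds only once the server is ACTIVE, surfaces an
// error when it is in ERROR, and requeues while the server is still building.
func requeueUntilActive(state ServerState) (ctrl.Result, error) {
	switch state {
	case ServerStateActive:
		return ctrl.Result{}, nil
	case ServerStateError:
		return ctrl.Result{}, fmt.Errorf("server is in unexpected state %q", state)
	default:
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}
}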
Ok, I have been testing to see what works and what does not. There are at least two different cases: …

Then I tried moving the block discussed above, and got this status on the OpenStackMachine:

status:
  conditions:
  - lastTransitionTime: "2025-09-26T11:26:34Z"
    reason: InstanceStateError
    severity: Error
    status: "False"
    type: Ready
  - lastTransitionTime: "2025-09-26T11:26:34Z"
    reason: InstanceStateError
    severity: Error
    status: "False"
    type: InstanceReady
  failureMessage: instance state 0xc000f437c0 is unexpected
  failureReason: UpdateError

We could improve this to show the state instead of the memory address, and propagate the reason. The first case is not solved by this. In fact we get a panic because the instance ID is nil.

Edit: I also tested using imageRef. It behaves the same as case 1.
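The "memory address" in that failureMessage is most likely a pointer being formatted directly. A minimal, self-contained illustration (the InstanceState type here is a stand-in, not CAPO's real definition):

package main

import "fmt"

type InstanceState string

func main() {
	state := InstanceState("ERROR")
	statePtr := &state

	// Formatting the pointer prints its address, e.g. "instance state 0xc000f437c0 is unexpected".
	fmt.Printf("instance state %v is unexpected\n", statePtr)

	// Dereferencing (after a nil check) prints the actual state instead.
	if statePtr != nil {
		fmt.Printf("instance state %v is unexpected\n", *statePtr)
	}
}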
Here is a situation where the OpenStack machine controller gets stuck in a loop waiting for the server and does not do further checks for any other error, which leads to a wrong error message. This change fixes the reporting of a missing image for the OpenStack machine.
Fixes #2265