
Conversation

smoshiur1237
Contributor

Here is a situation where the OpenStack machine controller gets stuck in a loop waiting for the server and never performs any further error checks, which leads to a misleading error message. This change fixes the reporting of a missing image for an OpenStack machine.
Fixes #2265


netlify bot commented Sep 21, 2025

Deploy Preview for kubernetes-sigs-cluster-api-openstack ready!

Name Link
🔨 Latest commit 3a7eaa3
🔍 Latest deploy log https://app.netlify.com/projects/kubernetes-sigs-cluster-api-openstack/deploys/68d0559b0a74f20008c4b1b6
😎 Deploy Preview https://deploy-preview-2726--kubernetes-sigs-cluster-api-openstack.netlify.app

@k8s-ci-robot added the cncf-cla: yes label (Indicates the PR's author has signed the CNCF CLA.) on Sep 21, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign lentzi90 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Hi @smoshiur1237. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-ok-to-test (Indicates a PR that requires an org member to verify it is safe to test.) and size/S (Denotes a PR that changes 10-29 lines, ignoring generated files.) labels on Sep 21, 2025
@smoshiur1237
Contributor Author

/cc @lentzi90
Please check whether I need to make any other changes.

@lentzi90
Contributor

/ok-to-test

@k8s-ci-robot added the ok-to-test label (Indicates a non-member PR verified by an org member that is safe to test.) and removed the needs-ok-to-test label on Sep 22, 2025
@lentzi90
Contributor

Adding some notes from our code dive.

The early return when waiting for the OSS to become ready, here

machineServer, waitingForServer, err := r.reconcileMachineServer(ctx, scope, openStackMachine, openStackCluster, machine)
if err != nil || waitingForServer {
	return ctrl.Result{}, err
}

means that we never reconcile the OSM state, here

result := r.reconcileMachineState(scope, openStackMachine, machine, machineServer)

So we never update the status of the OSM while waiting on the OSS to become ready.
The bug here is that the OSS can be in an error state and we just keep waiting indefinitely without propagating that information up.
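As an illustration only, here is a sketch (not the actual CAPO code, and not necessarily the right fix) of what copying the information the OSS already reports onto the OSM before that early return could look like, assuming the OpenStackServer satisfies the cluster-api v1beta1 conditions Getter interface and that clusterv1 refers to the cluster-api core API types:

// Sketch: before requeueing, surface any error the OpenStackServer has already
// reported instead of waiting silently. The condition type is reused here for
// illustration; the server may report errors under a different condition.
if waitingForServer {
	if machineServer != nil && v1beta1conditions.IsFalse(machineServer, infrav1.InstanceReadyCondition) {
		v1beta1conditions.MarkFalse(openStackMachine, infrav1.InstanceReadyCondition,
			v1beta1conditions.GetReason(machineServer, infrav1.InstanceReadyCondition),
			clusterv1.ConditionSeverityError, "%s",
			v1beta1conditions.GetMessage(machineServer, infrav1.InstanceReadyCondition))
	}
	return ctrl.Result{}, nil
}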

Contributor

@lentzi90 left a comment


This does look good to me!
Can I get a second opinion, @mdbooth? Is there a reason why we would need to return early before the OSS becomes ready?

It would be nice to have some unit tests for the machine controller, but it seems we don't have any at the moment, so I am hesitant to require that as part of fixing this bug.

@lentzi90
Contributor

/cc @EmilienM

-machineServer, waitingForServer, err := r.reconcileMachineServer(ctx, scope, openStackMachine, openStackCluster, machine)
-if err != nil || waitingForServer {
+machineServer, err := r.reconcileMachineServer(ctx, scope, openStackMachine, openStackCluster, machine)
+if err != nil {
Contributor


Line 377: if the server isn't ready, InstanceID might be nil, causing a panic.
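For illustration, a minimal sketch of that kind of guard (the variable names, the placement, and the requeue interval are assumptions, not the actual change):

// Sketch: avoid dereferencing a nil InstanceID while the server is not ready yet.
if machineServer.Status.InstanceID == nil {
	scope.Logger().Info("Waiting for the server to report an instance ID")
	return ctrl.Result{RequeueAfter: waitForInstanceBecomeActiveToReconcile}, nil
}
// Only dereference *machineServer.Status.InstanceID after this point.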

Contributor

@EmilienM left a comment


The real problem isn't the waiting loop - it's that when OpenStack returns errors about missing images, those errors aren't properly captured, converted to meaningful conditions, or propagated to user-visible status.

Removing the waiting mechanism doesn't improve error visibility and may introduce new issues:

  • Controller proceeds with invalid/missing images without proper validation
  • Potential runtime panics when accessing non-existent instance details
  • No improvement in user-facing error messages

A better approach would focus on the OpenStackServer controller to:

  • Add image validation before instance creation - ORC should already be doing that; maybe CAPO needs to signal this better to the users.
  • Implement specific error conditions (ImageNotFound, ImageValidationFailed); see the sketch below
  • Improve status reporting with meaningful error messages
  • Ensure proper error propagation from server to machine status

The waiting logic serves a valid purpose and should remain until we have proper error detection and reporting mechanisms in place.
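For what it's worth, a rough sketch of the "specific error conditions" idea (the reason constants, the imageFilterDescription variable, and the reuse of the machine's condition type on the server are all hypothetical; none of this exists in CAPO today):

// Hypothetical reason constants for the proposal above.
const (
	ImageNotFoundReason         = "ImageNotFound"
	ImageValidationFailedReason = "ImageValidationFailed"
)

// Sketch: when the image lookup fails, mark an explicit error condition on the
// server instead of leaving it silently "not ready".
v1beta1conditions.MarkFalse(openStackServer, infrav1.InstanceReadyCondition,
	ImageNotFoundReason, clusterv1.ConditionSeverityError,
	"no images were found with the given image filter: %s", imageFilterDescription)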

@lentzi90
Contributor

The instance ID that might be nil is a good point. We need to handle that.
I don't quite agree with the rest though.

The real problem isn't the waiting loop - it's that when OpenStack returns errors about missing images, those errors aren't properly captured, converted to meaningful conditions, or propagated to user-visible status.

They do propagate to the OpenStackServer, where the ERROR condition is seen along with the explanation.
Furthermore, we even have propagation in place, but we do not reach it because of that wait.

We are waiting here for the OpenStackServer to have status.ready. So until it is completely green from ORC point of view, we do not proceed. This means that we never get to the place where we already have the propagation logic. We will never know if the server went into error state or why. The controller will happily sit here waiting for the server to become ready while ORC knows and has already told us that the image is missing or the instance failed to build or there was not enough capacity or the credentials didn't work, or any other reason.

We definitely need to handle the case when the instance ID is missing though. From what I understand, we only need the instance ID for control plane nodes. What if we move the reconcileMachineState call above all that? So we would move

result := r.reconcileMachineState(scope, openStackMachine, machine, machineServer)
if result != nil {
	return *result, nil
}

up to happen immediately after reconcileMachineServer. It even has a requeue to wait for the server to become ACTIVE, very similar to the waitForServer that is there now.
// The other state is normal (for example, migrating, shutoff) but we don't want to proceed until it's ACTIVE
// due to potential conflict or unexpected actions
scope.Logger().Info("Waiting for instance to become ACTIVE", "id", openStackServer.Status.InstanceID, "status", openStackServer.Status.InstanceState)
v1beta1conditions.MarkUnknown(openStackMachine, infrav1.InstanceReadyCondition, infrav1.InstanceNotReadyReason, "Instance state is not handled: %v", ptr.Deref(openStackServer.Status.InstanceState, infrav1.InstanceStateUndefined))
return &ctrl.Result{RequeueAfter: waitForInstanceBecomeActiveToReconcile}

To me that would make sense because the OpenStackServer state is fundamentally "lower level" (it happens before and is common to all nodes) than the control-plane logic with the API load balancer.
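For illustration, a rough sketch of that ordering (assuming reconcileMachineServer keeps its current three-value signature; error handling and the rest of the reconcile loop are omitted):

// Sketch: the waitingForServer flag is deliberately ignored here; the requeue
// inside reconcileMachineState takes over that role.
machineServer, _, err := r.reconcileMachineServer(ctx, scope, openStackMachine, openStackCluster, machine)
if err != nil {
	return ctrl.Result{}, err
}

// Reconcile the machine state right away so that error states on the
// OpenStackServer surface on the OpenStackMachine before any control-plane
// specific logic (API load balancer, etc.) runs.
if result := r.reconcileMachineState(scope, openStackMachine, machine, machineServer); result != nil {
	return *result, nil
}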

@lentzi90
Contributor

lentzi90 commented Sep 26, 2025

OK, I have been testing to see what works and what doesn't. There are at least two different cases:

  1. Image filter is used and does not match any image.
    This is silent in both ORC and CAPO. There is no visible error on the OSS or OSM; they are just "not ready". Only this log line shows up in CAPO:

    E0926 11:29:21.102342      97 controller.go:353] "Reconciler error" err="no images were found with the given image filter: name=flatcar_production-v1.33.1, tags=[]" controller="openstackserver" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackServer" OpenStackServer="default/development-7058-9kfgq-f9cpz" namespace="default" name="development-7058-9kfgq-f9cpz" reconcileID="4344ee80-7c8e-45a2-afe7-d94564611732"
    
  2. Image ID is used. This shows up nicely in the OSS but is not propagated to the OSM (because of what we discussed above).

Then I tried moving reconcileMachineState up and reran. This solves the second case quite well (with some room for improvement). The OSM status then looks like this:

status:
  conditions:
  - lastTransitionTime: "2025-09-26T11:26:34Z"
    reason: InstanceStateError
    severity: Error
    status: "False"
    type: Ready
  - lastTransitionTime: "2025-09-26T11:26:34Z"
    reason: InstanceStateError
    severity: Error
    status: "False"
    type: InstanceReady
  failureMessage: instance state 0xc000f437c0 is unexpected
  failureReason: UpdateError

We could improve this to show the state instead of the memory address, and propagate the reason.

The first case is not solved by this. In fact we get a panic because reconcileMachineState tries to check the InstanceState of the OSS, and we do not have an instance yet. I think this should be fixed in both CAPO and ORC: ORC should set InstanceCreateFailed (as it does in case 2), and CAPO should check whether the instance state is available and set a condition to show when it is not. We should also propagate the InstanceCreateFailed condition from ORC when it is available.
I have checked that the panic goes away if we add the check for the instance state.
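A minimal sketch of that check, assuming it lives inside reconcileMachineState and reusing the helpers from the snippet quoted earlier (the placement and exact names are assumptions):

// Sketch: handle the case where the OSS has no instance (and therefore no
// instance state) yet, instead of dereferencing a nil pointer.
if openStackServer.Status.InstanceState == nil {
	v1beta1conditions.MarkUnknown(openStackMachine, infrav1.InstanceReadyCondition,
		infrav1.InstanceNotReadyReason, "Instance does not exist yet")
	return &ctrl.Result{RequeueAfter: waitForInstanceBecomeActiveToReconcile}
}

// When reporting an unexpected state, dereference it first so the message shows
// the state itself rather than a pointer address.
state := ptr.Deref(openStackServer.Status.InstanceState, infrav1.InstanceStateUndefined)
scope.Logger().Info("Instance is in an unexpected state", "state", state)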

Edit: I also tested using imageRef. It behaves the same as case 1.
