
vSphere CSI migration fails for volumes with "The object or item referred to could not be found." #3082

Closed
gnufied opened this issue Oct 17, 2024 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.


gnufied commented Oct 17, 2024

One of our customers migrated from a version of k8s where CSI migration was not enabled to a version where CSI migration is enabled. Now a bunch of those PVs are unusable with the new version of k8s.

When I dug further, I found that the syncer receives a vim.fault.NotFound error for the volume we are trying to register with CNS. The full error in the syncer looks like:

", fault: "(*types.LocalizedMethodFault)(0xc00252d500)({
 DynamicData: (types.DynamicData) {
 },
 Fault: (*types.NotFound)(0xc00252d520)({
  VimFault: (types.VimFault) {
   MethodFault: (types.MethodFault) {
    FaultCause: (*types.LocalizedMethodFault)(<nil>),
    FaultMessage: ([]types.LocalizableMessage) <nil>
   }
  }
 }),
 LocalizedMessage: (string) (len=50) \"The object or item referred to could not be found.\"
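
For reference, a minimal sketch (using govmomi's vim25/types; this is not the syncer's actual code) of how that fault can be recognized programmatically from the returned LocalizedMethodFault:

```go
// Minimal sketch: detect vim.fault.NotFound inside a *types.LocalizedMethodFault,
// the shape dumped above. Not the syncer's real code path.
package main

import (
	"fmt"

	"github.com/vmware/govmomi/vim25/types"
)

func isNotFound(f *types.LocalizedMethodFault) bool {
	if f == nil {
		return false
	}
	// The concrete fault is carried in the Fault field as a BaseMethodFault.
	_, ok := f.Fault.(*types.NotFound)
	return ok
}

func main() {
	// Example fault shaped like the one in the syncer log above.
	f := &types.LocalizedMethodFault{
		Fault:            &types.NotFound{},
		LocalizedMessage: "The object or item referred to could not be found.",
	}
	fmt.Println(isNotFound(f)) // true
}
```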

But when I open the vSAN logs in vCenter, I see:

2024-10-17T08:56:55.738Z info vsanvcmgmtd[16073] [vSAN@6876 sub=vmomi.soapStub[5] opID=91718591] SOAP request returned HTTP failure; <<io_obj p:0x00007f2d3c3f3000, h:30, <TCP '127.0.0.1 : 34092'>, <TCP '127.0.0.1 : 1080'>>, /sdk>, method: registerDisk; code: 500(Internal Server Error); fault: (vim.fault.AlreadyExists) {
-->    faultCause = (vmodl.MethodFault) null,
-->    faultMessage = <unset>,
-->    name = "e241bc4f-b78b-4cd5-997f-3424eb561ef1"
-->    msg = "Received SOAP response fault from [<<io_obj p:0x00007f2d3c3f3000, h:30, <TCP '127.0.0.1 : 34092'>, <TCP '127.0.0.1 : 1080'>>, /sdk>]: registerDisk
--> The specified key, name, or identifier 'e241bc4f-b78b-4cd5-997f-3424eb561ef1' already exists."
--> }

2024-10-17T08:56:56.197Z error vsanvcmgmtd[16073] [vSAN@6876 sub=FcdService opID=91718591] Failed to find vol e241bc4f-b78b-4cd5-997f-3424eb561ef1 from volumeInfoCache
2024-10-17T08:56:56.203Z error vsanvcmgmtd[33985] [vSAN@6876 sub=Workflow opID=91718591] Workflow previous action has fault (vim.fault.NotFound) {
-->    faultCause = (vmodl.MethodFault) null,
-->    faultMessage = <unset>
-->    msg = "e241bc4f-b78b-4cd5-997f-3424eb561ef1"

So although the vSAN service thinks the volume is already registered, the volume is later not found in its cache, and hence vim.fault.NotFound is returned to the client.

2024-10-17T08:56:56.197Z error vsanvcmgmtd[16073] [vSAN@6876 sub=FcdService opID=91718591] Failed to find vol e241bc4f-b78b-4cd5-997f-3424eb561ef1 from volumeInfoCache

This looks similar to a case we observed earlier: https://knowledge.broadcom.com/external/article?legacyId=91752

Is there a workaround we can use?
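
One way to check what CNS itself believes about this FCD is a rough diagnostic sketch like the following, assuming govmomi's cns package; the vCenter URL and credentials below are placeholders, and the volume ID is the one from the logs above:

```go
// Rough diagnostic sketch: ask CNS whether a volume already exists for the
// FCD ID the syncer is trying to register. URL/credentials are placeholders.
package main

import (
	"context"
	"fmt"
	"net/url"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/cns"
	cnstypes "github.com/vmware/govmomi/cns/types"
)

func main() {
	ctx := context.Background()

	u, _ := url.Parse("https://user:pass@vcenter.example.com/sdk") // placeholder
	c, err := govmomi.NewClient(ctx, u, true)                      // insecure=true, for the sketch only
	if err != nil {
		panic(err)
	}

	cnsClient, err := cns.NewClient(ctx, c.Client)
	if err != nil {
		panic(err)
	}

	volID := "e241bc4f-b78b-4cd5-997f-3424eb561ef1" // FCD ID from the logs above
	res, err := cnsClient.QueryVolume(ctx, cnstypes.CnsQueryFilter{
		VolumeIds: []cnstypes.CnsVolumeId{{Id: volID}},
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("CNS reports %d volume(s) for ID %s\n", len(res.Volumes), volID)
}
```

If the query does return a volume, the AlreadyExists fault from registerDisk above would be expected, and the subsequent NotFound from volumeInfoCache looks like a vCenter-side cache inconsistency, which is why the linked KB article seems relevant.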


gnufied commented Oct 17, 2024

cc @divyenpatel


gnufied commented Oct 17, 2024

Another point: in this case the customer is on vCenter 8.0.3. I thought this issue was fixed in 8.0.2.


gnufied commented Oct 28, 2024

cc @xing-yang

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 26, 2025
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 25, 2025
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Mar 27, 2025