
Conversation

@sureshanaparti (Contributor) commented Sep 18, 2025

Description

This PR updates the default value of the setting 'mount.disabled.storage.pool' to true, so that disabled pools are mounted by default (for new installations only; existing installations must explicitly set it to true if required). Disabled pools are kept mounted, as they are only restricted from further allocation.

Issue

Disabled pools that are unmounted (with their entries removed from the storage_pool_host_ref table) during host disconnection on management server / agent stop are not re-mounted (and added back to the storage_pool_host_ref table) during host connection on management server / agent start. This results in access issues for the existing volumes on those pools and impacts VM/volume operations.

Background

Further allocation is not allowed on disabled pools. By design, disabled pools are expected to remain mounted (with host/pool entries added for them in the storage_pool_host_ref table) so that existing volumes stay accessible. The setting 'mount.disabled.storage.pool' (introduced in 4.17.0.0) defaults to false to keep the existing behavior, which also prevents disabled pools from being mounted. Mounting disabled pools by default (and unmounting them only during pool maintenance) was discussed in PR #6164 (where the setting was introduced), but it was not addressed.
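As a rough illustration of the change, here is a minimal, self-contained sketch, not CloudStack's actual ConfigKey class; the stand-in class and method names are hypothetical, of how flipping the default affects new versus existing installations:

```java
// Simplified stand-in for CloudStack's ConfigKey; illustrative only.
public class MountDisabledPoolDefault {
    static final class ConfigKeySketch {
        final String name;
        final String defaultValue;

        ConfigKeySketch(String name, String defaultValue) {
            this.name = name;
            this.defaultValue = defaultValue;
        }

        // Existing installations keep the value already stored in the DB;
        // new installations have no stored value and fall back to the default.
        boolean effectiveValue(String storedDbValue) {
            return Boolean.parseBoolean(storedDbValue != null ? storedDbValue : defaultValue);
        }
    }

    public static void main(String[] args) {
        // This PR changes the default from "false" to Boolean.TRUE.toString().
        ConfigKeySketch mountDisabledStoragePool =
                new ConfigKeySketch("mount.disabled.storage.pool", Boolean.TRUE.toString());

        // New installation: no stored value, so disabled pools are mounted.
        System.out.println(mountDisabledStoragePool.effectiveValue(null));    // prints true
        // Existing installation keeps its stored value unless updated explicitly.
        System.out.println(mountDisabledStoragePool.effectiveValue("false")); // prints false
    }
}
```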

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI
  • test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

Disable a pool and restart the management server / agent, then confirm the pool is still mounted (and the host/pool entry exists in the storage_pool_host_ref table).

How did you try to break this feature and the system with this change?

@sureshanaparti (Contributor, Author)

@blueorangutan package

@sureshanaparti sureshanaparti changed the base branch from main to 4.20 September 18, 2025 07:13
@blueorangutan

@sureshanaparti a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@sureshanaparti (Contributor, Author)

@blueorangutan package

@blueorangutan

@sureshanaparti a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@Copilot (Copilot AI) left a comment


Pull Request Overview

This PR updates the default value of the mount.disabled.storage.pool configuration setting from false to true, restoring the previous default behavior where disabled storage pools are mounted by default. This addresses access issues with existing volumes on disabled pools that were not being re-mounted after host reconnection.

  • Changed the default value from "false" to Boolean.TRUE.toString() for the MountDisabledStoragePool configuration key
  • Ensures disabled storage pools are mounted by default for new installations while preserving existing installation settings
  • Restores behavior that was present until version 4.17.0.0


codecov bot commented Sep 18, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 17.39%. Comparing base (0cbebbd) to head (29b2fc6).
⚠️ Report is 5 commits behind head on 4.20.

Additional details and impacted files
@@             Coverage Diff              @@
##               4.20   #11666      +/-   ##
============================================
+ Coverage     16.17%   17.39%   +1.22%     
- Complexity    13297    15284    +1987     
============================================
  Files          5656     5889     +233     
  Lines        498151   526184   +28033     
  Branches      60441    64242    +3801     
============================================
+ Hits          80588    91543   +10955     
- Misses       408591   424296   +15705     
- Partials       8972    10345    +1373     
Flag      | Coverage Δ
uitests   | 3.62% <ø> (-0.39%) ⬇️
unittests | 18.44% <100.00%> (+1.41%) ⬆️

Flags with carried forward coverage won't be shown.


@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 15076

@weizhouapache (Member) left a comment


lgtm

@sureshanaparti any doc needed ?

@weizhouapache weizhouapache added this to the 4.20.2 milestone Sep 18, 2025
@DaanHoogland (Contributor) left a comment


clgtm

@sureshanaparti (Contributor, Author)

> lgtm
>
> @sureshanaparti any doc needed ?

will check/update

@sureshanaparti sureshanaparti marked this pull request as ready for review September 19, 2025 08:37
@harikrishna-patnala (Contributor) left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@sureshanaparti this LGTM. one request out of the scope of this issue. I see this setting is of cluster scope and there is a usage of this without clusterId, can you please add that here too.

https://github.com/shapeblue/cloudstack/blob/9fee6dae34cc233ee6644ed6fe5499d3103f9f3a/server/src/main/java/com/cloud/storage/listener/StoragePoolMonitor.java#L150

@weizhouapache (Member)

> @sureshanaparti this LGTM. one request out of the scope of this issue. I see this setting is of cluster scope and there is a usage of this without clusterId, can you please add that here too.
>
> https://github.com/shapeblue/cloudstack/blob/9fee6dae34cc233ee6644ed6fe5499d3103f9f3a/server/src/main/java/com/cloud/storage/listener/StoragePoolMonitor.java#L150

did you mean zoneId ? @harikrishna-patnala
👍

@weizhouapache (Member)

@sureshanaparti
could you consider @harikrishna-patnala 's comment ? thanks

@harikrishna-patnala (Contributor)

> @sureshanaparti this LGTM. one request out of the scope of this issue. I see this setting is of cluster scope and there is a usage of this without clusterId, can you please add that here too.
> https://github.com/shapeblue/cloudstack/blob/9fee6dae34cc233ee6644ed6fe5499d3103f9f3a/server/src/main/java/com/cloud/storage/listener/StoragePoolMonitor.java#L150
>
> did you mean zoneId ? @harikrishna-patnala 👍

no @weizhouapache clusterId only, as the scope is of cluster

@sureshanaparti (Contributor, Author)

> @sureshanaparti this LGTM. one request out of the scope of this issue. I see this setting is of cluster scope and there is a usage of this without clusterId, can you please add that here too.
> https://github.com/shapeblue/cloudstack/blob/9fee6dae34cc233ee6644ed6fe5499d3103f9f3a/server/src/main/java/com/cloud/storage/listener/StoragePoolMonitor.java#L150
>
> did you mean zoneId ? @harikrishna-patnala 👍
>
> no @weizhouapache clusterId only, as the scope is of cluster

@harikrishna-patnala @weizhouapache for zone-wide pools, we cannot consider clustered scope value (as each cluster in the zone can have different setting), so I think, that's the reason it's picked from global scope there. in case, to change to multi-scope config, we can address later in a separate pr.
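To make the scope discussion concrete, here is a hypothetical sketch, not the actual StoragePoolMonitor or configuration-depot code; all names are illustrative, of a cluster-scoped lookup that falls back to the global value for zone-wide pools, which have no single owning cluster:

```java
import java.util.HashMap;
import java.util.Map;

public class ScopedSettingLookup {
    // Per-cluster overrides of mount.disabled.storage.pool; illustrative only.
    static final Map<Long, Boolean> clusterOverrides = new HashMap<>();
    // Global value; true is the new default introduced by this PR.
    static final boolean globalValue = true;

    // For cluster-wide pools, honor the cluster-scoped override if one exists.
    // For zone-wide pools (clusterId == null), each cluster in the zone could
    // carry a different override, so only the global value is meaningful.
    static boolean mountDisabledStoragePool(Long clusterId) {
        if (clusterId != null && clusterOverrides.containsKey(clusterId)) {
            return clusterOverrides.get(clusterId);
        }
        return globalValue;
    }

    public static void main(String[] args) {
        clusterOverrides.put(1L, false); // admin override for cluster 1
        System.out.println(mountDisabledStoragePool(1L));   // prints false
        System.out.println(mountDisabledStoragePool(2L));   // prints true
        System.out.println(mountDisabledStoragePool(null)); // prints true (zone-wide pool)
    }
}
```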

@weizhouapache (Member)

thanks @sureshanaparti @harikrishna-patnala
I agree we could do it later

@sureshanaparti
one question, I checked PR #6164, it seems that the disabled pools are not mounted previously. it looks different to what you described in the description. can you re-check ?

@weizhouapache (Member)

@blueorangutan test

@blueorangutan

@weizhouapache a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@sureshanaparti (Contributor, Author)

> thanks @sureshanaparti @harikrishna-patnala I agree we could do it later
>
> @sureshanaparti one question, I checked PR #6164, it seems that the disabled pools are not mounted previously. it looks different to what you described in the description. can you re-check ?

checked @weizhouapache, updated the description

@weizhouapache (Member)

checked in a new env

mysql> select * from configuration where name ='mount.disabled.storage.pool'\G
*************************** 1. row ***************************
     category: Storage
     instance: DEFAULT
    component: StorageManager
         name: mount.disabled.storage.pool
        value: true
  description: Mount all zone-wide or cluster-wide disabled storage pools after node reboot
default_value: true
      updated: 2025-09-23 08:11:49
        scope: Cluster
   is_dynamic: 1
     group_id: 9
  subgroup_id: 36
       parent: NULL
 display_text: Mount disabled storage pool
         kind: NULL
      options: NULL
1 row in set (0.00 sec)

@weizhouapache (Member)

verified the global setting when host is rebooted

  • mount.disabled.storage.pool=false, only the Enabled pools are mounted
  • mount.disabled.storage.pool=true, the Enabled and Disabled pools are mounted
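The verified behavior above boils down to a simple predicate; the following sketch (hypothetical names, not the actual listener code) captures the mount decision made for each pool on host connect:

```java
public class PoolMountDecision {
    // A pool is mounted if it is Enabled, or if it is Disabled but the
    // mount.disabled.storage.pool setting is true.
    static boolean shouldMount(boolean poolEnabled, boolean mountDisabledSetting) {
        return poolEnabled || mountDisabledSetting;
    }

    public static void main(String[] args) {
        // mount.disabled.storage.pool = false: only Enabled pools are mounted.
        System.out.println(shouldMount(true, false));  // prints true
        System.out.println(shouldMount(false, false)); // prints false
        // mount.disabled.storage.pool = true: Enabled and Disabled pools are mounted.
        System.out.println(shouldMount(false, true));  // prints true
    }
}
```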

@weizhouapache weizhouapache self-assigned this Sep 23, 2025
@harikrishna-patnala (Contributor) left a comment


LGTM. Thanks @sureshanaparti

@blueorangutan

[SF] Trillian test result (tid-14423)
Environment: kvm-ol8 (x2), zone: Advanced Networking with Mgmt server ol8
Total time taken: 56401 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr11666-t14423-kvm-ol8.zip
Smoke tests completed. 141 look OK, 0 have errors, 0 did not run
Only failed and skipped tests results shown below:


5 participants