Commit 88a7239 (1 parent: 8a771b7)

Add troubleshooting of node maintenance mode

Signed-off-by: Jian Wang <[email protected]>

File tree: 12 files changed, +368 additions, -4 deletions

docs/advanced/storageclass.md

Lines changed: 6 additions & 0 deletions
@@ -44,6 +44,12 @@ The number of replicas created for each volume in Longhorn. Defaults to `3`.

![](/img/v1.2/storageclass/create_storageclasses_replicas.png)

:::info important

When the value is `1`, the volume created from this StorageClass has only one replica, which may block [Node Maintenance](../host/host.md#node-maintenance). See [Single-Replica Volumes](../troubleshooting/host.md#single-replica-volumes) and set a proper global option.
:::
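
For illustration, a customized StorageClass that produces such single-replica volumes could look like the following sketch. The metadata name is hypothetical; `driver.longhorn.io` is the Longhorn CSI provisioner, and `numberOfReplicas`/`staleReplicaTimeout` are the Longhorn parameters described on this page:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: single-replica-example   # hypothetical name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"          # only one replica; may block Node Maintenance
  staleReplicaTimeout: "30"
```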

#### Stale Replica Timeout

Determines when Longhorn would clean up an error replica after the replica's status is ERROR. The unit is minute. Defaults to `30` minutes in Harvester.

docs/host/host.md

Lines changed: 29 additions & 1 deletion
@@ -21,10 +21,36 @@ Because Harvester is built on top of Kubernetes and uses etcd as its database, t

## Node Maintenance

In the following scenarios, you may plan to migrate or shut down the workloads on a node, and possibly shut down the node itself:

- Replace/add/remove hardware

- Change the network setting

- Troubleshooting

- Remove a node

Harvester provides the `Node Maintenance` feature, which runs a series of checks and operations automatically.

For admin users, you can click **Enable Maintenance Mode** to evict all VMs from a node automatically. This leverages the `VM live migration` feature to migrate all VMs to other nodes. Note that at least two active nodes are required to use this feature.

![node-maintenance.png](/img/v1.2/host/node-maintenance.png)

After a while, the target node enters maintenance mode successfully.

![node-enter-maintenance-mode.png](/img/v1.3/troubleshooting/node-enter-maintenance-mode.png)

:::info important

Review the [known limitations and workarounds](../troubleshooting/host.md#node-in-maintenance-mode-becomes-stuck-in-cordoned-state) before you enable maintenance mode, or when you encounter issues.

If you have manually attached any volume to this node, it may block node maintenance. See [Manually Attached Volumes](../troubleshooting/host.md#manually-attached-volumes) and set a proper global option.

If you have any single-replica volumes, they may block node maintenance. See [Single-Replica Volumes](../troubleshooting/host.md#single-replica-volumes) and set a proper global option.

:::
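
Before enabling maintenance mode, you can get a quick overview of what is still running on the node from a cluster shell. This is a sketch and assumes `kubectl` access to the Harvester cluster:

```
# VMs (KubeVirt VirtualMachineInstances) and the nodes they run on
kubectl get vmi -A -o wide

# Longhorn volumes and their current state
kubectl -n longhorn-system get volumes.longhorn.io
```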

## Cordoning a Node

Cordoning a node marks it as unschedulable. This feature is useful for performing short tasks on the node during small maintenance windows, like reboots, upgrades, or decommissions. When you’re done, power back on and make the node schedulable again by uncordoning it.
@@ -42,6 +68,8 @@ Before removing a node from a Harvester cluster, determine if the remaining node

If the remaining nodes do not have enough resources, VMs might fail to migrate and volumes might degrade when you remove a node.

If you have volumes created from a customized `StorageClass` with [Number of Replicas](../advanced/storageclass.md#number-of-replicas) set to **1**, it is recommended to back up those single-replica volumes, or redeploy the related workloads to other nodes in advance so that the volumes are scheduled to other nodes. Otherwise, those volumes cannot be rebuilt or restored from other nodes after this node is removed.

:::

### 1. Check if the node can be removed from the cluster.
@@ -522,4 +550,4 @@ status:
```

The `harvester-node-manager` pod(s) in the `harvester-system` namespace may also contain some hints as to why it is not rendering a file to a node.
This pod is part of a daemonset, so it may be worth checking the pod that is running on the node of interest.

docs/troubleshooting/host.md

Lines changed: 141 additions & 0 deletions
@@ -0,0 +1,141 @@
---
sidebar_position: 6
sidebar_label: Host
title: "Host"
---

<head>
  <link rel="canonical" href="https://docs.harvesterhci.io/v1.3/troubleshooting/host"/>
</head>

## Node in Maintenance Mode Becomes Stuck in Cordoned State

When you enable Maintenance Mode on a node using the Harvester UI, the node becomes stuck in the `Cordoned` state and the menu shows the **Enable Maintenance Mode** option instead of **Disable Maintenance Mode**.

![node-stuck-cordoned.png](/img/v1.3/troubleshooting/node-stuck-cordoned.png)

The Harvester pod logs contain messages similar to the following:

```
time="2024-08-05T19:03:02Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:02Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."

time="2024-08-05T19:03:07Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:07Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."

time="2024-08-05T19:03:12Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:12Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."
```


The Longhorn Instance Manager uses a PodDisruptionBudget (PDB) to protect itself from accidental eviction, which could result in loss of volume data. When the Maintenance Mode error occurs, it indicates that the `instance-manager` pod is still serving volumes or replicas.
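
You can inspect the PDB from a cluster shell. This is a sketch and assumes `kubectl` access:

```
# PodDisruptionBudgets guarding Longhorn components
kubectl -n longhorn-system get pdb
```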

The following sections describe the known causes and their corresponding workarounds.

### Manually Attached Volumes

A volume that is attached to a node using the [embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards) can cause the error. This is because the object is attached to a node name instead of the pod name.

You can check this from the [embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards).

![attached-volume.png](/img/v1.3/troubleshooting/attached-volume.png)

The manually attached object is attached to a node name instead of the pod name.

You can also use the CLI to retrieve the details of the CRD object `VolumeAttachment`.
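
For example, assuming `kubectl` access to the cluster:

```
kubectl -n longhorn-system get volumeattachments.longhorn.io -o yaml
```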

Example of a volume that was attached using the Longhorn UI:

```
- apiVersion: longhorn.io/v1beta2
  kind: VolumeAttachment
  ...
  spec:
    attachmentTickets:
      longhorn-ui:
        id: longhorn-ui
        nodeID: node-name
    ...
    volume: pvc-9b35136c-f59e-414b-aa55-b84b9b21ff89
```

Example of a volume that was attached using the Longhorn CSI driver:

```
- apiVersion: longhorn.io/v1beta2
  kind: VolumeAttachment
  spec:
    attachmentTickets:
      csi-b5097155cddde50b4683b0e659923e379cbfc3873b5b2ee776deb3874102e9bf:
        id: csi-b5097155cddde50b4683b0e659923e379cbfc3873b5b2ee776deb3874102e9bf
        nodeID: node-name
    ...
    volume: pvc-3c6403cd-f1cd-4b84-9b46-162f746b9667
```
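
As the two examples suggest, tickets created by the CSI driver use a `csi-` prefix, while manually created tickets (such as `longhorn-ui`) do not. A minimal shell sketch of that distinction, using the ticket IDs shown above:

```shell
# Classify a Longhorn attachment ticket ID by its prefix:
# "csi-..." => created by the CSI driver, anything else => manual attachment
classify_ticket() {
  case "$1" in
    csi-*) echo "csi" ;;
    *)     echo "manual" ;;
  esac
}

classify_ticket "longhorn-ui"   # prints: manual
classify_ticket "csi-b5097155cddde50b4683b0e659923e379cbfc3873b5b2ee776deb3874102e9bf"   # prints: csi
```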

:::note

Manually attaching a volume to a node is not recommended.

Harvester automatically attaches and detaches volumes based on operations such as creating or migrating VMs.

:::

#### Workaround 1: Set `Detach Manually Attached Volumes When Cordoned` to `True`

When the Longhorn setting [Detach Manually Attached Volumes When Cordoned](https://longhorn.io/docs/1.6.0/references/settings/#detach-manually-attached-volumes-when-cordoned) is disabled, volumes that were manually attached to the node block node draining.

The default value of this setting depends on the embedded Longhorn version:

| Harvester version | Embedded Longhorn version | Default value |
| --- | --- | --- |
| v1.3.1 | v1.6.0 | `true` |
| v1.4.0 | v1.7.0 | `false` |

Set this option to `true` from the [embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards).
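
The setting can also be changed from the CLI. This is a sketch; it assumes `kubectl` access and that the embedded Longhorn exposes the setting as a `settings.longhorn.io` object with this name:

```
# Inspect the current value
kubectl -n longhorn-system get settings.longhorn.io detach-manually-attached-volumes-when-cordoned

# Set it to true (Longhorn setting values are strings)
kubectl -n longhorn-system patch settings.longhorn.io \
  detach-manually-attached-volumes-when-cordoned \
  --type merge -p '{"value":"true"}'
```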

#### Workaround 2: Manually Detach the Volume

Detach the volume using the [embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards).

![detached-volume.png](/img/v1.3/troubleshooting/detached-volume.png)

Once the volume is detached, you can successfully enable Maintenance Mode on the node.

![node-enter-maintenance-mode.png](/img/v1.3/troubleshooting/node-enter-maintenance-mode.png)

### Single-Replica Volumes

Harvester allows you to create custom StorageClasses that describe how Longhorn must provision volumes. If necessary, you can create a StorageClass with the [Number of Replicas](../advanced/storageclass.md#number-of-replicas) parameter set to `1`.

When a volume is created using such a StorageClass and is attached to a node using the CSI driver or other methods, the lone replica stays on that node even after the volume is detached.

You can check this using the CRD object `Volume`.

```
- apiVersion: longhorn.io/v1beta2
  kind: Volume
  ...
  spec:
    ...
    numberOfReplicas: 1  # the replica count
    ...
  status:
    ...
    ownerID: nodeName
    ...
    state: attached
```
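
To list single-replica volumes directly, a `jsonpath` query like the following can be used (a sketch; assumes `kubectl` access):

```
kubectl -n longhorn-system get volumes.longhorn.io \
  -o jsonpath='{range .items[?(@.spec.numberOfReplicas==1)]}{.metadata.name}{"\n"}{end}'
```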

#### Workaround: Set `Node Drain Policy`

The Longhorn [Node Drain Policy](https://longhorn.io/docs/1.6.0/references/settings/#node-drain-policy) is set to `block-if-contains-last-replica` by default. This option forces Longhorn to block node draining when the node contains the last healthy replica of a volume.

To address the issue, change the value to `allow-if-replica-is-stopped` using the [embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards).
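
The same change can be made from the CLI (a sketch; assumes `kubectl` access and that the setting is exposed as a `settings.longhorn.io` object):

```
kubectl -n longhorn-system patch settings.longhorn.io node-drain-policy \
  --type merge -p '{"value":"allow-if-replica-is-stopped"}'
```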

:::info important

If you plan to remove the node after Maintenance Mode is enabled, back up single-replica volumes or redeploy the related workloads to other nodes in advance so that the volumes are scheduled to other nodes.

:::

Starting with Harvester v1.4.0, the `Node Drain Policy` is set to `allow-if-replica-is-stopped` by default.

docs/volume/create-volume.md

Lines changed: 8 additions & 0 deletions
@@ -29,6 +29,14 @@ description: Create a volume from the Volume page.

![create-empty-volume](/img/v1.2/volume/create-empty-volume.png)

:::info important

Manually attaching a volume to a node is not recommended.

Harvester automatically attaches and detaches volumes based on operations such as creating or migrating VMs. If you plan to attach a volume to a certain node manually, it may block [Node Maintenance](../host/host.md#node-maintenance). See [Manually Attached Volumes](../troubleshooting/host.md#manually-attached-volumes) and set a proper global option.

:::
</TabItem>
<TabItem value="api" label="API">

4 image files added: 182 KB, 83.5 KB, 95.8 KB, 98 KB

versioned_docs/version-v1.3/advanced/storageclass.md

Lines changed: 7 additions & 1 deletion
@@ -41,6 +41,12 @@ The number of replicas created for each volume in Longhorn. Defaults to `3`.

![](/img/v1.2/storageclass/create_storageclasses_replicas.png)

:::info important

When the value is `1`, the volume created from this StorageClass has only one replica, which may block [Node Maintenance](../host/host.md#node-maintenance). See [Single-Replica Volumes](../troubleshooting/host.md#single-replica-volumes) and set a proper global option.

:::


#### Stale Replica Timeout

Determines when Longhorn would clean up an error replica after the replica's status is ERROR. The unit is minute. Defaults to `30` minutes in Harvester.
@@ -148,4 +154,4 @@ Then, create a new `StorageClass` for the HDD (use the above disk tags). For har
You can now create a volume using the above `StorageClass` with HDDs mostly for cold storage or archiving purpose.

![](/img/v1.2/storageclass/create_volume_hdd.png)

versioned_docs/version-v1.3/host/host.md

Lines changed: 29 additions & 1 deletion
@@ -21,10 +21,36 @@ Because Harvester is built on top of Kubernetes and uses etcd as its database, t

## Node Maintenance

In the following scenarios, you may plan to migrate or shut down the workloads on a node, and possibly shut down the node itself:

- Replace/add/remove hardware

- Change the network setting

- Troubleshooting

- Remove a node

Harvester provides the `Node Maintenance` feature, which runs a series of checks and operations automatically.

For admin users, you can click **Enable Maintenance Mode** to evict all VMs from a node automatically. This leverages the `VM live migration` feature to migrate all VMs to other nodes. Note that at least two active nodes are required to use this feature.

![node-maintenance.png](/img/v1.2/host/node-maintenance.png)

After a while, the target node enters maintenance mode successfully.

![node-enter-maintenance-mode.png](/img/v1.3/troubleshooting/node-enter-maintenance-mode.png)

:::info important

Review the [known limitations and workarounds](../troubleshooting/host.md#node-in-maintenance-mode-becomes-stuck-in-cordoned-state) before you enable maintenance mode, or when you encounter issues.

If you have manually attached any volume to this node, it may block node maintenance. See [Manually Attached Volumes](../troubleshooting/host.md#manually-attached-volumes) and set a proper global option.

If you have any single-replica volumes, they may block node maintenance. See [Single-Replica Volumes](../troubleshooting/host.md#single-replica-volumes) and set a proper global option.

:::

## Cordoning a Node

Cordoning a node marks it as unschedulable. This feature is useful for performing short tasks on the node during small maintenance windows, like reboots, upgrades, or decommissions. When you’re done, power back on and make the node schedulable again by uncordoning it.
@@ -42,6 +68,8 @@ Before removing a node from a Harvester cluster, determine if the remaining node

If the remaining nodes do not have enough resources, VMs might fail to migrate and volumes might degrade when you remove a node.

If you have volumes created from a customized `StorageClass` with [Number of Replicas](../advanced/storageclass.md#number-of-replicas) set to **1**, it is recommended to back up those single-replica volumes, or redeploy the related workloads to other nodes in advance so that the volumes are scheduled to other nodes. Otherwise, those volumes cannot be rebuilt or restored from other nodes after this node is removed.

:::

### 1. Check if the node can be removed from the cluster.
@@ -522,4 +550,4 @@ status:
```

The `harvester-node-manager` pod(s) in the `harvester-system` namespace may also contain some hints as to why it is not rendering a file to a node.
This pod is part of a daemonset, so it may be worth checking the pod that is running on the node of interest.
