Update backup_restore.md #321
Conversation
```diff
- 4. Remove the rke2 db directory on the other server nodes as follows:
+ 4. Move the rke2 db directory on the other server nodes as follows (you want to keep a copy to avoid ending up with only an old or corrupt backup to choose from):
```
Having the old DB dir around on the secondary servers doesn't really help with anything. If you run into problems, restoring a snapshot is a better resolution than moving an old db dir back into place.
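For reference, restoring from a snapshot looks roughly like this (a sketch using the standard RKE2 cluster-reset flags; the snapshot name is a placeholder):

```bash
# On the server node, stop the service and reset the cluster from an
# on-disk snapshot. The filename is a placeholder; list the contents of
# db/snapshots to find a real one.
systemctl stop rke2-server
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/rke2/server/db/snapshots/<snapshot-name>
```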
The issue is that we currently run `rm -rf /var/lib/rancher/rke2/server/db/`, which deletes both the etcd data and the snapshots directory. This means we erase the live data along with its backups.
We've encountered cases where customers, not paying close attention, have accidentally executed this command on all three master/etcd nodes, leading to complete data loss.
This change ensures that snapshots are not deleted until the cluster has been fully restored, allowing customers to perform the cleanup on their own afterward.
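To make the failure mode concrete, a rough sketch of the layout involved (the two subdirectories follow from the discussion above; exact contents vary by version):

```bash
# /var/lib/rancher/rke2/server/db/
# ├── etcd/        <- live etcd database
# └── snapshots/   <- etcd snapshots, i.e. the backups
#
# The command currently in the docs removes the whole db/ directory,
# taking the snapshots down along with the live data:
rm -rf /var/lib/rancher/rke2/server/db/
```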
Hmm, then how about we leave this as-is and just delete the etcd directory?
I agree with the proposed change to use `rm -rf /var/lib/rancher/rke2/server/db/etcd` instead of the broader directory removal.
The more targeted approach addresses the core issue while providing several important benefits (see the sketch after this list):
- It removes only the etcd database files that need to be replaced during restoration
- It preserves the snapshots directory, preventing the complete data loss scenario described above
- It eliminates the risk we've seen of customers accidentally executing the broader command across all master/etcd nodes simultaneously
- It requires no additional cleanup steps later in the process
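In context, the step on the other server nodes would then look something like this (the surrounding systemctl commands are my assumption about the adjacent doc steps, not part of this diff):

```bash
# On each remaining server node. The stop/start steps are assumed from
# the usual restore flow and are not part of the proposed change.
systemctl stop rke2-server
rm -rf /var/lib/rancher/rke2/server/db/etcd   # leaves db/snapshots intact
systemctl start rke2-server
```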
The lines under review (added in this PR):

````diff
+ mv /var/lib/rancher/rke2/server/db /var/lib/rancher/rke2/server/backups
+ ```
+ Clean them out after this operation:
+ ```
+ rm -rf /var/lib/rancher/rke2/server/backups
````
Suggested change:

````diff
- mv /var/lib/rancher/rke2/server/db /var/lib/rancher/rke2/server/backups
- ```
- Clean them out after this operation:
- ```
- rm -rf /var/lib/rancher/rke2/server/backups
+ rm -rf /var/lib/rancher/rke2/server/db/etcd
````
This should remove the etcd files but leave the snapshots, without requiring any additional cleanup later.
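As a quick sanity check after running the suggested command (a hypothetical verification step, not something the doc change itself adds):

```bash
# The live etcd data should be gone...
ls /var/lib/rancher/rke2/server/db/etcd 2>/dev/null || echo "etcd dir removed"
# ...while the snapshots remain available for the restore:
ls -l /var/lib/rancher/rke2/server/db/snapshots
```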
I think this is good, unless I'm missing anything; @mattmattox?
Can't count how many times we've seen removing backup etcd dbs put a customer in a bad spot.