From be56261ea148ef5dd44a7906549ba38bc18e11ae Mon Sep 17 00:00:00 2001
From: Test User
Date: Mon, 21 Apr 2025 10:35:08 +0800
Subject: [PATCH 1/2] Update br-checkpoint-backup.md

---
 br/br-checkpoint-backup.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/br/br-checkpoint-backup.md b/br/br-checkpoint-backup.md
index 286cbbbdd486b..464346bf1ffc8 100644
--- a/br/br-checkpoint-backup.md
+++ b/br/br-checkpoint-backup.md
@@ -6,13 +6,13 @@ aliases: ["/tidb/dev/br-checkpoint"]
 
 # Checkpoint Backup
 
-Snapshot backup might be interrupted due to recoverable errors, such as disk exhaustion and node crash. Before TiDB v6.5.0, data that is backed up before the interruption would be invalidated even after the error is addressed, and you need to start the backup from scratch. For large clusters, this incurs considerable extra cost.
+Snapshot backup might be interrupted by recoverable errors, such as disk exhaustion or a node crash. Before TiDB v6.5.0, data backed up before the interruption was invalidated even after the error was addressed, and you had to restart the backup from scratch. For large clusters, this incurred considerable extra cost.
 
-In TiDB v6.5.0, Backup & Restore (BR) introduces the checkpoint backup feature to allow continuing an interrupted backup. This feature can retain most data of the interrupted backup.
+In TiDB v6.5.0, Backup & Restore (BR) introduces the checkpoint backup feature, which lets you continue an interrupted backup and retains most of the data that has already been backed up.
 
 ## Application scenarios
 
-If your TiDB cluster is large and cannot afford to back up again after a failure, you can use the checkpoint backup feature. The br command-line tool (hereinafter referred to as `br`) periodically records the shards that have been backed up. In this way, the next backup retry can use the backup progress close to the abnormal exit.
+If your TiDB cluster is large and cannot afford to back up again after a failure, you can use the checkpoint backup feature. The br command-line tool (hereinafter referred to as `br`) periodically records the shards that have been backed up, so the next backup retry can resume from a progress point close to the abnormal exit.
 
 ## Implementation details

From 7b725f12d2f5495ac7777505ddba32b317d708cc Mon Sep 17 00:00:00 2001
From: Test User
Date: Mon, 21 Apr 2025 10:48:02 +0800
Subject: [PATCH 2/2] Update br-checkpoint-backup.md

---
 br/br-checkpoint-backup.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/br/br-checkpoint-backup.md b/br/br-checkpoint-backup.md
index 464346bf1ffc8..62a3cec3a1a92 100644
--- a/br/br-checkpoint-backup.md
+++ b/br/br-checkpoint-backup.md
@@ -12,7 +12,7 @@ In TiDB v6.5.0, Backup & Restore (BR) introduces the checkpoint backup feature,
 
 ## Application scenarios
 
-If your TiDB cluster is large and cannot afford to back up again after a failure, you can use the checkpoint backup feature. The br command-line tool (hereinafter referred to as `br`) periodically records the shards that have been backed up, so the next backup retry can resume from a progress point close to the abnormal exit.
+If your TiDB cluster is large and cannot afford to back up again after a failure, you can use the checkpoint backup feature. The br command-line tool (hereinafter referred to as `br`) periodically records the shards that have been backed up, so the next backup retry can resume from a point close to where the backup was interrupted.
 
 ## Implementation details
 
@@ -30,7 +30,7 @@ Checkpoint backup relies on the GC mechanism and cannot recover all data that ha
 
 During the backup, `br` periodically updates the `gc-safepoint` of the backup snapshot in PD to avoid data being garbage collected. When `br` exits, the `gc-safepoint` cannot be updated in time. As a result, before the next backup retry, the data might have been garbage collected.
 
-To avoid this situation, `br` keeps the `gc-safepoint` for about one hour by default when `gcttl` is not specified. You can set the `gcttl` parameter to extend the retention period if needed .
+To avoid this situation, `br` keeps the `gc-safepoint` for about one hour by default when `gcttl` is not specified. You can set the `gcttl` parameter to extend the retention period if needed.
 
 The following example sets `gcttl` to 15 hours (54000 seconds) to extend the retention period of `gc-safepoint`:
 
@@ -42,7 +42,7 @@ tiup br backup full \
 
 > **Note:**
 >
-> The `gc-safepoint` created before backup is deleted after the snapshot backup is completed. You do not need to delete it manually.
+> The `gc-safepoint` created before the backup is deleted automatically after the snapshot backup is completed. You do not need to delete it manually.
 
 ### Some data needs to be backed up again
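For reviewers: the `@@ -42` hunk context above shows only the truncated first line of the doc's `gcttl` example (`tiup br backup full \`). A complete invocation has the following shape; the PD address and storage URL here are illustrative placeholders, not values taken from this patch:

```shell
# Sketch of a full snapshot backup with an extended gc-safepoint TTL.
# --gcttl is in seconds; 54000 s = 15 hours, matching the doc's example.
# The PD endpoint and storage URL below are placeholder values.
tiup br backup full \
    --pd "127.0.0.1:2379" \
    --storage "s3://example-bucket/backup-prefix" \
    --gcttl 54000
```

With `--gcttl 54000`, the `gc-safepoint` registered in PD is retained for 15 hours instead of the roughly one-hour default, giving an interrupted backup a longer window to resume before data is garbage collected.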