Commit 82751be

Use tiup for BR, Lightning and Dumpling. (#17392)
1 parent d7e73db commit 82751be

18 files changed (+149 −149 lines)

best-practices/readonly-nodes.md (+1 −1)

@@ -127,5 +127,5 @@ spark.tispark.replica_read learner
 To read data from read-only nodes when backing up cluster data, you can specify the `--replica-read-label` option in the br command line. Note that when running the following command in shell, you need to use single quotes to wrap the label to prevent `$` from being parsed.
 
 ```shell
-br backup full ... --replica-read-label '$mode:readonly'
+tiup br backup full ... --replica-read-label '$mode:readonly'
 ```
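The single-quote note in the hunk above is easy to verify in any POSIX shell. The sketch below is illustrative only (the `label_*` variable names are made up for the demo):

```shell
# Single quotes keep the label literal, so BR receives the text `$mode:readonly`.
label_single='$mode:readonly'
# Double quotes let the shell expand $mode (typically unset, so it expands to empty).
label_double="$mode:readonly"

echo "$label_single"
echo "$label_double"
```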

br/backup-and-restore-storages.md (+9 −9)

@@ -19,7 +19,7 @@ By default, BR sends a credential to each TiKV node when using Amazon S3, GCS, o
 Note that this operation is not applicable to cloud environments. If you use IAM Role authorization, each node has its own role and permissions. In this case, you need to configure `--send-credentials-to-tikv=false` (or `-c=0` in short) to disable sending credentials:
 
 ```bash
-./br backup full -c=0 -u pd-service:2379 --storage 's3://bucket-name/prefix'
+tiup br backup full -c=0 -u pd-service:2379 --storage 's3://bucket-name/prefix'
 ```
 
 If you back up or restore data using the [`BACKUP`](/sql-statements/sql-statement-backup.md) and [`RESTORE`](/sql-statements/sql-statement-restore.md) statements, you can add the `SEND_CREDENTIALS_TO_TIKV = FALSE` option:

@@ -50,14 +50,14 @@ This section provides some URI examples by using `external` as the `host` parame
 **Back up snapshot data to Amazon S3**
 
 ```shell
-./br backup full -u "${PD_IP}:2379" \
+tiup br backup full -u "${PD_IP}:2379" \
 --storage "s3://external/backup-20220915?access-key=${access-key}&secret-access-key=${secret-access-key}"
 ```
 
 **Restore snapshot data from Amazon S3**
 
 ```shell
-./br restore full -u "${PD_IP}:2379" \
+tiup br restore full -u "${PD_IP}:2379" \
 --storage "s3://external/backup-20220915?access-key=${access-key}&secret-access-key=${secret-access-key}"
 ```

@@ -67,14 +67,14 @@ This section provides some URI examples by using `external` as the `host` parame
 **Back up snapshot data to GCS**
 
 ```shell
-./br backup full --pd "${PD_IP}:2379" \
+tiup br backup full --pd "${PD_IP}:2379" \
 --storage "gcs://external/backup-20220915?credentials-file=${credentials-file-path}"
 ```
 
 **Restore snapshot data from GCS**
 
 ```shell
-./br restore full --pd "${PD_IP}:2379" \
+tiup br restore full --pd "${PD_IP}:2379" \
 --storage "gcs://external/backup-20220915?credentials-file=${credentials-file-path}"
 ```

@@ -84,14 +84,14 @@ This section provides some URI examples by using `external` as the `host` parame
 **Back up snapshot data to Azure Blob Storage**
 
 ```shell
-./br backup full -u "${PD_IP}:2379" \
+tiup br backup full -u "${PD_IP}:2379" \
 --storage "azure://external/backup-20220915?account-name=${account-name}&account-key=${account-key}"
 ```
 
 **Restore the `test` database from snapshot backup data in Azure Blob Storage**
 
 ```shell
-./br restore db --db test -u "${PD_IP}:2379" \
+tiup br restore db --db test -u "${PD_IP}:2379" \
 --storage "azure://external/backup-20220915?account-name=${account-name}&account-key=${account-key}"
 ```

@@ -128,7 +128,7 @@ It is recommended that you configure access to S3 using either of the following
 Associate an IAM role that can access S3 with EC2 instances where the TiKV and BR nodes run. After the association, BR can directly access the backup directories in S3 without additional settings.
 
 ```shell
-br backup full --pd "${PD_IP}:2379" \
+tiup br backup full --pd "${PD_IP}:2379" \
 --storage "s3://${host}/${path}"
 ```

@@ -195,7 +195,7 @@ You can configure the account used to access GCS by specifying the access key. I
 - Use BR to back up data to Azure Blob Storage:
 
 ```shell
-./br backup full -u "${PD_IP}:2379" \
+tiup br backup full -u "${PD_IP}:2379" \
 --storage "azure://external/backup-20220915?account-name=${account-name}"
 ```

br/br-batch-create-table.md (+1 −1)

@@ -27,7 +27,7 @@ BR enables the Batch Create Table feature by default, with the default configura
 To disable this feature, you can set `--ddl-batch-size` to `1`. See the following example command:
 
 ```shell
-br restore full \
+tiup br restore full \
 --storage local:///br_data/ --pd "${PD_IP}:2379" --log-file restore.log \
 --ddl-batch-size=1
 ```

br/br-checkpoint-backup.md (+1 −1)

@@ -35,7 +35,7 @@ To avoid this situation, `br` keeps the `gc-safepoint` for about one hour by def
 The following example sets `gcttl` to 15 hours (54000 seconds) to extend the retention period of `gc-safepoint`:
 
 ```shell
-br backup full \
+tiup br backup full \
 --storage local:///br_data/ --pd "${PD_IP}:2379" \
 --gcttl 54000
 ```
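As a quick sanity check on the hunk above: `--gcttl` takes plain seconds, and the 15-hour figure does work out to 54000:

```shell
# 15 hours * 60 minutes * 60 seconds = 54000 seconds, the --gcttl value above.
echo $((15 * 60 * 60))
```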

br/br-incremental-guide.md (+2 −2)

@@ -1,6 +1,6 @@
 ---
 title: TiDB Incremental Backup and Restore Guide
-summary: Incremental data is the differentiated data between starting and end snapshots, along with DDLs. It reduces backup volume and requires setting `tidb_gc_life_time` for incremental backup. Use `br backup` with `--lastbackupts` for incremental backup and ensure all previous data is restored before restoring incremental data.
+summary: Incremental data is the differentiated data between starting and end snapshots, along with DDLs. It reduces backup volume and requires setting `tidb_gc_life_time` for incremental backup. Use `tiup br backup` with `--lastbackupts` for incremental backup and ensure all previous data is restored before restoring incremental data.
 ---
 
 # TiDB Incremental Backup and Restore Guide

@@ -13,7 +13,7 @@ Incremental data of a TiDB cluster is differentiated data between the starting s
 
 ## Back up incremental data
 
-To back up incremental data, run the `br backup` command with **the last backup timestamp** `--lastbackupts` specified. In this way, br command-line tool automatically backs up incremental data generated between `lastbackupts` and the current time. To get `--lastbackupts`, run the `validate` command. The following is an example:
+To back up incremental data, run the `tiup br backup` command with **the last backup timestamp** `--lastbackupts` specified. In this way, br command-line tool automatically backs up incremental data generated between `lastbackupts` and the current time. To get `--lastbackupts`, run the `validate` command. The following is an example:
 
 ```shell
 LAST_BACKUP_TS=`tiup br validate decode --field="end-version" --storage "s3://backup-101/snapshot-202209081330?access-key=${access-key}&secret-access-key=${secret-access-key}"| tail -n1`
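The `| tail -n1` at the end of the command above keeps only the last line of output, presumably because the command can emit more than one line. A minimal stand-in sketch of that extraction pattern (the `printf` lines and the timestamp value are invented for illustration):

```shell
# Stand-in for multi-line command output; tail -n1 keeps only the final line,
# which is where the decoded end-version would appear.
LAST_BACKUP_TS=$(printf 'progress line\n435844546560000000\n' | tail -n1)
echo "$LAST_BACKUP_TS"
```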

br/br-pitr-guide.md (+6 −6)

@@ -19,7 +19,7 @@ Before you back up or restore data using the br command-line tool (hereinafter r
 > - The following examples assume that Amazon S3 access keys and secret keys are used to authorize permissions. If IAM roles are used to authorize permissions, you need to set `--send-credentials-to-tikv` to `false`.
 > - If other storage systems or authorization methods are used to authorize permissions, adjust the parameter settings according to [Backup Storages](/br/backup-and-restore-storages.md).
 
-To start a log backup, run `br log start`. A cluster can only run one log backup task each time.
+To start a log backup, run `tiup br log start`. A cluster can only run one log backup task each time.
 
 ```shell
 tiup br log start --task-name=pitr --pd "${PD_IP}:2379" \

@@ -48,7 +48,7 @@ checkpoint[global]: 2022-05-13 11:31:47.2 +0800; gap=4m53s
 
 ### Run full backup regularly
 
-The snapshot backup can be used as a method of full backup. You can run `br backup full` to back up the cluster snapshot to the backup storage according to a fixed schedule (for example, every 2 days).
+The snapshot backup can be used as a method of full backup. You can run `tiup br backup full` to back up the cluster snapshot to the backup storage according to a fixed schedule (for example, every 2 days).
 
 ```shell
 tiup br backup full --pd "${PD_IP}:2379" \

@@ -57,10 +57,10 @@ tiup br backup full --pd "${PD_IP}:2379" \
 
 ## Run PITR
 
-To restore the cluster to any point in time within the backup retention period, you can use `br restore point`. When you run this command, you need to specify the **time point you want to restore**, **the latest snapshot backup data before the time point**, and the **log backup data**. BR will automatically determine and read data needed for the restore, and then restore these data to the specified cluster in order.
+To restore the cluster to any point in time within the backup retention period, you can use `tiup br restore point`. When you run this command, you need to specify the **time point you want to restore**, **the latest snapshot backup data before the time point**, and the **log backup data**. BR will automatically determine and read data needed for the restore, and then restore these data to the specified cluster in order.
 
 ```shell
-br restore point --pd "${PD_IP}:2379" \
+tiup br restore point --pd "${PD_IP}:2379" \
 --storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}' \
 --full-backup-storage='s3://backup-101/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}' \
 --restored-ts '2022-05-15 18:00:00+0800'

@@ -80,7 +80,7 @@ Restore KV Files <--------------------------------------------------------------
 
 As described in the [Usage Overview of TiDB Backup and Restore](/br/br-use-overview.md):
 
-To perform PITR, you need to restore the full backup before the restore point, and the log backup between the full backup point and the restore point. Therefore, for log backups that exceed the backup retention period, you can use `br log truncate` to delete the backup before the specified time point. **It is recommended to only delete the log backup before the full snapshot**.
+To perform PITR, you need to restore the full backup before the restore point, and the log backup between the full backup point and the restore point. Therefore, for log backups that exceed the backup retention period, you can use `tiup br log truncate` to delete the backup before the specified time point. **It is recommended to only delete the log backup before the full snapshot**.
 
 The following steps describe how to clean up backup data that exceeds the backup retention period:
 
@@ -100,7 +100,7 @@ The following steps describe how to clean up backup data that exceeds the backup
 4. Delete snapshot data earlier than the snapshot backup `FULL_BACKUP_TS`:
 
 ```shell
-rm -rf s3://backup-101/snapshot-${date}
+aws s3 rm --recursive s3://backup-101/snapshot-${date}
 ```
 
 ## Performance capabilities of PITR
