
Commit 6936c1e

fix typos in docs (#17381)
1 parent 2c13d8b commit 6936c1e

File tree

79 files changed (+98, -98 lines changed)


Diff for: CONTRIBUTING.md (+1 -1)

@@ -44,7 +44,7 @@ Please check out these templates before you submit a pull request:
  We use separate branches to maintain different versions of TiDB documentation.

  - The [documentation under development](https://docs.pingcap.com/tidb/dev) is maintained in the `master` branch.
- - The [published documentation](https://docs.pingcap.com/tidb/stable/) is maintained in the corresponding `release-<verion>` branch. For example, TiDB v7.5 documentation is maintained in the `release-7.5` branch.
+ - The [published documentation](https://docs.pingcap.com/tidb/stable/) is maintained in the corresponding `release-<version>` branch. For example, TiDB v7.5 documentation is maintained in the `release-7.5` branch.
  - The [archived documentation](https://docs-archive.pingcap.com/) is no longer maintained and does not receive any further updates.

  ### Use cherry-pick labels

Diff for: benchmark/benchmark-tidb-using-sysbench.md (+1 -1)

@@ -20,7 +20,7 @@ server_configs:
    log.level: "error"
  ```

- It is also recommended to make sure [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610) is enabled and that you allow sysbench to use prepared statements by using `--db-ps-mode=auto`. See the [SQL Prepared Execution Plan Cache](/sql-prepared-plan-cache.md) for documetnation about what the SQL plan cache does and how to monitor it.
+ It is also recommended to make sure [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610) is enabled and that you allow sysbench to use prepared statements by using `--db-ps-mode=auto`. See the [SQL Prepared Execution Plan Cache](/sql-prepared-plan-cache.md) for documentation about what the SQL plan cache does and how to monitor it.

  > **Note:**
  >
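For context, a sysbench invocation that enables prepared statements via `--db-ps-mode=auto` might look like the following. This is a sketch, not part of the commit: the host, port, user, workload, and sizing values are placeholders.

```shell
# Run the oltp_point_select workload against TiDB with prepared statements.
# Host, port, credentials, and table sizing below are placeholder values.
sysbench oltp_point_select \
    --db-driver=mysql \
    --mysql-host=127.0.0.1 \
    --mysql-port=4000 \
    --mysql-user=root \
    --db-ps-mode=auto \
    --tables=16 \
    --table-size=1000000 \
    run
```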

Diff for: best-practices-on-public-cloud.md (+1 -1)

@@ -180,7 +180,7 @@ To reduce the number of Regions and alleviate the heartbeat overhead on the syst

  ## After tuning

- After the tunning, the following effects can be observed:
+ After the tuning, the following effects can be observed:

  - The TSO requests per second are decreased to 64,800.
  - The CPU utilization is significantly reduced from approximately 4,600% to 1,400%.

Diff for: check-before-deployment.md (+1 -1)

@@ -269,7 +269,7 @@ To check whether the NTP service is installed and whether it synchronizes with t
    Unable to talk to NTP daemon. Is it running?
    ```

- 3. Run the `chronyc tracking` command to check wheter the Chrony service synchronizes with the NTP server.
+ 3. Run the `chronyc tracking` command to check whether the Chrony service synchronizes with the NTP server.

  > **Note:**
  >
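As a sketch of the check described in step 3, with typical `chronyc` responses noted in comments (these outputs are typical for Chrony, not captured from this environment):

```shell
# Query Chrony's NTP synchronization state.
chronyc tracking
# When synchronized, the output includes a line such as:
#   Leap status     : Normal
# If the Chrony daemon is not running, chronyc reports:
#   506 Cannot talk to daemon
```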

Diff for: configure-memory-usage.md (+1 -1)

@@ -57,7 +57,7 @@ Currently, the memory limit set by `tidb_server_memory_limit` **DOES NOT** termi
  >
  > + During the startup process, TiDB does not guarantee that the [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) limit is enforced. If the free memory of the operating system is insufficient, TiDB might still encounter OOM. You need to ensure that the TiDB instance has enough available memory.
  > + In the process of memory control, the total memory usage of TiDB might slightly exceed the limit set by `tidb_server_memory_limit`.
- > + Since v6.5.0, the configruation item `server-memory-quota` is deprecated. To ensure compatibility, after you upgrade your cluster to v6.5.0 or a later version, `tidb_server_memory_limit` will inherit the value of `server-memory-quota`. If you have not configured `server-memory-quota` before the upgrade, the default value of `tidb_server_memory_limit` is used, which is `80%`.
+ > + Since v6.5.0, the configuration item `server-memory-quota` is deprecated. To ensure compatibility, after you upgrade your cluster to v6.5.0 or a later version, `tidb_server_memory_limit` will inherit the value of `server-memory-quota`. If you have not configured `server-memory-quota` before the upgrade, the default value of `tidb_server_memory_limit` is used, which is `80%`.

  When the memory usage of a tidb-server instance reaches a certain proportion of the total memory (the proportion is controlled by the system variable [`tidb_server_memory_limit_gc_trigger`](/system-variables.md#tidb_server_memory_limit_gc_trigger-new-in-v640)), tidb-server will try to trigger a Golang GC to relieve memory stress. To avoid frequent GCs that cause performance issues due to the instance memory fluctuating around the threshold, this GC method will trigger GC at most once every minute.
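To illustrate the variables discussed above, the following SQL sketch inspects and sets them. The `80%` value is the documented default mentioned in the hunk, not a tuning recommendation.

```sql
-- Inspect the instance-level memory limit and its GC trigger ratio.
SELECT @@GLOBAL.tidb_server_memory_limit,
       @@GLOBAL.tidb_server_memory_limit_gc_trigger;

-- Set the limit to the default of 80% of total memory.
SET GLOBAL tidb_server_memory_limit = '80%';
```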

Diff for: dashboard/dashboard-session-sso.md (+1 -1)

@@ -104,7 +104,7 @@ First, create an Okta Application Integration to integrate SSO.

  ![Sample Step](/media/dashboard/dashboard-session-sso-okta-1.png)

- 4. In the poped up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**.
+ 4. In the popped up dialog, choose **OIDC - OpenID Connect** in **Sign-in method**.

  5. Choose **Single-Page Application** in **Application Type**.

Diff for: ddl-introduction.md (+1 -1)

@@ -77,7 +77,7 @@ absent -> delete only -> write only -> write reorg -> public
  For users, the newly created index is unavailable before the `public` state.

  <SimpleTab>
- <div label="Online DDL asychronous change before TiDB v6.2.0">
+ <div label="Online DDL asynchronous change before TiDB v6.2.0">

  Before v6.2.0, the process of handling asynchronous schema changes in the TiDB SQL layer is as follows:

Diff for: dm/dm-enable-tls.md (+1 -1)

@@ -109,7 +109,7 @@ This section introduces how to enable encrypted data transmission between DM com

  ### Enable encrypted data transmission for downstream TiDB

- 1. Configure the downstream TiDB to use encrypted connections. For detailed operatons, refer to [Configure TiDB server to use secure connections](/enable-tls-between-clients-and-servers.md#configure-tidb-server-to-use-secure-connections).
+ 1. Configure the downstream TiDB to use encrypted connections. For detailed operations, refer to [Configure TiDB server to use secure connections](/enable-tls-between-clients-and-servers.md#configure-tidb-server-to-use-secure-connections).

  2. Set the TiDB client certificate in the task configuration file:
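Step 2 above points at the task file's target database block. A sketch of such a fragment follows; the connection details and certificate paths are placeholders, not taken from the commit.

```yaml
# DM task configuration fragment (illustrative values).
target-database:
  host: "127.0.0.1"
  port: 4000
  user: "dm_user"
  password: ""
  security:
    ssl-ca: "/path/to/ca.pem"          # CA that signed the TiDB server certificate
    ssl-cert: "/path/to/client.pem"    # client certificate presented to TiDB
    ssl-key: "/path/to/client-key.pem" # private key for the client certificate
```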

Diff for: dm/dm-faq.md (+1 -1)

@@ -365,7 +365,7 @@ To solve this issue, you are recommended to maintain DM clusters using TiUP. In

  ## Why DM-master cannot be connected when I use dmctl to execute commands?

- When using dmctl execute commands, you might find the connection to DM master fails (even if you have specified the parameter value of `--master-addr` in the command), and the error message is like `RawCause: context deadline exceeded, Workaround: please check your network connection.`. But afer checking the network connection using commands like `telnet <master-addr>`, no exception is found.
+ When using dmctl execute commands, you might find the connection to DM master fails (even if you have specified the parameter value of `--master-addr` in the command), and the error message is like `RawCause: context deadline exceeded, Workaround: please check your network connection.`. But after checking the network connection using commands like `telnet <master-addr>`, no exception is found.

  In this case, you can check the environment variable `https_proxy` (note that it is **https**). If this variable is configured, dmctl automatically connects the host and port specified by `https_proxy`. If the host does not have a corresponding `proxy` forwarding service, the connection fails.
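The proxy check described above can be sketched as follows; the master address is a placeholder, and `query-status` is a standard dmctl command used here only as an example.

```shell
# Show whether an HTTPS proxy is configured in this shell.
echo "https_proxy=${https_proxy:-<not set>}"

# Bypass the proxy for a single dmctl invocation and retry.
https_proxy="" dmctl --master-addr 172.16.10.71:8261 query-status
```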

Diff for: dm/dm-open-api.md (+1 -1)

@@ -1346,7 +1346,7 @@ curl -X 'GET' \
    "name": "string",
    "source_name": "string",
    "worker_name": "string",
-   "stage": "runing",
+   "stage": "running",
    "unit": "sync",
    "unresolved_ddl_lock_id": "string",
    "load_status": {

Diff for: dm/dm-table-routing.md (+1 -1)

@@ -86,7 +86,7 @@ To migrate the upstream instances to the downstream `test`.`t`, you must create
  Assuming in the scenario of sharded schemas and tables, you want to migrate the `test_{1,2,3...}`.`t_{1,2,3...}` tables in two upstream MySQL instances to the `test`.`t` table in the downstream TiDB instance. At the same time, you want to extract the source information of the sharded tables and write it to the downstream merged table.

- To migrate the upstream instances to the downstream `test`.`t`, you must create routing rules similar to the previous section [Merge sharded schemas and tables](#merge-sharded-schemas-and-tables). In addtion, you need to add the `extract-table`, `extract-schema`, and `extract-source` configurations:
+ To migrate the upstream instances to the downstream `test`.`t`, you must create routing rules similar to the previous section [Merge sharded schemas and tables](#merge-sharded-schemas-and-tables). In addition, you need to add the `extract-table`, `extract-schema`, and `extract-source` configurations:

  - `extract-table`: For a sharded table matching `schema-pattern` and `table-pattern`, DM extracts the sharded table name by using `table-regexp` and writes the name suffix without the `t_` part to `target-column` of the merged table, that is, the `c_table` column.
  - `extract-schema`: For a sharded schema matching `schema-pattern` and `table-pattern`, DM extracts the sharded schema name by using `schema-regexp` and writes the name suffix without the `test_` part to `target-column` of the merged table, that is, the `c_schema` column.
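A routing rule combining these `extract-*` options might look like the following sketch; the rule name is illustrative, and the patterns mirror the `test_*`.`t_*` scenario described above.

```yaml
# DM task configuration fragment (illustrative rule name and patterns).
routes:
  route-rule-1:
    schema-pattern: "test_*"
    table-pattern: "t_*"
    target-schema: "test"
    target-table: "t"
    extract-table:
      table-regexp: "t_(.*)"      # suffix after "t_" goes into c_table
      target-column: "c_table"
    extract-schema:
      schema-regexp: "test_(.*)"  # suffix after "test_" goes into c_schema
      target-column: "c_schema"
    extract-source:
      source-regexp: "(.*)"       # full source ID goes into c_source
      target-column: "c_source"
```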

Diff for: dm/monitor-a-dm-cluster.md (+1 -1)

@@ -94,7 +94,7 @@ The following metrics show only when `task-mode` is in the `incremental` or `all
  | total sqls jobs | The number of newly added jobs per unit of time | N/A | N/A |
  | finished sqls jobs | The number of finished jobs per unit of time | N/A | N/A |
  | statement execution latency | The duration that the binlog replication unit executes the statement to the downstream (in seconds) | N/A | N/A |
- | add job duration | The duration tht the binlog replication unit adds a job to the queue (in seconds) | N/A | N/A |
+ | add job duration | The duration that the binlog replication unit adds a job to the queue (in seconds) | N/A | N/A |
  | DML conflict detect duration | The duration that the binlog replication unit detects the conflict in DML (in seconds) | N/A | N/A |
  | skipped event duration | The duration that the binlog replication unit skips a binlog event (in seconds) | N/A | N/A |
  | unsynced tables | The number of tables that have not received the shard DDL statement in the current subtask | N/A | N/A |

Diff for: dm/quick-start-create-source.md (+1 -1)

@@ -84,7 +84,7 @@ The returned results are as follows:
  After creating a data source, you can use the following command to query the data source:

- - If you konw the `source-id` of the data source, you can use the `dmctl config source <source-id>` command to directly check the configuration of the data source:
+ - If you know the `source-id` of the data source, you can use the `dmctl config source <source-id>` command to directly check the configuration of the data source:

  {{< copyable "shell-regular" >}}

Diff for: explain-index-merge.md (+1 -1)

@@ -94,6 +94,6 @@ When using the intersection-type index merge to access tables, the optimizer can
  >
  > - If the optimizer can choose the single index scan method (other than full table scan) for a query plan, the optimizer will not automatically use index merge. For the optimizer to use index merge, you need to use the optimizer hint.
  >
- > - Index Merge is not supported in [tempoaray tables](/temporary-tables.md) for now.
+ > - Index Merge is not supported in [temporary tables](/temporary-tables.md) for now.
  >
  > - The intersection-type index merge will not automatically be selected by the optimizer. You must specify the **table name and index name** using the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint for it to be selected.
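To make the hint requirement above concrete, a sketch of the `USE_INDEX_MERGE` hint syntax; the table, index, and column names are placeholders.

```sql
SELECT /*+ USE_INDEX_MERGE(t1, idx_a, idx_b) */ *
FROM t1
WHERE a = 1 AND b = 2;
```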

Diff for: faq/manage-cluster-faq.md (+1 -1)

@@ -73,7 +73,7 @@ TiDB provides a few features and [tools](/ecosystem-tool-user-guide.md), with wh

  The TiDB community is highly active. The engineers have been keeping optimizing features and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to keep informed of the latest version, see [TiDB Release Timeline](/releases/release-timeline.md).

- It is recommeneded to deploy TiDB [using TiUP](/production-deployment-using-tiup.md) or [using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable). TiDB has a unified management of the version number. You can view the version number using one of the following methods:
+ It is recommended to deploy TiDB [using TiUP](/production-deployment-using-tiup.md) or [using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable). TiDB has a unified management of the version number. You can view the version number using one of the following methods:

  - `select tidb_version()`
  - `tidb-server -V`

Diff for: faq/migration-tidb-faq.md (+1 -1)

@@ -93,7 +93,7 @@ To migrate all the data or migrate incrementally from DB2 or Oracle to TiDB, see

  Currently, it is recommended to use OGG.

- ### Error: `java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in `batches`
+ ### Error: `java.sql.BatchUpdateException:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in `batches`

  In Sqoop, `--batch` means committing 100 `statement`s in each batch, but by default each `statement` contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.
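One way around the limit explained above is to shrink the per-statement record count so a batch stays under the transaction cap. This is a sketch using standard Sqoop options; the connection string, paths, and the value `10` are illustrative, not from the commit.

```shell
sqoop export \
    -Dsqoop.export.records.per.statement=10 \
    --connect "jdbc:mysql://127.0.0.1:4000/test" \
    --username root \
    --table target_table \
    --export-dir /data/to/export \
    --batch
```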

Diff for: faq/sql-faq.md (+1 -1)

@@ -151,7 +151,7 @@ TiDB supports modifying the [`sql_mode`](/system-variables.md#sql_mode) system v
  - Changes to [`GLOBAL`](/sql-statements/sql-statement-set-variable.md) scoped variables propagate to the rest servers of the cluster and persist across restarts. This means that you do not need to change the `sql_mode` value on each TiDB server.
  - Changes to `SESSION` scoped variables only affect the current client session. After restarting a server, the changes are lost.

- ## Error: `java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches
+ ## Error: `java.sql.BatchUpdateException:statement count 5001 exceeds the transaction limitation` while using Sqoop to write data into TiDB in batches

  In Sqoop, `--batch` means committing 100 statements in each batch, but by default each statement contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.

Diff for: functions-and-operators/precision-math.md (+1 -1)

@@ -51,7 +51,7 @@ DECIMAL columns do not store a leading `+` character or `-` character or leading

  DECIMAL columns do not permit values larger than the range implied by the column definition. For example, a `DECIMAL(3,0)` column supports a range of `-999` to `999`. A `DECIMAL(M,D)` column permits at most `M - D` digits to the left of the decimal point.

- For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/pkg/types/mydecimal.go) in TiDB souce code.
+ For more information about the internal format of the DECIMAL values, see [`mydecimal.go`](https://github.com/pingcap/tidb/blob/master/pkg/types/mydecimal.go) in TiDB source code.

  ## Expression handling
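The `DECIMAL(3,0)` range mentioned above can be checked directly. Assuming strict SQL mode, the out-of-range insert fails; the table name is illustrative.

```sql
CREATE TABLE dec_demo (v DECIMAL(3,0));
INSERT INTO dec_demo VALUES (999), (-999);  -- both within range
INSERT INTO dec_demo VALUES (1000);         -- out of range for DECIMAL(3,0)
```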

Diff for: functions-and-operators/string-functions.md (+2 -2)

@@ -218,10 +218,10 @@ SELECT CHAR_LENGTH("TiDB") AS LengthOfString;
  ```

  ```sql
- SELECT CustomerName, CHAR_LENGTH(CustomerName) AS LenghtOfName FROM Customers;
+ SELECT CustomerName, CHAR_LENGTH(CustomerName) AS LengthOfName FROM Customers;

  +--------------------+--------------+
- | CustomerName       | LenghtOfName |
+ | CustomerName       | LengthOfName |
  +--------------------+--------------+
  | Albert Einstein    |           15 |
  | Robert Oppenheimer |           18 |
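As a side note on the function shown above, `CHAR_LENGTH` counts characters while `LENGTH` counts bytes; a quick sketch:

```sql
SELECT CHAR_LENGTH('TiDB') AS num_chars, LENGTH('TiDB') AS num_bytes;
-- For ASCII strings the two match; for multibyte strings
-- (for example, UTF-8 encoded CJK text) LENGTH returns more.
```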

Diff for: grafana-pd-dashboard.md (+1 -1)

@@ -78,7 +78,7 @@ The following is the description of PD Dashboard metrics items:
  - Store Write rate keys: The total written keys on each TiKV instance
  - Hot cache write entry number: The number of peers on each TiKV instance that are in the write hotspot statistics module
  - Selector events: The event count of Selector in the hotspot scheduling module
- - Direction of hotspot move leader: The direction of leader movement in the hotspot scheduling. The positive number means scheduling into the instance. The negtive number means scheduling out of the instance
+ - Direction of hotspot move leader: The direction of leader movement in the hotspot scheduling. The positive number means scheduling into the instance. The negative number means scheduling out of the instance
  - Direction of hotspot move peer: The direction of peer movement in the hotspot scheduling. The positive number means scheduling into the instance. The negative number means scheduling out of the instance

  ![PD Dashboard - Hot write metrics](/media/pd-dashboard-hotwrite-v4.png)

Diff for: information-schema/information-schema-deadlocks.md (+1 -1)

@@ -12,7 +12,7 @@ USE INFORMATION_SCHEMA;
  DESC deadlocks;
  ```

- Thhe output is as follows:
+ The output is as follows:

  ```sql
  +-------------------------+---------------------+------+------+---------+-------+
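Beyond `DESC`, the table can be queried directly. A sketch of a typical query follows; the column names are taken from the TiDB `DEADLOCKS` schema and should be verified against your version.

```sql
SELECT deadlock_id, occur_time, try_lock_trx_id, trx_holding_lock
FROM information_schema.deadlocks
ORDER BY occur_time DESC;
```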

Diff for: migrate-small-mysql-to-tidb.md (+2 -2)

@@ -137,8 +137,8 @@ To view the historical status of the migration task and other internal metrics,

  If you have deployed Prometheus, Alertmanager, and Grafana when deploying DM using TiUP, you can access Grafana using the IP address and port specified during the deployment. You can then select the DM dashboard to view DM-related monitoring metrics.

- - The log directory of DM-master: specified by the DM-master process parameter `--log-file`. If you have deployd DM using TiUP, the log directory is `/dm-deploy/dm-master-8261/log/` by default.
- - The log directory of DM-worker: specified by the DM-worker process parameter `--log-file`. If you have deployd DM using TiUP, the log directory is `/dm-deploy/dm-worker-8262/log/` by default.
+ - The log directory of DM-master: specified by the DM-master process parameter `--log-file`. If you have deployed DM using TiUP, the log directory is `/dm-deploy/dm-master-8261/log/` by default.
+ - The log directory of DM-worker: specified by the DM-worker process parameter `--log-file`. If you have deployed DM using TiUP, the log directory is `/dm-deploy/dm-worker-8262/log/` by default.

  ## What's next

Diff for: migrate-with-pt-ghost.md (+1 -1)

@@ -7,7 +7,7 @@ summary: Learn how to use DM to replicate incremental data from databases that u
  In production scenarios, table locking during DDL execution can block the reads from or writes to the database to a certain extent. Therefore, online DDL tools are often used to execute DDLs to minimize the impact on reads and writes. Common DDL tools are [gh-ost](https://github.com/github/gh-ost) and [pt-osc](https://www.percona.com/doc/percona-toolkit/3.0/pt-online-schema-change.html).

- When using DM to migrate data from MySQL to TiDB, you can enbale `online-ddl` to allow collaboration of DM and gh-ost or pt-osc.
+ When using DM to migrate data from MySQL to TiDB, you can enable `online-ddl` to allow collaboration of DM and gh-ost or pt-osc.

  For the detailed replication instructions, refer to the following documents by scenarios:
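Enabling the collaboration mentioned above is a one-line switch in the DM task file. A sketch follows; the task name and mode are illustrative, and surrounding task options are omitted.

```yaml
# DM task configuration fragment (illustrative values).
name: "migrate-task"
task-mode: all
online-ddl: true   # let DM recognize gh-ost/pt-osc shadow tables
```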

Diff for: online-unsafe-recovery.md (+2 -2)

@@ -38,7 +38,7 @@ Before using Online Unsafe Recovery, make sure that the following requirements a

  ### Step 1. Specify the stores that cannot be recovered

- To trigger automatic recovery, use PD Control to execute [`unsafe remove-failed-stores <store_id>[,<store_id>,...]`](/pd-control.md#unsafe-remove-failed-stores-store-ids--show) and specify **all** the TiKV nodes that cannot be recovered, seperated by commas.
+ To trigger automatic recovery, use PD Control to execute [`unsafe remove-failed-stores <store_id>[,<store_id>,...]`](/pd-control.md#unsafe-remove-failed-stores-store-ids--show) and specify **all** the TiKV nodes that cannot be recovered, separated by commas.

  {{< copyable "shell-regular" >}}

@@ -174,7 +174,7 @@ After the recovery is completed, the data and index might be inconsistent. Use t
  ADMIN CHECK TABLE table_name;
  ```

- If there are inconsistent indexes, you can fix the index inconsistency by renaming the old index, creating a new index, and then droping the old index.
+ If there are inconsistent indexes, you can fix the index inconsistency by renaming the old index, creating a new index, and then dropping the old index.

  1. Rename the old index:
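The rename/create/drop sequence described above looks roughly like the following; the table, column, and index names are placeholders.

```sql
-- 1. Move the inconsistent index out of the way.
ALTER TABLE t RENAME INDEX idx_a TO idx_a_broken;
-- 2. Rebuild the index from the recovered data.
CREATE INDEX idx_a ON t (a);
-- 3. Remove the old, inconsistent copy.
ALTER TABLE t DROP INDEX idx_a_broken;
-- Re-run the consistency check.
ADMIN CHECK TABLE t;
```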

Diff for: oracle-functions-to-tidb.md (+2 -2)

@@ -65,13 +65,13 @@ TiDB distinguishes between `NULL` and an empty string `''`.
  Oracle supports reading and writing to the same table in an `INSERT` statement. For example:

  ```sql
- INSERT INTO table1 VALUES (feild1,(SELECT feild2 FROM table1 WHERE...))
+ INSERT INTO table1 VALUES (field1,(SELECT field2 FROM table1 WHERE...))
  ```

  TiDB does not support reading and writing to the same table in a `INSERT` statement. For example:

  ```sql
- INSERT INTO table1 VALUES (feild1,(SELECT T.fields2 FROM table1 T WHERE...))
+ INSERT INTO table1 VALUES (field1,(SELECT T.fields2 FROM table1 T WHERE...))
  ```

  ### Get the first n rows from a query
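As a side note on the limitation above, a commonly used MySQL-family workaround is to wrap the subquery in a derived table so that the target table is not read directly in the `INSERT`. This is a sketch only; whether it applies to a given TiDB version should be verified, and the literal value and predicate are placeholders as in the original.

```sql
-- Materialize the subquery in a derived table (tmp) first.
INSERT INTO table1
SELECT 'field1', tmp.field2
FROM (SELECT field2 FROM table1 WHERE ...) AS tmp;
```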
