Commit 5685cad

tiproxy: add a guide to enable tiproxy using tiup (#20799)
1 parent 621de48 commit 5685cad

tiproxy/tiproxy-overview.md

Lines changed: 72 additions & 25 deletions

After this change, the updated parts of the "Installation and usage" section read as follows. Unchanged lines that fall between the diff hunks are elided and marked with `[...]`.

## Installation and usage

This section describes how to deploy and change TiProxy using TiUP. You can either [create a new cluster with TiProxy](#create-a-cluster-with-tiproxy) or [enable TiProxy for an existing cluster](#enable-tiproxy-for-an-existing-cluster) by scaling out TiProxy.

> **Note:**
>
> Make sure that TiUP is v1.16.1 or later.
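
A quick way to check the locally installed TiUP version, and to update it if it is older than v1.16.1 (standard TiUP commands; none of the values here come from the commit itself):

```shell
# Print the installed TiUP version.
tiup --version

# Update TiUP itself and the cluster component.
tiup update --self && tiup update cluster
```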

For other deployment methods, refer to the following documents:

- To deploy TiProxy using TiDB Operator, see the [TiDB Operator](https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/deploy-tiproxy) documentation.
- To quickly deploy TiProxy locally using TiUP, see [Deploy TiProxy](/tiup/tiup-playground.md#deploy-tiproxy). A minimal playground sketch follows this list.
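
A local trial run with TiUP Playground might look like the following. The `--tiproxy` flag is an assumption based on recent TiUP Playground releases; confirm it with `tiup playground --help` and the linked page before relying on it:

```shell
# Start a local playground cluster with one TiProxy instance (flag name assumed; verify first).
tiup playground --tiproxy 1
```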

### Create a cluster with TiProxy

The following steps describe how to deploy TiProxy when creating a new cluster.

1. Configure the TiDB instances.

    When using TiProxy, you need to configure [`graceful-wait-before-shutdown`](/tidb-configuration-file.md#graceful-wait-before-shutdown-new-in-v50) for TiDB. This value must be greater than the duration of the longest transaction of the application to avoid client connection interruption when the TiDB server goes offline. You can view the transaction duration through the [Transaction metrics on the TiDB monitoring dashboard](/grafana-tidb-dashboard.md#transaction). For details, see [Limitations](#limitations).

    A configuration example is as follows:

    ```yaml
    server_configs:
      tidb:
        graceful-wait-before-shutdown: 15
    ```

2. Configure the TiProxy instances.

    To ensure the high availability of TiProxy, it is recommended to deploy at least two TiProxy instances and configure a virtual IP by setting [`ha.virtual-ip`](/tiproxy/tiproxy-configuration.md#virtual-ip) and [`ha.interface`](/tiproxy/tiproxy-configuration.md#interface) to route the traffic to the available TiProxy instance.

    Note the following:

    - Select the model and number of TiProxy instances based on the workload type and maximum QPS. For details, see [TiProxy Performance Test Report](/tiproxy/tiproxy-performance-test.md).
    - Because there are usually fewer TiProxy instances than TiDB server instances, the network bandwidth of TiProxy is more likely to become a bottleneck. For example, on AWS, the baseline network bandwidth of EC2 instances in the same series is not proportional to the number of CPU cores. When network bandwidth becomes a bottleneck, you can split the TiProxy instance into more and smaller instances to increase QPS. For details, see [Network specifications](https://docs.aws.amazon.com/ec2/latest/instancetypes/co.html#co_network).
    - It is recommended to specify the TiProxy version in the topology configuration file. This prevents TiProxy from being upgraded automatically when you execute [`tiup cluster upgrade`](/tiup/tiup-component-cluster-upgrade.md) to upgrade the TiDB cluster, and thus avoids client connections being disconnected due to the TiProxy upgrade.

    For more information about the template for TiProxy, see [A simple template for the TiProxy topology](https://github.com/pingcap/docs/blob/master/config-templates/simple-tiproxy.yaml).

    [...]

    ```yaml
    component_versions:
      tiproxy: "v1.2.0"
    server_configs:
      tiproxy:
        ha.virtual-ip: "10.0.1.10/24"
        ha.interface: "eth0"
    tiproxy_servers:
      - host: 10.0.1.11
        port: 6000
        status_port: 3080
      - host: 10.0.1.12
        port: 6000
        status_port: 3080
    ```
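
    Optionally, before deploying, you can run TiUP's pre-deployment checks against the topology file (a sketch; `topology.yaml` is a placeholder file name, and you might need SSH-related flags such as `--user` depending on your environment):

    ```shell
    # Check the target hosts defined in the topology file.
    tiup cluster check topology.yaml
    ```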

3. Start the cluster.

    To start the cluster using TiUP, see [TiUP documentation](/tiup/tiup-documentation-guide.md).
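
    For reference, a minimal sketch with standard TiUP commands, assuming the topology above is saved as `topology.yaml` (the cluster name and TiDB version are placeholders, not values from the commit):

    ```shell
    # Deploy a new cluster from the topology file.
    tiup cluster deploy <cluster-name> <tidb-version> topology.yaml

    # Start all components of the cluster, including TiProxy.
    tiup cluster start <cluster-name>
    ```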

4. Connect to TiProxy.

    After the cluster is deployed, the TiDB server port and TiProxy port will be exposed at the same time. The client should connect to the TiProxy port instead of directly connecting to the TiDB server.
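
    For example, assuming the virtual IP (`10.0.1.10`) and TiProxy port (`6000`) from the example topology above, and a standard MySQL client (host, port, and user here are illustrative):

    ```shell
    # Connect through TiProxy rather than directly to a TiDB server port such as 4000.
    mysql -h 10.0.1.10 -P 6000 -u root -p
    ```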

### Enable TiProxy for an existing cluster

For clusters that do not have TiProxy deployed, you can enable TiProxy by scaling out TiProxy instances.

1. Configure the TiProxy instances.

    Configure TiProxy in a separate topology file, such as `tiproxy.toml`:

    ```yaml
    component_versions:
      tiproxy: "v1.2.0"
    server_configs:
      tiproxy:
        ha.virtual-ip: "10.0.1.10/24"
        ha.interface: "eth0"
    tiproxy_servers:
      - host: 10.0.1.11
        deploy_dir: "/tiproxy-deploy"
        port: 6000
        status_port: 3080
      - host: 10.0.1.12
        deploy_dir: "/tiproxy-deploy"
        port: 6000
        status_port: 3080
    ```

2. Scale out TiProxy.

    Use the [`tiup cluster scale-out`](/tiup/tiup-component-cluster-scale-out.md) command to scale out the TiProxy instances. For example:

    ```shell
    tiup cluster scale-out <cluster-name> tiproxy.toml
    ```

    When you scale out TiProxy, TiUP automatically configures a self-signed certificate (that is, [`security.session-token-signing-cert`](/tidb-configuration-file.md#session-token-signing-cert-new-in-v640) and [`security.session-token-signing-key`](/tidb-configuration-file.md#session-token-signing-key-new-in-v640)) for TiDB. The certificate is used for connection migration.
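
    To verify that the new TiProxy instances are up after scaling out, you can check the cluster status (a standard TiUP command; the cluster name is a placeholder):

    ```shell
    # List all instances and their status; the new tiproxy instances should be reported as Up.
    tiup cluster display <cluster-name>
    ```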

3. Modify the TiDB configuration.

    When using TiProxy, you need to configure [`graceful-wait-before-shutdown`](/tidb-configuration-file.md#graceful-wait-before-shutdown-new-in-v50) for TiDB. This value must be greater than the duration of the longest transaction of the application to avoid client connection interruption when the TiDB server goes offline. You can view the transaction duration through the [Transaction metrics on the TiDB monitoring dashboard](/grafana-tidb-dashboard.md#transaction). For details, see [Limitations](#limitations).

    A configuration example is as follows:

    ```yaml
    server_configs:
      tidb:
        graceful-wait-before-shutdown: 15
    ```
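
    Because this setting lives in the topology managed by TiUP, one common way to apply it is to edit the stored cluster configuration (a sketch; `tiup cluster edit-config` is a standard TiUP command, and the cluster name is a placeholder):

    ```shell
    # Open the cluster configuration in an editor, add the server_configs entry above, and save.
    tiup cluster edit-config <cluster-name>
    ```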

4. Reload the TiDB configuration.

    Because TiDB is now configured with a self-signed certificate and `graceful-wait-before-shutdown`, you need to run the [`tiup cluster reload`](/tiup/tiup-component-cluster-reload.md) command for these settings to take effect. Note that reloading the configuration triggers a rolling restart of TiDB, which disconnects client connections.

    ```shell
    tiup cluster reload <cluster-name> -R tidb
    ```

5. Connect to TiProxy.

    After you enable TiProxy, the client should connect to the TiProxy port instead of the TiDB server port.

### Modify TiProxy configuration