
Commit be2c0ba

chore: improve locality load balancing doc
Signed-off-by: Tom <[email protected]>
1 parent: b307a77


docs/application-layer/locality-loadbalance.md

Lines changed: 30 additions & 33 deletions
@@ -3,25 +3,25 @@ sidebar_position: 9
 title: Locality Load Balancing
 ---

-This document introduces how to use Locality Load Balancing with Istio in the Kmesh.
+This document explains how to use Locality Load Balancing with Istio in Kmesh.

-> The current Kmesh Locality Load Balancing is at the L4 level and only support [Locality Failover](https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/failover/).
+Note: Kmesh's current Locality Load Balancing operates at L4 and only supports [Locality Failover](https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/failover/).

 ## What is Locality Load Balancing?

-A locality defines the geographic location of a workload instance within mesh. Locality Load Balancing in service mesh helps improve the availability and performance of services by intelligently routing traffic based on the location of the service instances.
+A locality describes the geographic location of a workload instance in the mesh. Locality Load Balancing improves availability and performance by routing traffic based on the location of service instances.

-We strongly recommend that you first read https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/ to understand what locality load balancing is.
+We strongly recommend reading https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/ for background on locality load balancing.

 ## Supported Modes and Configuration Methods for Kmesh

-Currently, Istio's ambient mode only supports specifying a fixed locality load balancing policy by configuring specific fields. This includes two modes: PreferClose and Local.
+Currently, Istio's ambient mode supports specifying a fixed locality load-balancing policy via configuration. Kmesh supports two modes: PreferClose and Local.

 ### 1. PreferClose

-A failover mode that uses NETWORK, REGION, ZONE, and SUBZONE as the routingPreference.
+Failover mode that uses NETWORK, REGION, ZONE, and SUBZONE as the routing preference.

-- With `spec.trafficDistribution`k8s >= beta [1.31.0](https://kubernetes.io/docs/concepts/services-networking/service/), isito >= [1.23.1](https://istio.io/latest/news/releases/1.23.x/announcing-1.23/)
+- With `spec.trafficDistribution` (k8s >= beta [1.31.0](https://kubernetes.io/docs/concepts/services-networking/service/), istio >= [1.23.1](https://istio.io/latest/news/releases/1.23.x/announcing-1.23/))

 ```yaml
 spec:
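
For reference, a complete Service manifest using this mode might look like the following sketch; the `helloworld` name, selector, and port are assumptions borrowed from the test setup later in this document:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: helloworld
  ports:
    - name: http
      port: 5000
      targetPort: 5000
  # PreferClose favors topologically closer endpoints, falling back
  # to farther localities when no closer endpoint is available
  trafficDistribution: PreferClose
EOF
```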
@@ -39,9 +39,9 @@ A failover mode that uses NETWORK, REGION, ZONE, and SUBZONE as the routingPreference.

 ### 2. Local

-A strict mode that only matches the current NODE.
+Strict mode that restricts traffic to the current node.

-- spec.internalTrafficPolicy: Local (k8s >= beta 1.24 or >= 1.26)
+- Set `spec.internalTrafficPolicy: Local` (k8s >= beta 1.24 or >= 1.26)

 ```yaml
 spec:
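
Likewise, a sketch of a strictly node-local Service, under the same assumed `helloworld` name and port:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: helloworld
  ports:
    - name: http
      port: 5000
      targetPort: 5000
  # Local routes only to endpoints on the client's node;
  # traffic is dropped if no node-local endpoint exists
  internalTrafficPolicy: Local
EOF
```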
@@ -52,14 +52,14 @@ A strict mode that only matches the current NODE.

 ### Prepare the environment

-- Refer to [develop with kind](/docs/setup/develop-with-kind.md)
-- We prepare three nodes in the cluster
+- Refer to [develop with kind](/docs/setup/develop-with-kind.md).
+- A three-node kind cluster is required.
 - istio >= 1.23.1
 - k8s >= 1.31.0
 - Ensure sidecar injection is disabled: `kubectl label namespace default istio-injection-`
 - Required images:
-  - docker.io/istio/examples-helloworld-v1
-  - curlimages/curl
+  - `docker.io/istio/examples-helloworld-v1`
+  - `curlimages/curl`

 ```bash
 kind create cluster --image=kindest/node:v1.31.0 --config=- <<EOF
@@ -74,7 +74,7 @@ nodes:
 EOF
 ```
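
With the cluster created, a quick readiness check before continuing (plain kubectl, nothing Kmesh-specific):

```bash
# All three nodes should report Ready
kubectl get nodes -o wide
```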

-### 1. Assign locality information to the node
+### 1. Assign locality information to nodes

 ```bash
 kubectl label node ambient-worker topology.kubernetes.io/region=region
@@ -96,15 +96,15 @@ kubectl label node ambient-worker3 topology.kubernetes.io/subzone=subzone3
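
To confirm the topology labels were applied as intended, one way to inspect them (sketch):

```bash
# Print each node with its region/zone/subzone label columns
kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone -L topology.kubernetes.io/subzone
```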

 ### 2. Start test servers

-- Create `sample` namespace
+- Create the `sample` namespace:

 ```bash
 kubectl create namespace sample
 ```

-- Run a service
+- Create the service:

-```yaml
+```bash
 kubectl apply -n sample -f - <<EOF
 apiVersion: v1
 kind: Service
@@ -123,7 +123,7 @@ kubectl label node ambient-worker3 topology.kubernetes.io/subzone=subzone3
 EOF
 ```
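
A quick check that the Service exists before deploying workloads (sketch):

```bash
kubectl get svc -n sample helloworld
```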

-- Start a service instance on the ambient-worker
+- Start a service instance on `ambient-worker`:

 ```bash
 kubectl apply -n sample -f - <<EOF
@@ -243,11 +243,11 @@ kubectl label node ambient-worker3 topology.kubernetes.io/subzone=subzone3
 EOF
 ```
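
It helps to verify that each helloworld pod landed on its intended node before testing (sketch):

```bash
# The NODE column shows where each helloworld instance is running
kubectl get pods -n sample -o wide
```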

-### 3. Test on client
+### 3. Test from a client pod

-- Start the test client on the ambient-worker
+- Start the test client on `ambient-worker`:

-```yaml
+```bash
 kubectl apply -n sample -f - <<EOF
 apiVersion: apps/v1
 kind: Deployment
@@ -282,43 +282,43 @@ kubectl label node ambient-worker3 topology.kubernetes.io/subzone=subzone3
 EOF
 ```

-- Test the access
+- Verify access from the client:

 ```bash
 kubectl exec -n sample "$(kubectl get pod -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -c sleep -- curl -sSL "http://helloworld:5000/hello"
 ```

-The output is from the helloworld-region.zone1.subzone1 that is currently co-located on the ambient-worker:
+The response should come from the local instance running on `ambient-worker`, for example:

 ```text
 Hello version: region.zone1.subzone1, instance: helloworld-region.zone1.subzone1-6d6fdfd856-9dhv8
 ```

-- Remove the service on the ambient-worker and test Failover
+- Remove the local deployment to test failover:

 ```bash
 kubectl delete deployment -n sample helloworld-region.zone1.subzone1
 ```

+Re-run the client request:
+
 ```bash
 kubectl exec -n sample "$(kubectl get pod -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -c sleep -- curl -sSL "http://helloworld:5000/hello"
 ```

-The output is helloworld-region.zone1.subzone2, and a failover of the traffic has occurred:
+The response should now come from the next available locality (example):

 ```text
 Hello version: region.zone1.subzone2, instance: helloworld-region.zone1.subzone2-948c95bdb-7p6zb
 ```

-- Relabel the locality of the ambient-worker3 same as the worker2 and test
+- Relabel `ambient-worker3` to match `ambient-worker2` and redeploy the third instance:

 ```bash
 kubectl label node ambient-worker3 topology.kubernetes.io/zone=zone1 --overwrite
 kubectl label node ambient-worker3 topology.kubernetes.io/subzone=subzone2 --overwrite
 ```
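
The label-column check from step 1 can be reused here to confirm the overwrite took effect (sketch):

```bash
kubectl get node ambient-worker3 -L topology.kubernetes.io/zone -L topology.kubernetes.io/subzone
```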

-Delete helloworld-region.zone2.subzone3 and re-apply the development pod as follows, then run test:
-
 ```bash
 kubectl delete deployment -n sample helloworld-region.zone2.subzone3
@@ -359,18 +359,15 @@ kubectl label node ambient-worker3 topology.kubernetes.io/subzone=subzone3
 EOF
 ```

-Test multiple times:
+Test multiple times from the client:

 ```bash
 kubectl exec -n sample "$(kubectl get pod -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -c sleep -- curl -sSL "http://helloworld:5000/hello"
 ```
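
A small loop makes the repeated requests less tedious (a sketch using the same sleep client):

```bash
# Send five requests in a row and observe which instance answers each one
for i in $(seq 1 5); do
  kubectl exec -n sample "$(kubectl get pod -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -c sleep -- curl -sSL "http://helloworld:5000/hello"
done
```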

-The output randomly shows helloworld-region.zone1.subzone2 and helloworld-region.zone1.subzone2-worker3:
+Responses will alternate between the two instances in the same locality, for example:

 ```text
 Hello version: region.zone1.subzone2-worker3, instance: helloworld-region.zone1.subzone2-worker3-6d6fdfd856-6kd2s
 Hello version: region.zone1.subzone2, instance: helloworld-region.zone1.subzone2-948c95bdb-7p6zb
-Hello version: region.zone1.subzone2, instance: helloworld-region.zone1.subzone2-948c95bdb-7p6zb
-Hello version: region.zone1.subzone2-worker3, instance: helloworld-region.zone1.subzone2-worker3-6d6fdfd856-6kd2s
-Hello version: region.zone1.subzone2, instance: helloworld-region.zone1.subzone2-948c95bdb-7p6zb
 ```
