:docs: add locality load balancing example (envoyproxy#19022)
Signed-off-by: Allen Leigh <[email protected]>
allenlsy authored Dec 10, 2021
1 parent c68cf0d commit 05bdb1d
Showing 9 changed files with 484 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/root/start/sandboxes/index.rst
@@ -60,6 +60,7 @@ The following sandboxes are available:
jaeger_native_tracing
jaeger_tracing
load_reporting_service
locality_load_balancing
lua
mysql
postgres
157 changes: 157 additions & 0 deletions docs/root/start/sandboxes/locality_load_balancing.rst
@@ -0,0 +1,157 @@
.. _install_sandboxes_locality_load_balancing:

Locality Weighted Load Balancing
================================

.. sidebar:: Requirements

   .. include:: _include/docker-env-setup-link.rst

   :ref:`curl <start_sandboxes_setup_curl>`
      Used to make ``HTTP`` requests.

This example demonstrates the :ref:`locality weighted load balancing <arch_overview_load_balancing_locality_weighted_lb>` feature in Envoy proxy. The demo simulates a scenario in which a backend service is deployed across two zones in a local region and two zones in a remote region.

The components used in this demo are as follows:

- A client container, which runs the Envoy proxy.
- A backend container in the same region as the client, with priority set to 0, referred to as ``local-1``.
- A backend container in the same region as the client, with priority set to 1, referred to as ``local-2``.
- A backend container in the remote region, with priority set to 1, referred to as ``remote-1``.
- A backend container in the remote region, with priority set to 2, referred to as ``remote-2``.

The client Envoy proxy configures all four backend containers in the same Envoy cluster, so that Envoy handles load balancing across those backend servers. This gives us localities with three different priorities:

- priority 0: ``local-1``
- priority 1: ``local-2`` and ``remote-1``
- priority 2: ``remote-2``

In Envoy, when the health of a given priority level drops below a threshold (71% by default), the next priority level starts to share the request load. The demo below shows this behavior.
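
The 71% figure comes from Envoy's default overprovisioning factor of 1.4 (1 / 1.4 ≈ 71%). As a rough mental model (a simplified sketch of the documented behavior, not Envoy's actual implementation), each priority level keeps ``min(healthy fraction × 1.4, 100%)`` of the traffic, and whatever is left spills over to the next priority level:

.. code-block:: python

    def priority_loads(healths, factor=1.4):
        """Approximate how traffic is split across priority levels.

        ``healths`` lists the healthy fraction of each priority level in
        priority order, e.g. [1.0, 0.5, 1.0]. This is a simplified model of
        the documented spillover behavior, not Envoy's implementation.
        """
        loads = []
        remaining = 1.0
        for health in healths:
            # A level keeps at most 1.4x its healthy fraction of the traffic,
            # capped by whatever has not been assigned to higher priorities.
            load = min(health * factor, 1.0, remaining)
            loads.append(load)
            remaining -= load
        if remaining > 0 and sum(loads) > 0:
            # If no level is fully available, scale up proportionally so that
            # all traffic is still routed somewhere.
            total = sum(loads)
            loads = [load / total for load in loads]
        return loads

    # Example: priority 0 is 50% healthy, priority 1 fully healthy.
    # Priority 0 keeps min(0.5 * 1.4, 1.0) = 70% of the traffic.
    print([round(load, 2) for load in priority_loads([0.5, 1.0])])  # [0.7, 0.3]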

Step 1: Start all of our containers
***********************************

In a terminal, change to the ``examples/locality-load-balancing`` directory.

To build this sandbox example and start the example services, run the following commands:

.. code-block:: console

    # Start demo
    $ docker-compose up --build -d

The locality configuration is set in the client container via a static Envoy configuration file. Please refer to the ``clusters`` section of the :download:`proxy configuration <_include/locality-load-balancing/envoy-proxy.yaml>` file.

Step 2: Scenario with one replica in the highest priority locality
******************************************************************

In this scenario, each locality has 1 healthy replica running, so all requests should be sent to the locality with the highest priority (i.e. the lowest priority value, ``0``), which is ``local-1``.

.. code-block:: console

    # all requests to local-1
    $ docker-compose exec -T client-envoy python3 client.py http://localhost:3000/ 100
    Hello from backend-local-1!: 100, 100.0%
    Failed: 0

If locality ``local-1`` becomes unhealthy (i.e. fails the Envoy health check), requests should be load balanced among the localities at the next priority, ``local-2`` and ``remote-1``, which both have priority 1. We then send 100 requests to the backend cluster and check the responders.

.. code-block:: console

    # bring down local-1
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-1_1:8000/unhealthy
    [backend-local-1] Set to unhealthy

    # local-2 and remote-1 localities split the traffic 50:50
    $ docker-compose exec -T client-envoy python3 client.py http://localhost:3000/ 100
    Hello from backend-remote-1!: 51, 51.0%
    Hello from backend-local-2!: 49, 49.0%
    Failed: 0

Now, if ``local-2`` also becomes unhealthy, the priority 1 level is only 50% healthy, so the priority 2 locality starts to share the request load. Requests are then sent to both ``remote-1`` and ``remote-2``.

.. code-block:: console

    # bring down local-2
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-2_1:8000/unhealthy

    # remote-1 and remote-2 now share the traffic
    $ docker-compose exec -T client-envoy python3 client.py http://localhost:3000/ 100
    Hello from backend-remote-1!: actual weight 69.0%
    Hello from backend-remote-2!: actual weight 31.0%
    Failed: 0
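
The roughly 70/30 split follows from the overprovisioning rule of thumb sketched earlier: priority 1 is only 50% healthy, so it keeps about 50% × 1.4 = 70% of the traffic and priority 2 receives the rest. A quick back-of-the-envelope check (an estimate under the simplified model above, not an exact prediction):

.. code-block:: python

    # priority 1 (local-2 + remote-1) is 50% healthy after local-2 goes down
    p1 = min(0.5 * 1.4, 1.0)           # ~0.70 of the traffic stays on remote-1
    p2 = 1.0 - p1                      # ~0.30 spills over to remote-2
    print(round(p1, 2), round(p2, 2))  # close to the observed 69% / 31% split
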
Step 3: Recover servers
***********************

Before moving on, we need to bring ``local-1`` and ``local-2`` back to healthy first.

.. code-block:: console

    # recover local-1 and local-2 after the demo
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-1_1:8000/healthy
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-2_1:8000/healthy

Step 4: Scenario with multiple replicas in the highest priority locality
************************************************************************

To demonstrate how locality-based load balancing works in a multi-replica setup, let's now scale the ``local-1`` locality up to 5 replicas.

.. code-block:: console

    $ docker-compose up --scale backend-local-1=5 -d

We are going to simulate a scenario in which ``local-1`` is only partially healthy, so let's bring down 4 of the 5 replicas in ``local-1``.

.. code-block:: console

    # bring down local-1 replicas
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-1_2:8000/unhealthy
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-1_3:8000/unhealthy
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-1_4:8000/unhealthy
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-1_5:8000/unhealthy

Then we check the endpoints again:

.. code-block:: console

    # check healthiness
    $ docker-compose exec -T client-envoy curl -s localhost:8001/clusters | grep health_flags
    backend::172.28.0.4:8000::health_flags::/failed_active_hc
    backend::172.28.0.2:8000::health_flags::/failed_active_hc
    backend::172.28.0.5:8000::health_flags::/failed_active_hc
    backend::172.28.0.6:8000::health_flags::/failed_active_hc
    backend::172.28.0.7:8000::health_flags::healthy
    backend::172.28.0.8:8000::health_flags::healthy
    backend::172.28.0.3:8000::health_flags::healthy

This confirms that 4 of the backend endpoints are now unhealthy.

Now we send 100 requests again.

.. code-block:: console

    # watch traffic change
    $ docker-compose exec -T client-envoy python3 client.py http://localhost:3000/ 100
    Hello from backend-remote-1!: actual weight 37.0%
    Hello from backend-local-2!: actual weight 36.0%
    Hello from backend-local-1!: actual weight 27.0%
    Failed: 0

Because ``local-1`` no longer has enough healthy endpoints, part of the request load is shared by the lower priority localities.
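
This also matches the rule of thumb: ``local-1`` (priority 0) has only 1 of 5 replicas healthy, so it keeps about 20% × 1.4 = 28% of the traffic, and the remaining ~72% is split evenly between ``local-2`` and ``remote-1`` at priority 1 (again an estimate under the simplified model, not an exact prediction):

.. code-block:: python

    # local-1 (priority 0) has 1 of 5 replicas healthy
    p0 = min(0.2 * 1.4, 1.0)   # ~0.28 of the traffic stays on local-1
    p1 = 1.0 - p0              # ~0.72 spills over to priority 1
    print(round(p0, 2), round(p1 / 2, 2), round(p1 / 2, 2))  # close to the observed 27% / 36% / 37%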

If we now bring down all the servers in the priority 1 localities as well, priority 1 becomes 0% healthy and the traffic splits between the priority 0 and priority 2 localities (priority 0 still keeps only about 20% × 1.4 ≈ 28% of the traffic, so priority 2 receives the rest).

.. code-block:: console

    # bring down local-2 and remote-1
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-local-2_1:8000/unhealthy
    $ docker-compose exec -T client-envoy curl -s locality-load-balancing_backend-remote-1_1:8000/unhealthy

    $ docker-compose exec -T client-envoy python3 client.py http://localhost:3000/ 100
    Hello from backend-remote-2!: actual weight 77.0%
    Hello from backend-local-1!: actual weight 23.0%
    Failed: 0
11 changes: 11 additions & 0 deletions examples/locality-load-balancing/Dockerfile-client
@@ -0,0 +1,11 @@
FROM envoyproxy/envoy-dev:latest
RUN apt-get update && apt-get install -y bash curl python3

COPY ./envoy-proxy.yaml /etc/envoy.yaml
COPY ./client.py /client.py

RUN chmod go+r /etc/envoy.yaml

EXPOSE 8001

CMD ["/usr/local/bin/envoy", "-c", "/etc/envoy.yaml", "--service-node", "${HOSTNAME}", "--service-cluster", "client"]
8 changes: 8 additions & 0 deletions examples/locality-load-balancing/Dockerfile-server
@@ -0,0 +1,8 @@
FROM alpine:latest

RUN apk update && apk add py3-pip
RUN pip3 install -q Flask==0.11.1
RUN mkdir /code
COPY ./service.py /code

CMD ["python3", "/code/service.py"]
20 changes: 20 additions & 0 deletions examples/locality-load-balancing/client.py
@@ -0,0 +1,20 @@
import sys
import urllib.request
from collections import Counter

url, n_requests = sys.argv[1], int(sys.argv[2])

count = Counter()
count_fail = 0

# Send n_requests GET requests to the proxy and count which backend responded.
for i in range(n_requests):
    try:
        with urllib.request.urlopen(url) as resp:
            content = resp.read().decode("utf-8").strip()
            count[content] += 1
    except Exception:
        count_fail += 1

for k in count:
    print(f"{k}: actual weight {count[k] / n_requests * 100}%")
print(f"Failed: {count_fail}")
50 changes: 50 additions & 0 deletions examples/locality-load-balancing/docker-compose.yaml
@@ -0,0 +1,50 @@
version: "3.7"
services:
  client-envoy:
    build:
      context: .
      dockerfile: Dockerfile-client
    ports:
    - 8001:8001
    networks:
    - envoymesh
    depends_on:
    - "backend-local-1"
    - "backend-local-2"
    - "backend-remote-1"
    - "backend-remote-2"
  backend-local-1:
    build:
      context: .
      dockerfile: Dockerfile-server
    environment:
    - HOST=backend-local-1
    networks:
    - envoymesh
  backend-local-2:
    build:
      context: .
      dockerfile: Dockerfile-server
    environment:
    - HOST=backend-local-2
    networks:
    - envoymesh
  backend-remote-1:
    build:
      context: .
      dockerfile: Dockerfile-server
    environment:
    - HOST=backend-remote-1
    networks:
    - envoymesh
  backend-remote-2:
    build:
      context: .
      dockerfile: Dockerfile-server
    environment:
    - HOST=backend-remote-2
    networks:
    - envoymesh

networks:
  envoymesh: {}
107 changes: 107 additions & 0 deletions examples/locality-load-balancing/envoy-proxy.yaml
@@ -0,0 +1,107 @@
node:
  cluster: test-cluster
  id: test-id
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
static_resources:
  listeners:
  - name: backend
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 3000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: backend
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: backend
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    health_checks:
    - interval: 2s
      timeout: 3s
      no_traffic_interval: 4s
      no_traffic_healthy_interval: 4s
      unhealthy_threshold: 1
      healthy_threshold: 1
      http_health_check:
        path: "/"
    load_assignment:
      cluster_name: backend
      endpoints:
      - locality:
          region: local
          zone: zone-1
        load_balancing_weight: 1
        priority: 0  # highest
        lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: backend-local-1
                port_value: 8000
            health_check_config:
              port_value: 8000
            hostname: backend-local-1
      - locality:
          region: local
          zone: zone-2
        load_balancing_weight: 1
        priority: 1
        lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: backend-local-2
                port_value: 8000
            health_check_config:
              port_value: 8000
            hostname: backend-local-2
      - locality:
          region: remote
          zone: zone-1
        load_balancing_weight: 1
        priority: 1
        lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: backend-remote-1
                port_value: 8000
            health_check_config:
              port_value: 8000
            hostname: backend-remote-1
      - locality:
          region: remote
          zone: zone-2
        load_balancing_weight: 1
        priority: 2
        lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: backend-remote-2
                port_value: 8000
            health_check_config:
              port_value: 8000
            hostname: backend-remote-2
32 changes: 32 additions & 0 deletions examples/locality-load-balancing/service.py
@@ -0,0 +1,32 @@
from flask import Flask
import os

app = Flask(__name__)
healthy = True


@app.route('/')
def hello():
    global healthy
    if healthy:
        return f"Hello from {os.environ['HOST']}!\n"
    else:
        return "Unhealthy", 503


# Named set_healthy/set_unhealthy to avoid shadowing the ``healthy`` flag.
@app.route('/healthy')
def set_healthy():
    global healthy
    healthy = True
    return f"[{os.environ['HOST']}] Set to healthy\n", 201


@app.route('/unhealthy')
def set_unhealthy():
    global healthy
    healthy = False
    return f"[{os.environ['HOST']}] Set to unhealthy\n", 201


if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8000, debug=False)