Binary file modified img/kube/namespace-selection.jpg
Binary file modified img/swarm/pets-configure.jpg
Binary file modified img/swarm/pets-container-list.jpg
Binary file modified img/swarm/pets-service-link.jpg
12 changes: 5 additions & 7 deletions kube.md
@@ -40,7 +40,7 @@ We will set up 'kubectl' on the **manager1** instance. To do so, run the
following commands in the shell on **manager1**

```
password='typeInPasswordHere'
password='admin1234'

controller="$(curl -sS https://${PWD_HOST_FQDN}/sessions/${SESSION_ID} | jq -r '.instances[] | select(.hostname == "manager1") | .proxy_host').direct.${PWD_HOST_FQDN}"

@@ -60,7 +60,7 @@ cd ..
Next clone the `docker-networking-workshop` repo and navigate to the `kubernetes` directory on the **manager1** node:

```
git clone https://github.com/sixeyed/docker-networking-workshop.git
git clone https://github.com/GuillaumeMorini/docker-networking-workshop.git
cd ./docker-networking-workshop/kubernetes
```

@@ -87,11 +87,9 @@ kubectl apply -f ./manifests/
![ucp kubernetes namespace selection](img/kube/namespace-selection.jpg)
![kubernetes namespace context confirmation](img/kube/context-switch.jpg)
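
Before moving on in the UI, you can sanity-check that the manifests deployed from the same shell on **manager1**. This is a hedged sketch, not part of the original workshop steps - the namespace the app lands in depends on the manifests, so the command simply lists everything:

```
kubectl get pods --all-namespaces
```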

* Finally, find and click on the `managent-ui` **loadbalancer** and a
new web browser tab/window should open showing the real-time application data
flows.

![select the load balancer](img/kube/get-loadbalancer.jpg)
* Finally, click on **Load Balancers** in the left navigation, then on the one named `management-ui`. Scroll down to find the node port (it should be 33002), then open a new browser tab/window pointing at your UCP URL with that node port appended (for example `http://ip172-18-0-6-bc2m2d8qitt0008vqor0.direct.ee-beta2.play-with-docker.com:33002`). This should show the real-time application data flows.

Make sure your browser uses `http` rather than `https`, as the app is served over plain HTTP.
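
If you prefer the CLI, you can also look the node port up with `kubectl` from **manager1**. A minimal sketch, assuming the service is called `management-ui` (the namespace is not assumed, so the command searches all of them):

```
kubectl get svc --all-namespaces | grep management-ui
```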

> If it all worked, you should now see something that looks like the
> image below
35 changes: 20 additions & 15 deletions swarm.md
@@ -379,11 +379,20 @@ Now that you have a Swarm initialized it's time to create an **overlay** network

Create a new overlay network called **overnet** by running `docker network create -d overlay overnet` on **manager1**.

> **NOTE:** Overlay networks can be encrypted by adding `--opt encrypted`, which enables IPsec encryption at the VXLAN level. Use this option with care, as it adds a performance penalty.

```
$ docker network create -d overlay overnet
wlqnvajmmzskn84bqbdi1ytuy
```

If you want to attach standalone containers to this network later, create it with the `--attachable` flag:

```
$ docker network create -d overlay --attachable overnet
wlqnvajmmzskn84bqbdi1ytuy
```
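
Had the network been created with `--attachable`, a standalone container could join it later; a sketch using `alpine` purely as an example image:

```
$ docker run -dit --name overnet-test --network overnet alpine sh
```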

Use the `docker network ls` command to verify the network was created successfully:

```
@@ -399,11 +408,7 @@ Run the `docker network ls` command on **worker1** to show all the overlay networks
```
$ docker network ls --filter driver=overlay
NETWORK ID NAME DRIVER SCOPE
55f10b3fb8ed bridge bridge local
b7b30433a639 docker_gwbridge bridge local
a7449465c379 host host local
8hq1n8nak54x ingress overlay swarm
06c349b9cc77 none null local
```

> The **overnet** network is not in the list. This is because Docker only extends overlay networks to hosts when they are needed. This is usually when a host runs a task from a service that is created on the network. We will see this shortly.
@@ -530,7 +535,7 @@ Execute the following commands from **worker3**, to verify that containers on th
Store the ID of the service task container in a variable:

```
id=$(docker container ls --last 1 --format "{{ .ID }}")
id=$(docker container ls --last 1 --filter name=ubuntu --format "{{ .ID }}")
```
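
To double-check which container the variable captured - a quick sanity check that is not part of the original flow - you can inspect it:

```
$ docker container inspect $id --format '{{ .Name }}'
```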

And now confirm you can ping the container running on **worker3** from the container running on **worker2**:
@@ -548,7 +553,7 @@ The output above shows that all the tasks from the **ubuntu** service are on the

Now that you have a working service using an overlay network, let's test service discovery.

Still on **worker2**, use the container ID you have stored to see how DNS resolution is configured in containers. Run `cat /etc/resolv.conf`:
Still on **worker3**, use the container ID you have stored to see how DNS resolution is configured in containers. Run `cat /etc/resolv.conf`:

```
$ docker container exec $id cat /etc/resolv.conf
Expand Down Expand Up @@ -623,7 +628,7 @@ Towards the bottom of the output you will see the VIP of the service listed. The
Now that you're connected to **manager1** you can repeate the same `ping` command using the container running on the manager - you will get a response form the same VIP:

```
$ id=$(docker container ls --last 1 --format "{{ .ID }}")
id=$(docker container ls --last 1 --filter name=ubuntu --format "{{ .ID }}")

$ docker container exec $id ping -c 2 ubuntu
PING ubuntu (10.0.0.2) 56(84) bytes of data.
Expand All @@ -642,27 +647,27 @@ Create the service with a single replica, using the **manager1** node:
docker service create -p 5000:5000 -d --name pets --replicas=1 nicolaka/pets_web:1.0
```
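
Before switching to the UCP UI, you can confirm the task is running from the same shell (an optional check):

```
docker service ps pets
```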

Browse to UCP and using the left navigation click on _Swarm...Services_. You'll see the **pets** service in the list - click on the service and in the details panel on the right you can see a link to the published endpoint:
Browse to UCP and, using the left navigation, click on _Swarm...Services_. You'll see the **pets** service in the list - click on the service, find the endpoints section, and copy the URL:

[](img/swarm/pets-service-link.jpg)
![](img/swarm/pets-service-link.jpg)

Click the link and the app will open in a new browser tab:
Paste it in a new tab in your browser:

[](img/swarm/pets-1.jpg)
![](img/swarm/pets-1.jpg)

> The domain name you're browsing to is the UCP manager node. The ingress network receives the request and routes it to one of the service tasks - any node in the cluster can respond to the request by internally routing it to a container.
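
You can see the same behaviour from a node's shell; a sketch that simply checks the HTTP status code on the published port (run it on any node, whether or not it hosts a **pets** task):

```
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5000/
```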

Try scaling up the service. In UCP select the **pets** service and click _Configure_:

[](img/swarm/pets-configure.jpg)
![](img/swarm/pets-configure.jpg)

Select the _Scheduling_ section, and run more tasks by setting the _Scale_ level to 10:

[](img/swarm/pets-scale.jpg)
![](img/swarm/pets-scale.jpg)
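
As an aside, the CLI equivalent of this scaling step is a one-liner on **manager1**:

```
docker service scale pets=10
```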

Click save and UCP returns to the service list. The service is scaling up and you can see the container list by clicking on _Inspect Resource...Containers_:
Click _Save_ and UCP returns to the service list. The service scales up, and you can see the container list by clicking on the **pets** service and scrolling to the bottom of the page:

[](img/swarm/pets-container-list.jpg)
![](img/swarm/pets-container-list.jpg)

You'll see containers running on nodes across the cluster. Now refresh the tab with the Pets website. Each time you refresh you'll see a different container ID. Docker Swarm load-balances requests across all the tasks in the service.
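
The same load balancing is visible from the command line; a sketch run on **manager1** (the grep pattern is a guess at the page markup, so adjust it if nothing matches):

```
for i in $(seq 5); do curl -s http://localhost:5000/ | grep -i container; done
```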
