Here we will see how Kubernetes keeps a site highly available after a failed deployment.
These steps are to be executed from your local machine!
$ cd /[LOCATION YOU CLONED THIS REPO]/GKE-hands-on-training
$ kubectl apply -f examples/healthcheck/service.yaml
$ kubectl apply -f examples/healthcheck/healthy-deployment.yaml
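Before moving on, it helps to know roughly what a manifest like healthy-deployment.yaml contains. The sketch below is an assumption, not a copy of the file: the name probes-demo is inferred from the pod names later in this section, and the image tag and probe path are hypothetical.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: probes-demo                  # assumed: matches the pod name prefix seen below
spec:
  replicas: 3
  selector:
    matchLabels:
      app: probes-demo
  template:
    metadata:
      labels:
        app: probes-demo
    spec:
      containers:
      - name: site
        image: example/site:1.0      # hypothetical image for "version 1.0"
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /                  # assumed probe path
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10

The livenessProbe is the part this demo exercises: the kubelet polls the HTTP endpoint, and a failing response marks the container as unhealthy.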
Execute the following command:
$ kubectl get pods -o wide
You should now see:

NAME                           READY     STATUS    RESTARTS   AGE       IP               NODE
probes-demo-1216114202-fkbjn   0/1       Running   0          13s       172.16.235.211   worker1
probes-demo-1216114202-jl08v   0/1       Running   0          13s       172.16.235.212   worker1
probes-demo-1216114202-wv5jx   0/1       Running   0          13s       172.16.235.210   worker1
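The READY column reads 0/1 at first, presumably because the containers have started but their health checks haven't passed yet; after the initial probe delay it should flip to 1/1. If you prefer to watch this happen from the command line rather than the dashboard, kubectl's standard watch flag works:

$ kubectl get pods -o wide -w

Press Ctrl+C to stop watching.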
If you closed the Kubernetes Dashboard, follow the instructions here to reopen it.
$ kubectl get services k8s-workshop-site-dev
You should now see:

NAME                    CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
k8s-workshop-site-dev   172.17.149.128   104.196.252.72   80:32233/TCP   13s
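You can also check the site from the terminal. The jsonpath query below is standard kubectl; substitute the service name from your own output if it differs:

$ EXTERNAL_IP=$(kubectl get service k8s-workshop-site-dev -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://$EXTERNAL_IP/

You should see "version 1.0" in the response.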
$ kubectl apply -f examples/healthcheck/broken-deployment.yaml
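We won't reproduce broken-deployment.yaml here, but the key difference from the healthy manifest is typically just the container spec: a new image tag and a liveness probe the new version cannot satisfy. A hypothetical fragment (the tag and path are assumptions) might look like:

        image: example/site:1.1      # hypothetical "version 1.1" image
        livenessProbe:
          httpGet:
            path: /                  # assumed: returns 404 in version 1.1
            port: 80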
Now refresh the pods view in the Kubernetes Dashboard a few times.
You should start to see the warning message "Liveness probe failed: HTTP probe failed with statuscode: 404".
You will see that some of the pods have failed to start.
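The same warnings are visible without the dashboard. kubectl describe is standard; substitute one of the failing pod names from kubectl get pods:

$ kubectl describe pod [ONE OF THE FAILING POD NAMES]

The probe failures appear in the Events section at the bottom of the output.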
You should still see "version 1.0" displayed on the webpage.
Kubernetes health checks have failed on the new pods, so version 1.1 of the website is considered unhealthy and no traffic is sent to it. Kubernetes prevents the rolling update from completing until the new pods are known to be healthy.
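You can confirm that the rollout is stuck with the standard rollout commands. The deployment name probes-demo is inferred from the pod names above; adjust it if yours differs:

$ kubectl rollout status deployment/probes-demo

This will sit waiting for the new replicas to become available. If you didn't want to wait, kubectl rollout undo deployment/probes-demo would roll the deployment back to version 1.0.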
Finally, execute the following commands to tidy up the demo:
$ kubectl delete -f examples/healthcheck/service.yaml
$ kubectl delete -f examples/healthcheck/broken-deployment.yaml