Welcome to this tutorial, where we will take you step by step through creating an Azure Kubernetes Service (AKS) web application secured via HTTPS. This tutorial assumes you are already logged in to the Azure CLI and have selected a subscription to use with the CLI. It also assumes that you have Helm installed (instructions can be found here).
The first step in this tutorial is to define environment variables.
export RANDOM_ID="$(openssl rand -hex 3)"
export NETWORK_PREFIX="$(($RANDOM % 254 + 1))"
export SSL_EMAIL_ADDRESS="$(az account show --query user.name --output tsv)"
export MY_RESOURCE_GROUP_NAME="myAKSResourceGroup$RANDOM_ID"
export REGION="eastus"
export MY_AKS_CLUSTER_NAME="myAKSCluster$RANDOM_ID"
export MY_PUBLIC_IP_NAME="myPublicIP$RANDOM_ID"
export MY_DNS_LABEL="mydnslabel$RANDOM_ID"
export MY_VNET_NAME="myVNet$RANDOM_ID"
export MY_VNET_PREFIX="10.$NETWORK_PREFIX.0.0/16"
export MY_SN_NAME="mySN$RANDOM_ID"
export MY_SN_PREFIX="10.$NETWORK_PREFIX.0.0/22"
export FQDN="${MY_DNS_LABEL}.${REGION}.cloudapp.azure.com"
A resource group is a container for related resources. All resources must be placed in a resource group. We will create one for this tutorial. The following command creates a resource group with the previously defined $MY_RESOURCE_GROUP_NAME and $REGION parameters.
az group create --name $MY_RESOURCE_GROUP_NAME --location $REGION
Results:
{
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myAKSResourceGroupxxxxxx",
"location": "eastus",
"managedBy": null,
"name": "testResourceGroup",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null,
"type": "Microsoft.Resources/resourceGroups"
}
A virtual network is the fundamental building block for private networks in Azure. Azure Virtual Network enables Azure resources like VMs to securely communicate with each other and the internet.
az network vnet create \
--resource-group $MY_RESOURCE_GROUP_NAME \
--location $REGION \
--name $MY_VNET_NAME \
--address-prefix $MY_VNET_PREFIX \
--subnet-name $MY_SN_NAME \
--subnet-prefixes $MY_SN_PREFIX
Results:
{
"newVNet": {
"addressSpace": {
"addressPrefixes": [
"10.xxx.0.0/16"
]
},
"enableDdosProtection": false,
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/myAKSResourceGroupxxxxxx/providers/Microsoft.Network/virtualNetworks/myVNetxxx",
"location": "eastus",
"name": "myVNetxxx",
"provisioningState": "Succeeded",
"resourceGroup": "myAKSResourceGroupxxxxxx",
"subnets": [
{
"addressPrefix": "10.xxx.0.0/22",
"delegations": [],
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/myAKSResourceGroupxxxxxx/providers/Microsoft.Network/virtualNetworks/myVNetxxx/subnets/mySNxxx",
"name": "mySNxxx",
"privateEndpointNetworkPolicies": "Disabled",
"privateLinkServiceNetworkPolicies": "Enabled",
"provisioningState": "Succeeded",
"resourceGroup": "myAKSResourceGroupxxxxxx",
"type": "Microsoft.Network/virtualNetworks/subnets"
}
],
"type": "Microsoft.Network/virtualNetworks",
"virtualNetworkPeerings": []
}
}
Verify that the Microsoft.OperationsManagement and Microsoft.OperationalInsights providers are registered on your subscription. These Azure resource providers are required to support Container insights. To register them, run the following commands:
az provider register --namespace Microsoft.Insights
az provider register --namespace Microsoft.OperationsManagement
az provider register --namespace Microsoft.OperationalInsights
Create an AKS cluster using the az aks create command with the --enable-addons monitoring parameter to enable Container insights. The following example creates an autoscaling, availability-zone-enabled cluster. This will take a few minutes.
export MY_SN_ID=$(az network vnet subnet list --resource-group $MY_RESOURCE_GROUP_NAME --vnet-name $MY_VNET_NAME --query "[0].id" --output tsv)
az aks create \
--resource-group $MY_RESOURCE_GROUP_NAME \
--name $MY_AKS_CLUSTER_NAME \
--auto-upgrade-channel stable \
--enable-cluster-autoscaler \
--enable-addons monitoring \
--location $REGION \
--node-count 1 \
--min-count 1 \
--max-count 3 \
--network-plugin azure \
--network-policy azure \
--vnet-subnet-id $MY_SN_ID \
--no-ssh-key \
--node-vm-size Standard_DS2_v2 \
--zones 1 2 3
To manage a Kubernetes cluster, use the Kubernetes command-line client, kubectl. kubectl is already installed if you use Azure Cloud Shell.
- Install kubectl locally using the az aks install-cli command:
if ! [ -x "$(command -v kubectl)" ]; then az aks install-cli; fi
- Configure kubectl to connect to your Kubernetes cluster using the az aks get-credentials command. The following command:
- Downloads credentials and configures the Kubernetes CLI to use them.
- Uses ~/.kube/config, the default location for the Kubernetes configuration file. Specify a different location for your Kubernetes configuration file using the --file argument.
[!WARNING] This will overwrite any existing credentials with the same entry.
az aks get-credentials --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_AKS_CLUSTER_NAME --overwrite-existing
- Verify the connection to your cluster using the kubectl get command. This command returns a list of the cluster nodes:
kubectl get nodes
export MY_STATIC_IP=$(az network public-ip create \
    --resource-group MC_${MY_RESOURCE_GROUP_NAME}_${MY_AKS_CLUSTER_NAME}_${REGION} \
    --location ${REGION} \
    --name ${MY_PUBLIC_IP_NAME} \
    --dns-name ${MY_DNS_LABEL} \
    --sku Standard \
    --allocation-method static \
    --version IPv4 \
    --zone 1 2 3 \
    --query publicIp.ipAddress \
    --output tsv)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$MY_DNS_LABEL \
--set controller.service.loadBalancerIP=$MY_STATIC_IP \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--wait
A Kubernetes manifest file defines a cluster's desired state, such as which container images to run.
In this quickstart, you will use a manifest to create all objects needed to run the Azure Vote application. This manifest includes two Kubernetes deployments:
- The sample Azure Vote Python application.
- A Redis instance.
Two Kubernetes Services are also created:
- An internal service for the Redis instance.
- An external service to access the Azure Vote application from the internet.
Finally, an Ingress resource is created to route traffic to the Azure Vote application.
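For orientation, an Ingress routing traffic to the front-end service has roughly the following shape. This is an illustrative sketch only; the names used here are assumptions, and the actual resources are defined in azure-vote-start.yml.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: azure-vote-ingress          # illustrative name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: azure-vote-front  # illustrative front-end service name
            port:
              number: 80
```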
A test voting app YAML file is already prepared. To deploy this app, run the following command:
kubectl apply -f azure-vote-start.yml
Validate that the application is running by visiting the public IP or the application URL. The application URL can be found by running the following command:
Note
It often takes 2-3 minutes for the pods to be created and the site to be reachable via HTTP.
runtime="5 minute";
endtime=$(date -ud "$runtime" +%s);
while [[ $(date -u +%s) -le $endtime ]]; do
STATUS=$(kubectl get pods -l app=azure-vote-front -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}'); echo $STATUS;
if [ "$STATUS" == 'True' ]; then
break;
else
sleep 10;
fi;
done
curl "http://$FQDN"
Results:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<link rel="stylesheet" type="text/css" href="/static/default.css">
<title>Azure Voting App</title>
<script language="JavaScript">
function send(form){
}
</script>
</head>
<body>
<div id="container">
<form id="form" name="form" action="/"" method="post"><center>
<div id="logo">Azure Voting App</div>
<div id="space"></div>
<div id="form">
<button name="vote" value="Cats" onclick="send()" class="button button1">Cats</button>
<button name="vote" value="Dogs" onclick="send()" class="button button2">Dogs</button>
<button name="vote" value="reset" onclick="send()" class="button button3">Reset</button>
<div id="space"></div>
<div id="space"></div>
<div id="results"> Cats - 0 | Dogs - 0 </div>
</form>
</div>
</div>
</body>
</html>
At this point in the tutorial you have an AKS web app with NGINX as the Ingress controller and a custom domain you can use to access your application. The next step is to add an SSL certificate to the domain so that users can reach your application securely via HTTPS.
To add HTTPS, we are going to use cert-manager, an open-source tool used to obtain and manage SSL certificates for Kubernetes deployments. Cert-manager obtains certificates from a variety of issuers, both popular public issuers and private issuers, ensures the certificates are valid and up to date, and attempts to renew certificates at a configured time before expiry.
- To install cert-manager, we must first create a namespace to run it in. This tutorial installs cert-manager into the cert-manager namespace. It is possible to run cert-manager in a different namespace, although you will need to make modifications to the deployment manifests.
kubectl create namespace cert-manager
- We can now install cert-manager's CustomResourceDefinitions (CRDs), which are included in a single YAML manifest file. Apply them by running the following:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.0/cert-manager.crds.yaml
- Add the certmanager.k8s.io/disable-validation: "true" label to the cert-manager namespace by running the following. This allows the system resources that cert-manager requires to bootstrap TLS to be created in its own namespace.
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
Helm is a Kubernetes deployment tool for automating creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters.
Cert-manager provides Helm charts as a first-class method of installation on Kubernetes.
- Add the Jetstack Helm repository. This repository is the only supported source of cert-manager charts. There are other mirrors and copies across the internet, but those are entirely unofficial and could present a security risk.
helm repo add jetstack https://charts.jetstack.io
- Update the local Helm chart repository cache:
helm repo update
- Install the cert-manager addon via Helm by running the following:
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.7.0
- Apply the certificate issuer YAML file. ClusterIssuers are Kubernetes resources that represent certificate authorities (CAs) able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer in a ready condition to attempt to honor the request. The issuer we are using can be found in the cluster-issuer-prod.yml file.
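For reference, a production Let's Encrypt ClusterIssuer typically has the following shape. This is an illustrative sketch, not the tutorial's file; the actual definition lives in cluster-issuer-prod.yml, and the $SSL_EMAIL_ADDRESS placeholder is substituted before the manifest is applied.

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod            # illustrative name
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: $SSL_EMAIL_ADDRESS       # placeholder substituted at apply time
    privateKeySecretRef:
      name: letsencrypt-prod        # secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx              # solve challenges via the NGINX ingress
```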
cluster_issuer_variables=$(<cluster-issuer-prod.yml)
echo "${cluster_issuer_variables//\$SSL_EMAIL_ADDRESS/$SSL_EMAIL_ADDRESS}" | kubectl apply -f -
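The command above relies on bash's ${parameter//pattern/string} expansion to replace the literal $SSL_EMAIL_ADDRESS placeholder in the file with the variable's value before piping the result to kubectl. A minimal, self-contained sketch of the same pattern, using a hypothetical template file and no kubectl:

```shell
# Write a tiny template containing a literal placeholder (hypothetical file).
printf 'email: $SSL_EMAIL_ADDRESS\n' > /tmp/issuer-template.yml

# The value we want substituted in.
SSL_EMAIL_ADDRESS="user@example.com"

# Read the template, then replace every literal "$SSL_EMAIL_ADDRESS"
# occurrence with the variable's value via ${var//pattern/replacement}.
# The backslash keeps the shell from expanding the pattern's "$".
template=$(</tmp/issuer-template.yml)
rendered="${template//\$SSL_EMAIL_ADDRESS/$SSL_EMAIL_ADDRESS}"
echo "$rendered"
```

The substitution happens entirely in the shell, so kubectl only ever sees the fully rendered manifest.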
- Update the voting app to use cert-manager to obtain an SSL certificate. The full YAML file can be found in azure-vote-nginx-ssl.yml.
azure_vote_nginx_ssl_variables=$(<azure-vote-nginx-ssl.yml)
echo "${azure_vote_nginx_ssl_variables//\$FQDN/$FQDN}" | kubectl apply -f -
Run the following command to get the HTTPS endpoint for your application:
Note
It often takes 2-3 minutes for the SSL certificate to propagate and the site to be reachable via HTTPS.
runtime="5 minute";
endtime=$(date -ud "$runtime" +%s);
while [[ $(date -u +%s) -le $endtime ]]; do
STATUS=$(kubectl get svc --namespace=ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}');
echo $STATUS;
if [ "$STATUS" == "$MY_STATIC_IP" ]; then
break;
else
sleep 10;
fi;
done
echo "You can now visit your web server at https://$FQDN"