Provide the AWS credentials in the following ways:

- Environment variables: you will need to set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
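For example, a minimal way to export them in the current shell (the key values below are placeholders):

```bash
# Placeholder AWS credentials exported for the current shell session
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```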
Adapt the `config.yaml` file to specify the cluster details. Example config:

```yaml
eks:
  cluster-name: "kubernauts-eks"
  aws_region: "us-west-2"
  node-instance-type: "m4.large"
  desired-capacity: 1
  autoscalling-max-size: 2
  autoscalling-min-size: 1
  key-file-path: "~/.ssh/id_rsa.pub"
```
The following prerequisites need to be in place before provisioning:

- Git
- Terraform
- Ansible
- kubectl
- Python
- pip
- AWS IAM Authenticator
- Existing SSH keypair in AWS
- AWS access and secret keys
- Exported AWS credentials
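As a quick sanity check, the command-line prerequisites can be verified from a shell, for example (a minimal sketch; exact version requirements are not covered here):

```bash
# Confirm the required CLI tools are installed and on the PATH
git --version
terraform version
ansible --version
kubectl version --client
python --version
pip --version
aws-iam-authenticator help   # prints usage if the binary is installed
```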
Once done, run:

```bash
tk8 cluster install eks
```

or with Docker:

```bash
docker run -v <path-to-the-AWS-SSH-key>:/root/.ssh/ -v "$(pwd)":/tk8 -e AWS_ACCESS_KEY_ID=xxx -e AWS_SECRET_ACCESS_KEY=XXX kubernautslabs/tk8 cluster install eks
```
After installation, the kubeconfig will be available at `$(pwd)/inventory/yourWorkspaceOrClusterName/provisioner/kubeconfig`.
Do not delete the `inventory` directory after installation, as the cluster state is saved in it.
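To verify that the new cluster is reachable, the generated kubeconfig can be used directly, for example (replace `yourWorkspaceOrClusterName` with your actual workspace or cluster name):

```bash
# Point kubectl at the kubeconfig generated during installation
export KUBECONFIG="$(pwd)/inventory/yourWorkspaceOrClusterName/provisioner/kubeconfig"

# List the worker nodes to confirm access
kubectl get nodes
```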
Make sure you are in the same directory (the one containing the `inventory` directory) from which you executed `tk8 cluster install eks`.
If you used a different workspace name via the `--name` flag, provide it when destroying as well.
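For example, a sketch assuming a hypothetical workspace name `my-eks` and that the `--name` flag is passed after the subcommand:

```bash
# Install with a custom workspace name (my-eks is a placeholder)
tk8 cluster install eks --name my-eks

# Destroy using the same workspace name later
tk8 cluster destroy eks --name my-eks
```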
To delete the provisioned cluster, run:

```bash
tk8 cluster destroy eks
```

or with Docker:

```bash
docker run -v <path-to-the-AWS-SSH-key>:/root/.ssh/ -v "$(pwd)":/tk8 -e AWS_ACCESS_KEY_ID=xxx -e AWS_SECRET_ACCESS_KEY=XXX kubernautslabs/tk8 cluster destroy eks
```