Provide the AWS credentials in the following way:

- Environment variables. You will need to set `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_DEFAULT_REGION`.
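For example, in a POSIX shell (the key values below are placeholders, not real credentials):

```sh
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_DEFAULT_REGION="us-east-2"
```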
Make sure the following prerequisites are in place (a quick check follows the list):

- Git
- Terraform
- kubectl
- rke-cli
- terraform-provider-rke
- Familiarity with rke configuration
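A quick way to verify the CLI tools are on your `PATH`, using each tool's standard version flag:

```sh
git --version
terraform version
kubectl version --client
rke --version
```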
- Modify the values in `config.yaml.example` under the `rke` key. Supported options as of now:
  - `cluster_name` - name of the cluster
  - `node_os` - operating system for the nodes. Currently `ubuntu`.
  - `rke_aws_region` - AWS region to install the cluster in
  - `authorization` - authorization mode for the cluster. Possible values: `rbac`, `none`. `rbac` is recommended.
  - `rke_node_instance_type` - instance type for the nodes. `t2.micro` is highly discouraged.
  - `node_count` - number of nodes to launch in the cluster
  - `cloud_provider` - cloud provider for the rke cluster. Currently only `aws` is supported.
  Example:

  ```yaml
  rke:
    cluster_name: "rke-tk8"
    node_os: ubuntu
    rke_aws_region: us-east-2
    authorization: "rbac"
    rke_node_instance_type: "t2.medium"
    node_count: 5
    cloud_provider: aws
  ```
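To keep the shipped example file pristine, you can work on a copy; this sketch assumes tk8 reads `config.yaml` from the current directory (check your tk8 setup if unsure):

```sh
# work on a copy of the shipped example, then edit the rke: section
cp config.yaml.example config.yaml
```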
- After the appropriate values are filled in, run:

  ```sh
  tk8 cluster install rke
  ```
  Once the infrastructure and the cluster are set up, the `kubeconfig` and `rancher-cluster.yml` files will be available at:

  - kubeconfig - `inventory/rke-tk8/provisioner/kube_config_cluster.yml`
  - rancher config - `inventory/rke-tk8/provisioner/rancher-cluster.yml`
- Now you can use the same `kubeconfig` file to interact with Kubernetes via kubectl, and `rancher-cluster.yml` to interact with the cluster via the rke CLI; see the sketch after the note below.
  Note: Do not `mv` these files out of this directory, as their paths are recorded in the Terraform state. If required, make a copy instead.
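For example (the paths are the generated ones from above; the copy destination is an arbitrary choice):

```sh
# query the cluster with the generated kubeconfig
kubectl --kubeconfig inventory/rke-tk8/provisioner/kube_config_cluster.yml get nodes

# copy (rather than move) the kubeconfig for day-to-day use
cp inventory/rke-tk8/provisioner/kube_config_cluster.yml ~/.kube/rke-tk8.yml
export KUBECONFIG=~/.kube/rke-tk8.yml
```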
- To remove the rke cluster while keeping the underlying infrastructure, run:

  ```sh
  tk8 cluster remove rke
  ```

  This is equivalent to `rke remove --config rancher-cluster.yml`.
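Based on that equivalence, the manual form would be (using the generated config path from above):

```sh
rke remove --config inventory/rke-tk8/provisioner/rancher-cluster.yml
```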
- To destroy the complete infrastructure, run:

  ```sh
  tk8 cluster destroy rke
  ```
Note: This is just a cluster provisioner; it will not install rancher-2.x on the cluster by itself. Use `tk8 addon install rancher` to install Rancher on the cluster.