A project to understand the internal workings of Kubernetes by building a cluster from scratch using kubeadm, with infrastructure provisioned by Terraform and configuration managed by Ansible.
Prerequisites:
- AWS Account
- AWS CLI configured
- Terraform installed (v1.0.0+)
- Ansible installed (v2.9+)
- kubectl installed
- Domain registered in Route53
- SSH key pair named "testing" in AWS
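Before provisioning, it can help to confirm the CLI tools above are actually on PATH. A minimal sketch (the tool list is taken from the prerequisites above; adjust it to your setup):

```python
import shutil

# Tools this project expects on PATH (list based on the prerequisites above)
REQUIRED_TOOLS = ["aws", "terraform", "ansible", "kubectl"]

def missing_tools(tools):
    """Return the subset of tools that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools(REQUIRED_TOOLS)
    if missing:
        print("Missing prerequisites:", ", ".join(missing))
    else:
        print("All prerequisites found.")
```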
Architecture:
- VPC with public subnets
- 1 Master Node (t2.medium)
- 2 Worker Nodes (t2.small)
- CRI-O as container runtime
- Calico for networking
- Route53 A record for accessing NodePort services
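One constraint worth checking up front: the VPC CIDR (10.0.0.0/16), the Calico pod CIDR (192.168.0.0/16), and the service CIDR (10.96.0.0/12) must not overlap, or in-cluster routing breaks. A quick sketch using the values from this project's configuration:

```python
from ipaddress import ip_network
from itertools import combinations

# CIDRs used by this cluster (from the Terraform and kubeadm configuration)
cidrs = {
    "vpc": ip_network("10.0.0.0/16"),
    "pods": ip_network("192.168.0.0/16"),
    "services": ip_network("10.96.0.0/12"),
}

def overlapping(networks):
    """Return name pairs of networks that overlap."""
    return [
        (a, b)
        for (a, na), (b, nb) in combinations(networks.items(), 2)
        if na.overlaps(nb)
    ]

if __name__ == "__main__":
    clashes = overlapping(cidrs)
    print("No CIDR overlaps" if not clashes else f"Overlapping ranges: {clashes}")
```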
Setup:
- Clone the repository:

```bash
git clone https://github.com/yourusername/kubernetes-cluster.git
cd kubernetes-cluster
```

- Create terraform.tfvars:

```hcl
aws_region           = "us-east-1"
environment          = "production"
ubuntu_ami           = "ami-0e86e20dae9224db8"
vpc_cidr             = "10.0.0.0/16"
public_subnet_cidrs  = ["10.0.3.0/24", "10.0.4.0/24"]
availability_zones   = ["us-east-1a", "us-east-1b"]
key_name             = "testing"
master_instance_type = "t2.medium"
worker_instance_type = "t2.small"
worker_count         = 2
cluster_name         = "production-k8s"
dns_domain           = "cluster.local"
```

- Initialize and apply Terraform:

```bash
cd terraform
terraform init
terraform apply
```

This creates:
- VPC with 2 public subnets
- Security groups for Kubernetes ports
- EC2 instances
- Route53 A record (k8.yourdomain.com)
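Once the instances exist, Ansible needs their IPs. A sketch of rendering a minimal INI inventory from the structure `terraform output -json` emits — note the output names `master_ip` and `worker_ips` are hypothetical and must match your outputs.tf:

```python
def build_inventory(tf_outputs: dict) -> str:
    """Render a minimal INI-style inventory from Terraform's `output -json` dict.

    `master_ip` (string) and `worker_ips` (list) are assumed output names.
    """
    master = tf_outputs["master_ip"]["value"]
    workers = tf_outputs["worker_ips"]["value"]
    return "\n".join(["[master]", master, "", "[workers]", *workers]) + "\n"

if __name__ == "__main__":
    # Stand-in for parsing the real file: terraform output -json > outputs.json
    sample = {
        "master_ip": {"value": "10.0.3.10"},
        "worker_ips": {"value": ["10.0.3.11", "10.0.4.11"]},
    }
    print(build_inventory(sample))
```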
Roles:
- common: Base setup for all nodes
- master: Control plane setup
- worker: Node joining
- utils: Metrics server and node labeling
Key Scripts:
- k8-setup.sh: Run on all nodes
- master-setup.sh: Run on master only
Verification:
- Check nodes:

```bash
kubectl get nodes
```

- Check DNS setup:

```bash
dig k8.yourdomain.com
```

- Deploy test service:

```bash
kubectl create deploy nginx --image=nginx
kubectl expose deploy nginx --type=NodePort --port=80
```

Project structure:

```
kubernetes-cluster/
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── terraform.tfvars
│   └── modules/
│       ├── vpc/
│       ├── security/
│       └── instances/
└── ansible/
    ├── site.yml
    └── roles/
        ├── common/
        │   └── files/
        │       └── k8-setup.sh
        ├── master/
        │   └── files/
        │       └── master-setup.sh
        └── utils/
```
Configuration:
- CRI-O: Latest stable version
- Kubernetes: v1.31
- Calico: v3.26.0
- Pod CIDR: 192.168.0.0/16
- Service CIDR: 10.96.0.0/12
- NodePort range: 30000-32767
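Because services are exposed via NodePort behind the Route53 record, the reachable URL is just the domain plus a port in the range above. A small sketch (the domain is a placeholder, as elsewhere in this README):

```python
NODEPORT_RANGE = range(30000, 32768)  # Kubernetes default NodePort range

def nodeport_url(domain: str, port: int, scheme: str = "http") -> str:
    """Build the URL for a NodePort service exposed on every node."""
    if port not in NODEPORT_RANGE:
        raise ValueError(f"{port} is outside the NodePort range 30000-32767")
    return f"{scheme}://{domain}:{port}"

if __name__ == "__main__":
    print(nodeport_url("k8.yourdomain.com", 30080))
```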
Troubleshooting:

If pods can't resolve services:

```bash
# Check CoreDNS
kubectl get pods -n kube-system -l k8s-app=kube-dns
```

If workers can't join:

```bash
# On master, generate a fresh join command
kubeadm token create --print-join-command
```

Cleanup:

```bash
terraform destroy
```

Security notes:
- Nodes are in public subnets for demo purposes
- Security groups restrict access to necessary ports
- Consider bastion host for production setups
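The join command printed in the troubleshooting step above embeds the API endpoint, a bootstrap token, and the CA cert hash. A sketch of pulling those fields out for scripting (the regex assumes kubeadm's usual single-line format):

```python
import re

# Named groups for the three fields kubeadm prints in its join command
JOIN_RE = re.compile(
    r"kubeadm join (?P<endpoint>\S+) --token (?P<token>\S+)\s+"
    r"--discovery-token-ca-cert-hash (?P<hash>\S+)"
)

def parse_join_command(cmd: str) -> dict:
    """Extract endpoint, token, and CA cert hash from a kubeadm join command."""
    m = JOIN_RE.search(" ".join(cmd.split()))  # normalize whitespace/line continuations
    if not m:
        raise ValueError("not a recognizable kubeadm join command")
    return m.groupdict()

if __name__ == "__main__":
    example = ("kubeadm join 10.0.3.10:6443 --token abcdef.0123456789abcdef "
               "--discovery-token-ca-cert-hash sha256:deadbeef")
    print(parse_join_command(example))
```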
Contributing:
- Fork the repository
- Create a feature branch
- Submit a pull request

Support:
Open an issue if you need help!