Fancy homelab automation bits.
- Create a script that will create a barebones Ubuntu Server base inside of PVE to base these builds on:
  - Plain Docker base image
  - K3s base image
  - Buildkite base image
- Packer:
  - Automated rebuilds? GitHub Actions? Buildkite?
- K3s:
  - Automated deployments? Flux? ArgoCD?
- Nomad:
  - Automated deployments???
- Use `scripts/create-ubuntu-template.sh` to create a barebones Ubuntu Server template.
  - Previously used: `./create-ubuntu-template.sh -i 9001 -n pve-002 -r jammy -s current -m amd64 -t local -T local-lvm -F`
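For reference, the template-creation flow on Proxmox generally boils down to a handful of `qm` commands. This is a hedged sketch, not the actual contents of `create-ubuntu-template.sh`; the VMID and storage names mirror the flags used above, while the template name and image filename are assumptions:

```sh
#!/usr/bin/env sh
# Sketch of a minimal PVE Ubuntu cloud-image template build.
# Assumes: run on the PVE node itself, the jammy cloud image is already
# downloaded, and storages are named "local" / "local-lvm" as above.
set -eu

VMID=9001
IMG=jammy-server-cloudimg-amd64.img  # hypothetical filename

# Create an empty VM shell with cloud-init and serial console support
qm create "$VMID" --name ubuntu-jammy-template --memory 2048 --net0 virtio,bridge=vmbr0
qm set "$VMID" --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket

# Import the cloud image as the VM's boot disk and attach it
qm importdisk "$VMID" "$IMG" local-lvm
qm set "$VMID" --scsihw virtio-scsi-pci --scsi0 "local-lvm:vm-$VMID-disk-0"

# Convert the VM into a template
qm template "$VMID"
```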
- Follow the packer README to create the base images.
- Follow the terraform README to create the cluster base.
- Follow the ansible README to configure the cluster.
- Follow the fluxcd README to set up gitops.
- Cluster ingress is handled by Traefik, which is a deployment built in to k3s.
- There is a cluster "gateway" of sorts. The `support` node runs the following services:
  - `postgres` (as the k3s leader datastore)
  - `pgadmin` (for administering said postgres)
  - `nginx`
    - TCP proxy load balancer to each cluster node's `traefik` instance
    - Reverse proxy for `pgadmin`
- I have two internal DNS records that point to this node:
  - `gateway.main.k8s.2811rrt.net` - split horizon between Tailnet and my home network
  - `*.main.k8s.2811rrt.net` - CNAME to `gateway.main.k8s.2811rrt.net`
- Note to self: if you rebuild the cluster, you need to update the Tailnet leg of the split-horizon record, as the rebuild changes the instance's Tailnet IP address.
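The TCP-proxy half of that nginx setup looks roughly like the following. This is a sketch, not the actual config; the node IPs and the listen port are assumptions:

```nginx
# Stream (L4) proxying in nginx.conf -- hypothetical node IPs
stream {
    upstream traefik {
        # One entry per cluster node's traefik instance
        server 10.0.0.11:443;
        server 10.0.0.12:443;
        server 10.0.0.13:443;
    }

    server {
        listen 443;
        proxy_pass traefik;
    }
}
```

Because this is a `stream` block rather than an `http` block, nginx forwards raw TCP and leaves TLS termination to Traefik on each node.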
- Jobs are currently deployed via a Terraform module, but this is unreliable when a full redeploy is needed.
- A series of targeted applies is needed to get all jobs deployed successfully; I still need to run that down.
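The targeted-apply workaround looks something like this. The module addresses are hypothetical; the real ones come from `terraform state list`:

```sh
# Apply dependencies first, then converge the rest with a full apply.
# Module names below are placeholders, not the actual addresses.
terraform apply -target=module.namespaces
terraform apply -target=module.jobs
terraform apply
```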