Prototype for an operator-like application that does not depend on Kubernetes CRDs or the Kubernetes API, breaking out the SyncSets functionality from OpenShift Hive.
- Simulate Kubernetes operator-style applications, but without actually depending on the Kubernetes apiserver, CRDs, or etcd.
- Generate the REST API server and Go client with go-swagger from Go types ("code first").
- REST API backed by Postgres JSONB document storage.
- Types will support some common metadata such as labels, except that here they will actually be indexed.
- Watch will be provided by RabbitMQ or some AMQP bus for pub/sub.
- Hoping for very minimal bus usage, with no application logic in the bus itself.
- Rest API publishes events to queues for each API type.
- Establishing a watch means subscribing to a queue for an API type; if you're listening, you will receive messages on API events.
- Controllers to be horizontally scalable.
- Leverage a second type of queue where only one listener can pick up each event.
- Can run multiple pods; workloads will be distributed to only one of them as they pull work from the queue. (Both messaging patterns are sketched below.)
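A minimal sketch of these two messaging patterns, assuming RabbitMQ and the rabbitmq/amqp091-go client. The broker URL and the `clusters.events` / `clusters.work` names are illustrative assumptions, not names defined by this project:

```go
package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Assumed local broker with default credentials.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Watch pattern: one fanout exchange per API type. Each watcher binds its
	// own exclusive, server-named queue, so every watcher sees every event.
	if err := ch.ExchangeDeclare("clusters.events", "fanout", true, false, false, false, nil); err != nil {
		log.Fatal(err)
	}
	watchQ, err := ch.QueueDeclare("", false, true, true, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	if err := ch.QueueBind(watchQ.Name, "", "clusters.events", false, nil); err != nil {
		log.Fatal(err)
	}
	watchMsgs, err := ch.Consume(watchQ.Name, "", true, true, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		for m := range watchMsgs {
			log.Printf("watch event: %s", m.Body)
		}
	}()

	// Work-queue pattern: all controller replicas consume from one shared,
	// durable queue, so each event is handed to exactly one replica.
	workQ, err := ch.QueueDeclare("clusters.work", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	workMsgs, err := ch.Consume(workQ.Name, "", false, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	for m := range workMsgs {
		log.Printf("reconciling: %s", m.Body)
		_ = m.Ack(false)
	}
}
```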
Used to generate the API server, Go client code, and documentation.
go get -u github.com/go-swagger/go-swagger/cmd/swagger
Presently using "code first" with go-swagger where types/handlers have appropriate godoc annotations to generate swagger.yaml, from which virtually everything under client/ and restapi/ is generated.
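As a rough illustration of that annotation style (the field names mirror the cluster payload used later in this README; the actual types in this repo may differ):

```go
// Cluster represents a cluster whose resources we keep in sync.
//
// swagger:model cluster
type Cluster struct {
	// name of the cluster
	// required: true
	Name string `json:"name"`

	// namespace associated with the cluster
	Namespace string `json:"namespace"`

	// kubeconfig used to connect to the cluster
	Kubeconfig string `json:"kubeconfig"`
}
```

Running swagger generate spec scans annotations like these to produce swagger.yaml.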
Just an easier option than curl. Used in some commands in this README.
This project presently aims to use RabbitMQ for pub/sub consumers who wish to watch API events.
This uses the RabbitMQ Operator, along with some of its resources that I had to patch to work on OpenShift.
kubectl apply -f manifests/rabbitmq-operator/
kubectl apply -f manifests/namespace.yaml
kubectl apply -f manifests/rabbitmq-cluster.yaml
oc adm policy add-scc-to-user rabbitmq-cluster -z rabbitmq-server
Once running, you can check the cluster's status with:
oc rsh rabbitmq-server-0 rabbitmqctl cluster_status
Several options here:
- On OpenShift: Use the OpenShift Template: create a new project in the console, select to add a database, and choose postgresql.
  * Crunchy PostgreSQL operator appears much too complicated and possibly broken.
  * TODO: Try EnterpriseDB PostgreSQL operator.
- On plain Kubernetes/Kind: Run a postgresql pod:
kubectl create -f manifests/postgresql/postgresql.yaml
  * This won't work on OpenShift as the official Docker images assume root.
  * TODO: Update the manifest to use the OpenShift image and have one manifest that works on both.
- Amazon RDS: Choose free tier and public access.
Note your password and ensure you've created a database called syncsets.
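If the database doesn't exist yet, one way to create it (assuming the local port forward described below is in place, and using the same example credentials as the rest of this README):

psql "user=postgres password=helloworld dbname=postgres sslmode=disable host=localhost" -c "CREATE DATABASE syncsets"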
Establish a local port forward if running on OpenShift or Kube. The POSTGRES_PARAMS env var will be used both for the goose schema migrations and the API server itself.
You should be able to connect to your local database with:
kubectl port-forward svc/postgresql 5432:5432
export POSTGRES_PARAMS="user=postgres password=helloworld dbname=syncsets sslmode=disable host=localhost"
psql "$POSTGRES_PARAMS"
Install goose for managing database schema migrations, and create (or update) the schema:
go get -u github.com/pressly/goose/cmd/goose
goose postgres "$POSTGRES_PARAMS" up
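For a sense of what a goose migration for the JSONB document storage could look like, here is a hypothetical sketch (the real schema lives in this repo's migrations; the GIN index reflects the label-indexing goal mentioned at the top, and all names here are assumptions):

```sql
-- +goose Up
CREATE TABLE clusters (
    id SERIAL PRIMARY KEY,
    data JSONB NOT NULL
);
-- Index labels inside the JSONB document so label selectors can use an index.
CREATE INDEX clusters_labels_idx ON clusters USING GIN ((data -> 'labels'));

-- +goose Down
DROP TABLE clusters;
```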
Ensure you have PostgreSQL properly configured and reachable from localhost, per the above.
Compile your current code, generate server/client, and run the API locally:
make run
Push some data with httpie:
echo '{"name": "cluster1", "namespace": "foo", "kubeconfig": "foobar"}' | http POST localhost:7070/v1/clusters
Your database should now have an entry in the clusters table.
psql "$POSTGRES_PARAMS" -c "select * from clusters"
IMG="quay.io/dgoodwin/syncsets:latest" make docker-push deploy