# Civo CSI Driver

This controller is installed into Civo K3s client clusters and handles mounting Civo Volumes onto the correct nodes and promoting the storage into the cluster as a Persistent Volume.

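Once the driver is installed, storage is consumed through an ordinary `PersistentVolumeClaim`. Here is a minimal sketch; the `civo-volume` StorageClass name is an assumption, so check `kubectl get storageclass` on your cluster for the real one:

```sh
# Create a PVC that the driver will dynamically provision as a Civo Volume
# (the StorageClass name below is an assumption; verify it with `kubectl get storageclass`)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: civo-volume
  resources:
    requests:
      storage: 10Gi
EOF
```
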
## Background reading

* [Official Kubernetes CSI announcement blog](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/)
* [Official CSI documentation](https://kubernetes-csi.github.io/docs/)
* [Good list of current CSI drivers to see how others have done things](https://kubernetes-csi.github.io/docs/drivers.html)
* [Presentation on how CSI is architected](https://www.usenix.org/sites/default/files/conference/protected-files/vault20_slides_seidman.pdf)
* [Example Hostpath CSI driver](https://github.com/kubernetes-csi/csi-driver-host-path/)
* [Notes on the Hostpath CSI driver](https://www.velotio.com/engineering-blog/kubernetes-csi-in-action-explained-with-features-and-use-cases)

## Key takeaways

* We need to enable [dynamic provisioning](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/#dynamic-provisioning)
* We're going to build a single binary and use the sidecars to register the appropriate parts of it in the appropriate place (one part runs on the control plane as a Deployment, the other part runs on each node as a DaemonSet)

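For reference, the dynamic provisioning side is driven by a StorageClass that names this driver as its provisioner. A sketch, assuming the driver registers itself as `csi.civo.com` (confirm the real name with `kubectl get csidriver` on a cluster running the driver):

```sh
# Sketch of a StorageClass wired to the driver for dynamic provisioning
# (the provisioner name is an assumption; confirm it with `kubectl get csidriver`)
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: civo-volume-dev
provisioner: csi.civo.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
EOF
```
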
## Known issues

No currently known issues.

## Getting started

Normally for our Civo Kubernetes integrations we'd recommend the [CivoStack getting started guide](https://github.com/civo/civo-stack/blob/master/GETTING_STARTED.md), but this is a different situation (the driver is installed on the client cluster, not the supercluster), so below are some similar steps to get you started:

### How do I run the driver in development

Unlike Operators, you can't as easily run CSI drivers locally while simply connected to a cluster (there is a way with `socat` and forwarding Unix sockets, but we haven't experimented with that).

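If you do want to try the socket-forwarding route, an untested sketch might look like this (the socket paths, port, and addresses are all assumptions):

```sh
# Untested sketch: let a node reach a driver running on your machine.
#
# Locally, where the driver listens on /tmp/csi.sock, relay TCP traffic into the socket:
socat TCP-LISTEN:10000,reuseaddr,fork UNIX-CONNECT:/tmp/csi.sock

# On the node, recreate a Unix socket that forwards back to your machine
# (the kubelet plugin path is an assumption):
socat UNIX-LISTEN:/var/lib/kubelet/plugins/csi.civo.com/csi.sock,fork TCP:<your-machine>:10000
```
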
So the way we test our work is:

#### A. Run the CSI Sanity test suite

This is already integrated and is a simple `go test` away 🥳

This will run the full Kubernetes Storage SIG's suite of tests against the endpoints you're supposed to have implemented.

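For example, from the repository root (the exact flags are just one reasonable invocation):

```sh
# Run the unit tests plus the integrated csi-sanity suite
go test -v ./...
```
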
#### B. Install into a cluster

The steps are as follows (a consolidated shell sketch follows the list):

1. Create an environment variable called `IMAGE_NAME` with a random or recognisable name (`IMAGE_NAME=$(uuidgen)` works well)
2. Build the Docker image with `docker build -t ttl.sh/${IMAGE_NAME}:2h .`
3. Push the Docker image to ttl.sh (a short-lived Docker image repository, useful for dev): `docker push ttl.sh/${IMAGE_NAME}:2h`
4. Recursively copy the `deploy/kubernetes` folder to `deploy/kubernetes-dev` and replace all occurrences of `civo-csi:latest` in there with `YOUR_IMAGE_NAME:2h` (ENV variable interpolation won't work here); this folder is automatically in `.gitignore`
5. In a test cluster (a single-node Civo K3s cluster will work) you'll need to create a `Secret` within the `civo-system` namespace called `api-access` containing the keys `api_key` and `api_url`, set to your Civo API key and either `https://api.civo.com` or a xip.io/ngrok URL pointing to your local development environment (depending on where your cluster is running)
6. Deploy the required Kubernetes resources to the cluster with `kubectl apply -f deploy/kubernetes-dev`

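For convenience, here is a consolidated, hedged sketch of those steps (the `sed` invocation, the namespace creation, and the `CIVO_API_KEY` variable are assumptions; adapt them to your shell, OS, and cluster):

```sh
# 1-3. Build and push a short-lived dev image to ttl.sh
export IMAGE_NAME=$(uuidgen)
docker build -t ttl.sh/${IMAGE_NAME}:2h .
docker push ttl.sh/${IMAGE_NAME}:2h

# 4. Create a dev copy of the manifests and point them at the dev image
#    (GNU sed shown; on macOS use `sed -i ''`)
cp -r deploy/kubernetes deploy/kubernetes-dev
sed -i "s|civo-csi:latest|ttl.sh/${IMAGE_NAME}:2h|g" deploy/kubernetes-dev/*.yaml

# 5. Create the API access secret in the test cluster
#    (create the namespace first if it doesn't already exist)
kubectl create namespace civo-system --dry-run=client -o yaml | kubectl apply -f -
kubectl -n civo-system create secret generic api-access \
  --from-literal=api_key="${CIVO_API_KEY}" \
  --from-literal=api_url="https://api.civo.com"

# 6. Deploy the driver
kubectl apply -f deploy/kubernetes-dev
```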