
Commit 5f53c83

Create project scaffolding and install CSI Sanity

1 parent 6a3be40 commit 5f53c83

10 files changed: +341 -1 lines

.gitignore

+1
@@ -0,0 +1 @@
+deploy/kubernetes-dev/

Dockerfile

+31
@@ -0,0 +1,31 @@
+################################
+# STEP 1 build executable binary
+################################
+FROM golang:alpine AS builder
+
+# Install git - required for fetching the dependencies
+RUN apk add --update --no-cache ca-certificates git
+WORKDIR /app
+COPY . .
+
+# Fetch dependencies
+RUN go mod download
+RUN go mod verify
+
+RUN find .
+
+# Build the binary.
+RUN GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /app/civo-csi
+
+
+############################
+# STEP 2 build a small image
+############################
+FROM scratch
+
+# Copy our static executable
+WORKDIR /app
+COPY --from=builder /app/civo-csi /app/civo-csi
+
+# Run the civo-csi binary
+ENTRYPOINT ["/app/civo-csi"]
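One thing to watch with the `scratch` stage: the comment above calls the binary static, but anything copied into `scratch` genuinely must be statically linked, since no libc exists in that image. A hedged variant of the build line that forces this (assuming the driver has no cgo dependencies — not something this commit confirms):

```dockerfile
# CGO_ENABLED=0 forces a fully static binary, which is required when the
# final stage is `scratch` (there is no libc to dynamically link against).
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /app/civo-csi
```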

Makefile

+15
@@ -0,0 +1,15 @@
+VERSION?="0.0.1"
+
+generate:
+	go generate ./...
+
+protobuf:
+	bash scripts/protobufcheck.sh
+
+fmtcheck:
+	@sh -c "'$(CURDIR)/scripts/gofmtcheck.sh'"
+
+docker: fmtcheck
+	docker build .
+
+.PHONY: fmtcheck generate protobuf docker

README.md

+49-1
@@ -1 +1,49 @@
-# civo-csi
+# Civo CSI Driver
+
+This controller is installed into Civo K3s client clusters and handles the mounting of Civo Volumes on to the
+correct nodes and promoting the storage into the cluster as a Persistent Volume.
+
+## Background reading
+
+* [Official Kubernetes CSI announcement blog](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/)
+* [Official CSI documentation](https://kubernetes-csi.github.io/docs/)
+* [Good list of current CSI drivers to see how others have done things](https://kubernetes-csi.github.io/docs/drivers.html)
+* [Presentation on how CSI is architected](https://www.usenix.org/sites/default/files/conference/protected-files/vault20_slides_seidman.pdf)
+* [Example Hostpath CSI driver](https://github.com/kubernetes-csi/csi-driver-host-path/)
+* [Notes on the Hostpath CSI driver](https://www.velotio.com/engineering-blog/kubernetes-csi-in-action-explained-with-features-and-use-cases)
+
+## Key takeaways
+
+* We need to enable [dynamic provisioning](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/#dynamic-provisioning)
+* We're going to build a single binary and use the sidecars to register the appropriate parts of it in the appropriate places (one part runs on the control plane as a Deployment, the other runs on each node as a DaemonSet)
+
+## Known issues
+
+No currently known issues.
+
+## Getting started
+
+Normally for our Civo Kubernetes integrations we'd recommend the [getting started document for CivoStack](https://github.com/civo/civo-stack/blob/master/GETTING_STARTED.md) guide, but this is a different situation (installed on the client cluster, not the supercluster), so below are some similar steps to get you started.
+
+### How do I run the driver in development
+
+Unlike Operators, you can't as easily run CSI drivers locally, connected into a cluster (there is a way with `socat` and forwarding Unix sockets, but we haven't experimented with that).
+
+So the way we test our work is:
+
+#### A. Run the CSI Sanity test suite
+
+This is already integrated and is a simple `go test` away 🥳
+
+This will run the Kubernetes Storage SIG's full suite of tests against the endpoints you're supposed to have implemented.
+
+#### B. Install into a cluster
+
+The steps are:
+
+1. Create an environment variable called `IMAGE_NAME` with a random or recognisable name (`IMAGE_NAME=$(uuidgen)` works well)
+2. Build the Docker image with `docker build -t ttl.sh/${IMAGE_NAME}:2h .`
+3. Push the Docker image to ttl.sh (a short-lived Docker image repository, useful for development): `docker push ttl.sh/${IMAGE_NAME}:2h`
+4. Recursively copy the `deploy/kubernetes` folder to `deploy/kubernetes-dev` and replace all occurrences of `civo-csi:latest` in there with `YOUR_IMAGE_NAME:2h` (environment variable interpolation won't work here); this folder is already in `.gitignore`
+5. In a test cluster (a single-node Civo K3s cluster will work) you'll need to create a `Secret` within the `civo-system` namespace called `api-access` containing the keys `api_key` and `api_url`, set to your Civo API key and either `https://api.civo.com` or a xip.io/ngrok URL pointing to your local development environment (depending on where your cluster is running)
+6. Deploy the required Kubernetes resources to the cluster with `kubectl apply -f deploy/kubernetes-dev`

go.mod

+14
@@ -0,0 +1,14 @@
+module civo.com/csi
+
+go 1.15
+
+require (
+	github.com/container-storage-interface/spec v1.3.0 // indirect
+	github.com/google/uuid v1.2.0 // indirect
+	github.com/kubernetes-csi/csi-test v2.2.0+incompatible
+	github.com/kubernetes-csi/csi-test/v4 v4.0.2
+	github.com/onsi/ginkgo v1.15.0 // indirect
+	github.com/onsi/gomega v1.10.5 // indirect
+	google.golang.org/grpc v1.35.0 // indirect
+	gopkg.in/yaml.v2 v2.4.0 // indirect
+)
