Description
I already have a working installation of Kube Stack Prometheus (includes Prometheus, Grafana, Alertmanager, lots of exporters as one bundle). I recently migrated my UniFi Network Application (controller) to Kubernetes and wanted a nice dashboard and found this project. Within a few hours I was able to convert the Docker instructions to Kubernetes manifest files.
My use case is just Unpoller running with the Prometheus plugin enabled; everything else is disabled. The only thing I had to add was a PodMonitor, which makes the Unpoller exporter discoverable to Prometheus. Hopefully the steps below help someone.
The first item is a Secret which holds the UniFi username and password credentials (the values here must be base64 encoded). These will be exposed as environment variables to the Unpoller container.
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: unpoller
    app.kubernetes.io/instance: unpoller
    app.kubernetes.io/name: unpoller
  name: unpoller-secret
type: Opaque
data:
  unifi-user: cmVkY2FjdGVk
  unifi-pass: YWxzby1yZWRjYWN0ZWQ=
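If you are unsure how to produce the base64 values, either of the following works (the credentials shown are placeholders, substitute your own):

$ echo -n 'my-unifi-user' | base64
$ echo -n 'my-unifi-password' | base64
$ kubectl create secret generic unpoller-secret \
    --from-literal=unifi-user='my-unifi-user' \
    --from-literal=unifi-pass='my-unifi-password' \
    --dry-run=client -o yaml

The second form prints a ready-made Secret manifest with the encoding already done.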
Next is a ConfigMap which holds the contents of the up.conf file and will be mounted into the container at the /etc/unifi-poller directory. The url is the internal Kubernetes DNS name of my UniFi Controller. You will need to adjust it to whatever you named yours.
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: unpoller
    app.kubernetes.io/instance: unpoller
    app.kubernetes.io/name: unpoller
  name: unpoller-config-file
data:
  up.conf: |
    # See GitHub example page for more details.
    # https://github.com/unpoller/unpoller/blob/master/examples/up.conf.example
    [poller]
    debug = false
    quiet = false
    plugins = []
    [prometheus]
    disable = false
    http_listen = "0.0.0.0:9130"
    [influxdb]
    disable = true
    # Loki is disabled with no URL
    [loki]
    url = ""
    [datadog]
    disable = true
    [webserver]
    enable = false
    port = 37288
    html_path = "/usr/lib/unifi-poller/web"
    [unifi]
    dynamic = false
    [unifi.defaults]
    url = "https://unifi-controller:8443"
    sites = ["all"]
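If you are not sure what DNS name to plug into url, listing the Services in the controller's namespace shows the name to use. Since Unpoller runs in the same namespace, the short name resolves on its own; from another namespace you would typically need the fully qualified form (for example unifi-controller.unifi.svc.cluster.local):

$ kubectl get svc -n unifi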
Next is a Deployment, which ties everything together. It instructs where to mount the Secret and ConfigMap, defines the container image and version, which port to expose, how many copies of Unpoller to run, etc. Since I'm only using the Unpoller Prometheus plugin, I defined only that port and named it metrics.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: unpoller
    app.kubernetes.io/instance: unpoller
    app.kubernetes.io/name: unpoller
  name: unpoller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unpoller
      app.kubernetes.io/instance: unpoller
      app.kubernetes.io/name: unpoller
  template:
    metadata:
      labels:
        app: unpoller
        app.kubernetes.io/instance: unpoller
        app.kubernetes.io/name: unpoller
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - unifi-controller
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: unpoller
          image: golift/unifi-poller:2.1.3
          ports:
            - name: metrics
              containerPort: 9130
              protocol: TCP
          env:
            - name: UP_UNIFI_DEFAULT_USER
              valueFrom:
                secretKeyRef:
                  name: unpoller-secret
                  key: unifi-user
            - name: UP_UNIFI_DEFAULT_PASS
              valueFrom:
                secretKeyRef:
                  name: unpoller-secret
                  key: unifi-pass
          volumeMounts:
            - mountPath: /etc/unifi-poller
              name: unpoller-config
      volumes:
        - configMap:
            name: unpoller-config-file
          name: unpoller-config
I also added a podAffinity section above instructing Kubernetes to run this container on the same node (hostname) where the UniFi Controller is running; wherever the controller goes, Unpoller will follow. If the controller is not running, Unpoller will not be scheduled. The affinity looks for a label app with the value unifi-controller; if you named yours something else, adjust as needed. Keeping the two applications next to each other keeps all the polling chatter local and reduces network traffic.
Lastly is a PodMonitor, which is what Prometheus will look for. While I placed the Unpoller deployment in the unifi namespace alongside the UniFi Controller, I put this PodMonitor in the monitoring namespace where Prometheus is located.
My Prometheus installation automatically discovers any pod or service monitor carrying the label release: kube-stack-prometheus, so no further configuration is needed. This is not universal: a different Prometheus package is likely looking for a different label, so adjust as needed if Unpoller metrics do not appear in Prometheus shortly after applying. This PodMonitor scrapes data from the metrics port named in the deployment, using the URI path /metrics.
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    app: unpoller
    app.kubernetes.io/instance: unpoller
    app.kubernetes.io/name: unpoller
    release: kube-stack-prometheus
  name: unpoller-prometheus-podmonitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: unpoller
  namespaceSelector:
    matchNames:
      - unifi
  podMetricsEndpoints:
    - port: metrics
      path: /metrics
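To confirm Prometheus actually discovered the PodMonitor, I port-forward to the Prometheus service and look for the target on the Status -> Targets page; the exact service name depends on your Helm release, so list the services first:

$ kubectl get svc -n monitoring
$ kubectl port-forward -n monitoring svc/<your-prometheus-service> 9090:9090

Then browse to http://localhost:9090/targets and look for the unpoller endpoint.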
FYI - the number and content of the labels were auto-generated by Kustomize. If you were writing labels by hand, you would probably pick more meaningful values, but these work fine.
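For reference, a minimal kustomization.yaml along these lines is what stamps those labels onto every resource (the resource file names are assumptions about how you split the manifests):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: unpoller
  app.kubernetes.io/instance: unpoller
  app.kubernetes.io/name: unpoller
resources:
  - secret.yaml
  - configmap.yaml
  - deployment.yaml
  - podmonitor.yaml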
Ideally this would also have a service account defined and associated RBAC roles limiting what that service account could do. I haven't gotten around to that yet.
Unpoller and the UniFi Controller running side by side in the same namespace:
$ kubectl get pods -n unifi -o wide
NAME READY STATUS RESTARTS AGE IP NODE
unifi-controller-76758cfcf-mlfct 1/1 Running 0 8d 10.42.0.252 k3s01
unpoller-5dbb8857f5-56zks 1/1 Running 0 16h 10.42.0.36 k3s01
And the PodMonitor running:
$ kubectl get podmonitor -n monitoring
NAME AGE
unpoller-prometheus-podmonitor 16h
And if you want to test whether metrics are being gathered by Unpoller and made available, it is easy to check: just point curl at the IP address assigned to the Unpoller pod:
$ curl http://10.42.0.36:9130/metrics
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.7439e-05
go_gc_duration_seconds{quantile="0.25"} 6.4822e-05
go_gc_duration_seconds{quantile="0.5"} 7.5783e-05
go_gc_duration_seconds{quantile="0.75"} 0.00010628
go_gc_duration_seconds{quantile="1"} 0.000329951
go_gc_duration_seconds_sum 0.542228114
go_gc_duration_seconds_count 5420
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 12
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.16.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
(Hundreds of lines removed from output for brevity).
All of the documented dashboards work just fine in Grafana.
If someone has defined reasonable Alertmanager alerts based on these metrics, that would be handy.
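As a starting point, here is a rough sketch of a PrometheusRule that fires when the Unpoller endpoint stops being scraped. The release label matches my Prometheus discovery rule, and the job matcher is an assumption (for a PodMonitor the job label is normally the PodMonitor's namespace/name, but check your Targets page to confirm); alerts on specific Unpoller metrics would be more useful.

---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    release: kube-stack-prometheus
  name: unpoller-alerts
  namespace: monitoring
spec:
  groups:
    - name: unpoller
      rules:
        - alert: UnpollerScrapeDown
          # 'up' is the standard per-target scrape health metric
          expr: up{job="unifi/unpoller-prometheus-podmonitor"} == 0
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: Unpoller metrics endpoint has not been scraped for 5 minutes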