Self-hosted log monitoring for Firewalla using Grafana and Loki. Collects Zeek network logs (DNS, connections) and Firewalla ACL alarm logs, then visualizes them through pre-built dashboards.
ASCII fallback:

```
Firewalla ──syslog/HTTP──▶ Loki (3100) ─────────────────────▶ Grafana (3000)
                           log store                          5 dashboards
                           (30-day TTL)

                 blackbox_exporter (9115)
                     ▲            ▲
         ICMP ping ──┘            └── HTTP probes
         (devices)                    (services)
                     │
Prometheus (9090) ──────────────────────────────────────────▶ Grafana (3000)
metrics store            ▲
(30-day TTL)             │
                  node_exporter (9100)
                  ┌───────┴───────┐
                 pve             pve2
             (bare-metal)    (bare-metal)
             CPU/mem/disk    CPU/mem/disk
             net/fs          net/fs/ZFS
```

Office Display data paths:

```
Firewalla Zeek/ACL ──▶ Loki ──────────────────────────┐
                                                      ├──▶ Grafana ──▶ Office Display
Prometheus ──▶ blackbox (ICMP/HTTP) ──────────────────┤
           ──▶ node_exporter (pve) ───────────────────┤
           ──▶ node_exporter (pve2) ──────────────────┘
```
| Dashboard | Description |
|---|---|
| Network Overview | Pipeline health stats, log volume over time, top talkers |
| DNS & Security | DNS query analysis, NXDOMAIN anomaly detection, blocked connections |
| Traffic & Devices | Per-device connection breakdown, protocol mix, bandwidth estimation |
| Infrastructure Health | Device reachability (ICMP), service health (HTTP), and latency trends |
| Office Display | Kiosk-optimized wall display combining Prometheus metrics and Loki logs |
The DNS & Security and Traffic & Devices dashboards include a Device IP filter for drilling into individual hosts.
- Docker and Docker Compose
- Firewalla configured to forward logs to your Loki instance
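When checking the Loki side independently of Firewalla, it helps to know the shape of the payload Loki's push API (`POST /loki/api/v1/push`) accepts. This sketch builds such a payload in Python; the `log_source` value mirrors the labels this stack uses, and actually sending it would require a reachable Loki instance:

```python
import json
import time

def loki_push_payload(log_source: str, line: str) -> dict:
    """Build the JSON body for Loki's push API (POST /loki/api/v1/push):
    a list of streams, each with a label set and [timestamp_ns, line] pairs."""
    return {
        "streams": [
            {
                "stream": {"log_source": log_source},
                "values": [[str(time.time_ns()), line]],
            }
        ]
    }

payload = loki_push_payload("zeek_dns", "query=example.com qtype=A")
print(json.dumps(payload, indent=2))
```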
```bash
git clone https://github.com/PitziLabs/firewalla-grafana-stack.git
cd firewalla-grafana-stack
cp .env.example .env

# Edit .env and set a real password
nano .env

docker compose up -d
```

Grafana will be available at `http://<host>:3000` (login: `admin` / your password).
| Variable | Default | Description |
|---|---|---|
| `GRAFANA_ADMIN_PASSWORD` | `changeme` | Grafana admin password |
`loki/loki-config.yml` holds the key settings:

- Retention: 30 days (`720h`)
- Storage: local filesystem with TSDB index
- Compaction: every 10 minutes, with retention cleanup
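The relevant portion of `loki/loki-config.yml` looks roughly like the following sketch (the working directory path is illustrative, not a copy of the shipped file):

```yaml
limits_config:
  retention_period: 720h          # 30-day retention

compactor:
  working_directory: /loki/compactor   # path is illustrative
  compaction_interval: 10m             # compaction every 10 minutes
  retention_enabled: true              # apply retention cleanup during compaction
```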
Firewalla logs arrive with a `log_source` label:

| Label | Source |
|---|---|
| `zeek_dns` | Zeek DNS query logs |
| `zeek_conn` | Zeek connection logs |
| `firewalla_acl` | Firewalla ACL block/alarm logs |
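These labels can be queried directly in Grafana's Explore view, for example (the `json` stage and the field name assume the logs arrive as JSON in Zeek's standard schema, where `id.orig_h` is flattened to `id_orig_h` by Loki's parser):

```logql
# All ACL block/alarm events
{log_source="firewalla_acl"}

# DNS logs from a single device (IP is a placeholder)
{log_source="zeek_dns"} | json | id_orig_h="192.168.1.50"
```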
node_exporter collects system metrics from your bare-metal Proxmox hosts (CPU, memory, disk I/O, filesystem usage, network throughput, and ZFS ARC stats). It runs as a systemd service on each host, not inside Docker, because it needs direct access to `/proc`, `/sys`, and the ZFS kernel module.
```bash
# Deploy to pve (node 1)
scp scripts/deploy-node-exporter.sh root@<pve-ip>:/tmp/
ssh root@<pve-ip> 'bash /tmp/deploy-node-exporter.sh'

# Deploy to pve2 (node 2)
scp scripts/deploy-node-exporter.sh root@<pve2-ip>:/tmp/
ssh root@<pve2-ip> 'bash /tmp/deploy-node-exporter.sh'
```

The script is idempotent and safe to re-run. It handles download, user creation, the systemd unit, and verification.
Verify each exporter:

```bash
curl http://<host-ip>:9100/metrics | head
```

Update the node job targets in `prometheus/prometheus.yml` with the real host IPs (marked with `# ---- UPDATE THESE IPs ----`), then reload Prometheus:

```bash
docker compose exec prometheus kill -HUP 1
```

ZFS note: ZFS metrics (`node_zfs_*`) are collected automatically when the ZFS kernel module is present. The ZFS ARC Hit Rate panel in the Infrastructure Health dashboard will show "No data" on hosts without ZFS pools; this is expected for pve, since only pve2 has ZFS.
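For orientation, the node job referenced above might look like this sketch (the IPs are placeholders; substitute your actual hosts):

```yaml
scrape_configs:
  - job_name: node
    static_configs:
      # ---- UPDATE THESE IPs ----
      - targets:
          - 192.168.139.8:9100   # pve
          - 192.168.139.7:9100   # pve2
```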
Home Assistant exports ~828 entity metrics via its built-in Prometheus integration. Prometheus scrapes these every 60 seconds.
- Enable the Prometheus integration in Home Assistant: add `prometheus:` to your `configuration.yaml` and restart HA
- Create a long-lived access token in HA: Profile → Long-Lived Access Tokens → Create Token
- Save the token to `prometheus/ha_token`:

  ```bash
  echo "YOUR_TOKEN" > prometheus/ha_token
  ```

- Restart Prometheus to pick up the new scrape target:

  ```bash
  docker compose restart prometheus
  ```
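Put together, the Home Assistant scrape job in `prometheus/prometheus.yml` then looks roughly like this sketch (the host/port and in-container token path are assumptions; check the shipped file):

```yaml
scrape_configs:
  - job_name: homeassistant
    metrics_path: /api/prometheus              # HA's Prometheus endpoint
    scheme: http
    scrape_interval: 60s
    authorization:
      type: Bearer
      credentials_file: /etc/prometheus/ha_token   # mounted from prometheus/ha_token
    static_configs:
      - targets: ['192.168.1.13:8123']         # placeholder HA host
```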
The office display dashboard shows four smart home panels sourced from HA metrics: lights currently on, Sonos speaker reachability, battery levels, and printer toner levels.
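The queries behind those panels are plain PromQL over the HA metrics. For instance, a lights-on count could look like the following (the metric name follows the naming pattern of HA's Prometheus integration but is an assumption; adjust it to what your instance actually exports):

```promql
# Number of light entities currently on (assumed metric name)
count(homeassistant_light_state == 1)
```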
```
├── docker-compose.yml
├── .env.example
├── scripts/
│   └── deploy-node-exporter.sh   # Idempotent node_exporter installer for Proxmox hosts
├── loki/
│   └── loki-config.yml
├── prometheus/
│   ├── prometheus.yml            # Scrape config (ICMP + HTTP probes, HA, node_exporter targets)
│   ├── blackbox.yml              # Blackbox exporter module definitions
│   └── ha_token.example          # Template for HA long-lived access token
└── grafana/
    └── provisioning/
        ├── datasources/
        │   ├── loki.yml
        │   └── prometheus.yml
        ├── playlists/
        │   └── playlists.yml     # Provisioned playlist for office display rotation
        └── dashboards/
            ├── dashboards.yml
            ├── network-overview.json
            ├── dns-security.json
            ├── traffic-devices.json
            ├── infra-health.json
            └── office-display.json   # Kiosk-optimized wall display
```
```bash
# Start the stack
docker compose up -d

# View logs
docker compose logs -f loki
docker compose logs -f grafana
docker compose logs -f prometheus
docker compose logs -f blackbox

# Restart after config changes
docker compose restart

# Verify Loki is receiving data
curl -s http://localhost:3100/loki/api/v1/labels | python3 -m json.tool

# Verify Prometheus targets are healthy
curl -s http://localhost:9090/api/v1/targets | python3 -m json.tool

# Manually test a blackbox ICMP probe
curl "http://localhost:9115/probe?target=192.168.1.1&module=icmp"

# Manually test a blackbox HTTP probe
curl "http://localhost:9115/probe?target=http://192.168.1.13:8123&module=http_2xx"

# Stop everything
docker compose down
```

Anonymous viewer auth is enabled by default (`GF_AUTH_ANONYMOUS_ENABLED=true`, `GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer`). This allows Chromium in kiosk mode to display dashboards without a login session. This stack is LAN-only; do not enable anonymous auth on an internet-exposed Grafana instance.
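The Prometheus target check returns a JSON document; a small helper like this (a sketch, assuming the standard shape of the `/api/v1/targets` response) can reduce it to just the unhealthy targets:

```python
import json
import urllib.request

def unhealthy_targets(targets_json: dict) -> list:
    """Return scrape URLs of active targets whose last scrape was not healthy,
    given the JSON returned by Prometheus's /api/v1/targets endpoint."""
    active = targets_json.get("data", {}).get("activeTargets", [])
    return [t["scrapeUrl"] for t in active if t.get("health") != "up"]

if __name__ == "__main__":
    # Against a live stack you would fetch the real payload, e.g.:
    # data = json.load(urllib.request.urlopen("http://localhost:9090/api/v1/targets"))
    data = {"data": {"activeTargets": [
        {"scrapeUrl": "http://192.168.139.8:9100/metrics", "health": "up"},
        {"scrapeUrl": "http://192.168.139.7:9100/metrics", "health": "down"},
    ]}}
    print(unhealthy_targets(data))  # -> ['http://192.168.139.7:9100/metrics']
```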
The kiosk display hardware and OS-level autostart setup is documented in PitziLabs/homelab-infra.
```
http://<host>:3000/d/<dashboard-uid>?kiosk&refresh=30s
```
The Office Display dashboard (firewalla-office-display) is purpose-built for a wall-mounted screen. It combines Prometheus metrics (ICMP device status, HTTP service health, CPU/RAM gauges, network throughput, ping latency) and Loki log queries (DNS query volume, blocked connections) into a single 1920Γ1080 layout with no scrolling.
```
http://<host>:3000/d/firewalla-office-display?kiosk&refresh=30s
```
Append `&inactive` to hide the kiosk exit controls after 5 seconds of inactivity.
A provisioned playlist ("Office Display Rotation") cycles through all four dashboards every 60 seconds. After restarting Grafana, find the playlist ID at Dashboards β Playlists, then open:
```
http://<host>:3000/playlists/play/<playlist-id>?kiosk
```
To create it manually instead: go to Dashboards β Playlists β New Playlist, add the four dashboards, and set the interval to 60s.
```bash
chromium-browser --kiosk --app="http://<host>:3000/d/firewalla-office-display?kiosk&refresh=30s"
```

For playlist rotation, replace the URL with the playlist play URL above.
The CPU and RAM gauge panels filter by IP: `192.168.139.8.*` for pve and `192.168.139.7.*` for pve2. If these differ from your node_exporter targets, update the `instance=~` regex in `grafana/provisioning/dashboards/office-display.json` and restart Grafana.
MIT