# 🦞 OpenClaw Helm Chart


Helm chart for deploying OpenClaw on Kubernetes: an AI assistant that connects to messaging platforms and executes tasks autonomously.

Built on bjw-s app-template. For a detailed walkthrough, see the blog post.


## Architecture

OpenClaw runs as a single-instance deployment (it cannot scale horizontally):

| Component | Port | Description |
|-----------|------|-------------|
| Gateway | 18789 | Main HTTP/WebSocket interface |
| Chromium | 9222 | Headless browser for automation (CDP, optional) |

App Version: 2026.2.26


## Installation

### Prerequisites

- Kubernetes >=1.26.0-0
- Helm 3.0+
- An API key from a supported LLM provider (Anthropic, OpenAI, etc.)

### Steps

1. Add the repository:

   ```sh
   helm repo add openclaw https://serhanekicii.github.io/openclaw-helm
   helm repo update
   ```

2. Create the namespace and secret:

   ```sh
   kubectl create namespace openclaw
   kubectl create secret generic openclaw-env-secret -n openclaw \
     --from-literal=ANTHROPIC_API_KEY=sk-ant-xxx \
     --from-literal=OPENCLAW_GATEWAY_TOKEN=your-token
   ```

3. Fetch the default values:

   ```sh
   helm show values openclaw/openclaw > values.yaml
   ```

4. Reference your secret in values.yaml:

   ```yaml
   app-template:
     controllers:
       main:
         containers:
           main:
             envFrom:
               - secretRef:
                   name: openclaw-env-secret
   ```

5. Install:

   ```sh
   helm install openclaw openclaw/openclaw -n openclaw -f values.yaml
   ```

6. Pair your device:

   ```sh
   # Access the web UI
   kubectl port-forward -n openclaw svc/openclaw 18789:18789
   # Open http://localhost:18789, enter your Gateway Token, click Connect

   # Approve the pairing request
   kubectl exec -n openclaw deployment/openclaw -- node dist/index.js devices list
   kubectl exec -n openclaw deployment/openclaw -- node dist/index.js devices approve <REQUEST_ID>
   ```
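If you manage secrets declaratively (for example with GitOps plus a sealed- or external-secrets operator), the `kubectl create secret` step above can instead be expressed as a manifest. The key names match what the chart's `envFrom` expects; the values shown are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openclaw-env-secret
  namespace: openclaw
type: Opaque
stringData:
  ANTHROPIC_API_KEY: sk-ant-xxx        # placeholder
  OPENCLAW_GATEWAY_TOKEN: your-token   # placeholder
```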

### Using a Fork or Local Image

If you maintain a fork of OpenClaw or build your own image, point the chart at your container registry:

```yaml
app-template:
  controllers:
    main:
      containers:
        main:
          image:
            repository: ghcr.io/your-org/openclaw-fork
            tag: "2026.2.26"
```

For images hosted in a private registry inside your cluster:

```yaml
app-template:
  controllers:
    main:
      containers:
        main:
          image:
            repository: registry.internal/openclaw
            tag: "2026.2.26"
            pullPolicy: Always
```

### Uninstall

```sh
helm uninstall openclaw -n openclaw
kubectl delete pvc -n openclaw -l app.kubernetes.io/name=openclaw  # optional: remove data
```

## Configuration

All values are nested under `app-template:`. See values.yaml for the full reference.
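Because everything passes through to the bjw-s app-template library chart, every override carries the `app-template:` prefix. For example, a minimal values.yaml that bumps storage and enables the network policy (illustrative values):

```yaml
app-template:
  persistence:
    data:
      size: 10Gi        # default is 5Gi
  networkpolicies:
    main:
      enabled: true     # default is false
```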

### Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `app-template.chromiumVersion` | string | `"124"` | Chromium sidecar image version |
| `app-template.configMaps.config.data."openclaw.json"` | string | `"{\n // Gateway configuration\n \"gateway\": {\n \"port\": 18789,\n \"mode\": \"local\",\n // IMPORTANT: trustedProxies uses exact IP matching only\n // - CIDR notation is NOT supported - list each proxy IP individually\n // - IPv6 exact addresses may work but are untested\n // - Recommend single-stack IPv4 deployments for simplicity\n \"trustedProxies\": [\"10.0.0.1\"]\n },\n\n // Browser configuration (Chromium sidecar)\n \"browser\": {\n \"enabled\": true,\n \"defaultProfile\": \"default\",\n \"profiles\": {\n \"default\": {\n \"cdpUrl\": \"http://localhost:9222\",\n \"color\": \"#4285F4\"\n }\n }\n },\n\n // Agent configuration\n \"agents\": {\n \"defaults\": {\n \"workspace\": \"/home/node/.openclaw/workspace\",\n \"model\": {\n // Uses ANTHROPIC_API_KEY from environment\n \"primary\": \"anthropic/claude-opus-4-6\"\n },\n \"userTimezone\": \"UTC\",\n \"timeoutSeconds\": 600,\n \"maxConcurrent\": 1\n },\n \"list\": [\n {\n \"id\": \"main\",\n \"default\": true,\n \"identity\": {\n \"name\": \"OpenClaw\",\n \"emoji\": \"🦞\"\n }\n }\n ]\n },\n\n // Session management\n \"session\": {\n \"scope\": \"per-sender\",\n \"store\": \"/home/node/.openclaw/sessions\",\n \"reset\": {\n \"mode\": \"idle\",\n \"idleMinutes\": 60\n }\n },\n\n // Logging\n \"logging\": {\n \"level\": \"info\",\n \"consoleLevel\": \"info\",\n \"consoleStyle\": \"compact\",\n \"redactSensitive\": \"tools\"\n },\n\n // Tools configuration\n \"tools\": {\n \"profile\": \"full\",\n \"web\": {\n \"search\": {\n \"enabled\": false\n },\n \"fetch\": {\n \"enabled\": true\n }\n }\n }\n\n // Channel configuration can be added here:\n // \"channels\": {\n // \"telegram\": {\n // \"botToken\": \"${TELEGRAM_BOT_TOKEN}\",\n // \"enabled\": true\n // },\n // \"discord\": {\n // \"token\": \"${DISCORD_BOT_TOKEN}\"\n // },\n // \"slack\": {\n // \"botToken\": \"${SLACK_BOT_TOKEN}\",\n // \"appToken\": \"${SLACK_APP_TOKEN}\"\n // }\n // }\n}\n"` | |
| `app-template.configMaps.config.enabled` | bool | `true` | |
| `app-template.configMode` | string | `"merge"` | Config mode: `merge` preserves runtime changes, `overwrite` for strict GitOps |
| `app-template.controllers.main.containers.chromium` | object | `{"args":["--headless","--disable-gpu","--no-sandbox","--disable-dev-shm-usage","--remote-debugging-address=0.0.0.0","--remote-debugging-port=9222","--user-data-dir=/tmp/chromium"],"command":["chromium-browser"],"enabled":true,"env":{"XDG_CACHE_HOME":"/tmp"},"image":{"repository":"zenika/alpine-chrome","tag":"{{ .Values.chromiumVersion }}"},"probes":{"liveness":{"custom":true,"enabled":true,"spec":{"failureThreshold":6,"initialDelaySeconds":10,"periodSeconds":30,"tcpSocket":{"port":9222},"timeoutSeconds":5}},"readiness":{"custom":true,"enabled":true,"spec":{"initialDelaySeconds":5,"periodSeconds":10,"tcpSocket":{"port":9222}}},"startup":{"custom":true,"enabled":true,"spec":{"failureThreshold":12,"initialDelaySeconds":5,"periodSeconds":5,"tcpSocket":{"port":9222},"timeoutSeconds":5}}},"resources":{"limits":{"cpu":"1000m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true,"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000}}` | Chromium sidecar for browser automation (CDP on port 9222) |
| `app-template.controllers.main.containers.chromium.enabled` | bool | `true` | Enable/disable the Chromium browser sidecar |
| `app-template.controllers.main.containers.chromium.image.repository` | string | `"zenika/alpine-chrome"` | Chromium image repository |
| `app-template.controllers.main.containers.chromium.image.tag` | string | `"{{ .Values.chromiumVersion }}"` | Chromium image tag |
| `app-template.controllers.main.containers.main` | object | `{"args":["gateway","--bind","lan","--port","18789"],"command":["node","dist/index.js"],"env":{},"envFrom":[],"image":{"pullPolicy":"IfNotPresent","repository":"ghcr.io/openclaw/openclaw","tag":"{{ .Values.openclawVersion }}"},"probes":{"liveness":{"enabled":true,"spec":{"failureThreshold":3,"initialDelaySeconds":30,"periodSeconds":30,"tcpSocket":{"port":18789},"timeoutSeconds":5},"type":"TCP"},"readiness":{"enabled":true,"spec":{"failureThreshold":3,"initialDelaySeconds":10,"periodSeconds":10,"tcpSocket":{"port":18789},"timeoutSeconds":5},"type":"TCP"},"startup":{"enabled":true,"spec":{"failureThreshold":30,"initialDelaySeconds":5,"periodSeconds":5,"tcpSocket":{"port":18789},"timeoutSeconds":5},"type":"TCP"}},"resources":{"limits":{"cpu":"2000m","memory":"2Gi"},"requests":{"cpu":"200m","memory":"512Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true,"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000}}` | Main OpenClaw container |
| `app-template.controllers.main.containers.main.image.pullPolicy` | string | `"IfNotPresent"` | Image pull policy |
| `app-template.controllers.main.containers.main.image.repository` | string | `"ghcr.io/openclaw/openclaw"` | Container image repository |
| `app-template.controllers.main.containers.main.image.tag` | string | `"{{ .Values.openclawVersion }}"` | Container image tag |
| `app-template.controllers.main.containers.main.resources` | object | `{"limits":{"cpu":"2000m","memory":"2Gi"},"requests":{"cpu":"200m","memory":"512Mi"}}` | Resource requests and limits |
| `app-template.controllers.main.initContainers.init-config.command` | list | See values.yaml | Init-config startup script |
| `app-template.controllers.main.initContainers.init-config.env.CONFIG_MODE` | string | `"{{ .Values.configMode default "merge" }}"` | |
| `app-template.controllers.main.initContainers.init-config.image.repository` | string | `"ghcr.io/openclaw/openclaw"` | |
| `app-template.controllers.main.initContainers.init-config.image.tag` | string | `"{{ .Values.openclawVersion }}"` | |
| `app-template.controllers.main.initContainers.init-config.securityContext.allowPrivilegeEscalation` | bool | `false` | |
| `app-template.controllers.main.initContainers.init-config.securityContext.capabilities.drop[0]` | string | `"ALL"` | |
| `app-template.controllers.main.initContainers.init-config.securityContext.readOnlyRootFilesystem` | bool | `true` | |
| `app-template.controllers.main.initContainers.init-config.securityContext.runAsGroup` | int | `1000` | |
| `app-template.controllers.main.initContainers.init-config.securityContext.runAsNonRoot` | bool | `true` | |
| `app-template.controllers.main.initContainers.init-config.securityContext.runAsUser` | int | `1000` | |
| `app-template.controllers.main.initContainers.init-skills.command` | list | See values.yaml | Init-skills startup script |
| `app-template.controllers.main.initContainers.init-skills.env.HOME` | string | `"/tmp"` | |
| `app-template.controllers.main.initContainers.init-skills.env.NPM_CONFIG_CACHE` | string | `"/tmp/.npm"` | |
| `app-template.controllers.main.initContainers.init-skills.image.repository` | string | `"ghcr.io/openclaw/openclaw"` | |
| `app-template.controllers.main.initContainers.init-skills.image.tag` | string | `"{{ .Values.openclawVersion }}"` | |
| `app-template.controllers.main.initContainers.init-skills.securityContext.allowPrivilegeEscalation` | bool | `false` | |
| `app-template.controllers.main.initContainers.init-skills.securityContext.capabilities.drop[0]` | string | `"ALL"` | |
| `app-template.controllers.main.initContainers.init-skills.securityContext.readOnlyRootFilesystem` | bool | `true` | |
| `app-template.controllers.main.initContainers.init-skills.securityContext.runAsGroup` | int | `1000` | |
| `app-template.controllers.main.initContainers.init-skills.securityContext.runAsNonRoot` | bool | `true` | |
| `app-template.controllers.main.initContainers.init-skills.securityContext.runAsUser` | int | `1000` | |
| `app-template.controllers.main.replicas` | int | `1` | Number of replicas (must be 1, OpenClaw doesn't support horizontal scaling) |
| `app-template.controllers.main.strategy` | string | `"Recreate"` | Deployment strategy |
| `app-template.defaultPodOptions.securityContext` | object | `{"fsGroup":1000,"fsGroupChangePolicy":"OnRootMismatch"}` | Pod security context |
| `app-template.ingress.main.enabled` | bool | `false` | Enable ingress resource creation |
| `app-template.networkpolicies.main.controller` | string | `"main"` | |
| `app-template.networkpolicies.main.enabled` | bool | `false` | Enable network policy (default deny-all with explicit allow rules) |
| `app-template.networkpolicies.main.policyTypes[0]` | string | `"Ingress"` | |
| `app-template.networkpolicies.main.policyTypes[1]` | string | `"Egress"` | |
| `app-template.networkpolicies.main.rules.egress[0].ports[0].port` | int | `53` | |
| `app-template.networkpolicies.main.rules.egress[0].ports[0].protocol` | string | `"UDP"` | |
| `app-template.networkpolicies.main.rules.egress[0].ports[1].port` | int | `53` | |
| `app-template.networkpolicies.main.rules.egress[0].ports[1].protocol` | string | `"TCP"` | |
| `app-template.networkpolicies.main.rules.egress[0].to[0].namespaceSelector.matchLabels."kubernetes.io/metadata.name"` | string | `"kube-system"` | |
| `app-template.networkpolicies.main.rules.egress[0].to[0].podSelector.matchLabels.k8s-app` | string | `"kube-dns"` | |
| `app-template.networkpolicies.main.rules.egress[1].to[0].ipBlock.cidr` | string | `"0.0.0.0/0"` | |
| `app-template.networkpolicies.main.rules.egress[1].to[0].ipBlock.except[0]` | string | `"10.0.0.0/8"` | |
| `app-template.networkpolicies.main.rules.egress[1].to[0].ipBlock.except[1]` | string | `"172.16.0.0/12"` | |
| `app-template.networkpolicies.main.rules.egress[1].to[0].ipBlock.except[2]` | string | `"192.168.0.0/16"` | |
| `app-template.networkpolicies.main.rules.egress[1].to[0].ipBlock.except[3]` | string | `"169.254.0.0/16"` | |
| `app-template.networkpolicies.main.rules.egress[1].to[0].ipBlock.except[4]` | string | `"100.64.0.0/10"` | |
| `app-template.networkpolicies.main.rules.ingress[0].from[0].namespaceSelector.matchLabels."kubernetes.io/metadata.name"` | string | `"gateway-system"` | |
| `app-template.networkpolicies.main.rules.ingress[0].ports[0].port` | int | `18789` | |
| `app-template.networkpolicies.main.rules.ingress[0].ports[0].protocol` | string | `"TCP"` | |
| `app-template.openclawVersion` | string | `"2026.2.26"` | OpenClaw image version (used by all OpenClaw containers) |
| `app-template.persistence.config.advancedMounts.main.init-config[0].path` | string | `"/config"` | |
| `app-template.persistence.config.advancedMounts.main.init-config[0].readOnly` | bool | `true` | |
| `app-template.persistence.config.enabled` | bool | `true` | |
| `app-template.persistence.config.identifier` | string | `"config"` | |
| `app-template.persistence.config.type` | string | `"configMap"` | |
| `app-template.persistence.data.accessMode` | string | `"ReadWriteOnce"` | PVC access mode |
| `app-template.persistence.data.advancedMounts.main.init-config[0].path` | string | `"/home/node/.openclaw"` | |
| `app-template.persistence.data.advancedMounts.main.init-skills[0].path` | string | `"/home/node/.openclaw"` | |
| `app-template.persistence.data.advancedMounts.main.main[0].path` | string | `"/home/node/.openclaw"` | |
| `app-template.persistence.data.enabled` | bool | `true` | |
| `app-template.persistence.data.size` | string | `"5Gi"` | PVC storage size |
| `app-template.persistence.data.type` | string | `"persistentVolumeClaim"` | |
| `app-template.persistence.tmp.advancedMounts.main.chromium[0].path` | string | `"/tmp"` | |
| `app-template.persistence.tmp.advancedMounts.main.init-config[0].path` | string | `"/tmp"` | |
| `app-template.persistence.tmp.advancedMounts.main.init-skills[0].path` | string | `"/tmp"` | |
| `app-template.persistence.tmp.advancedMounts.main.main[0].path` | string | `"/tmp"` | |
| `app-template.persistence.tmp.enabled` | bool | `true` | |
| `app-template.persistence.tmp.type` | string | `"emptyDir"` | |
| `app-template.service.main.controller` | string | `"main"` | |
| `app-template.service.main.ipFamilies[0]` | string | `"IPv4"` | |
| `app-template.service.main.ipFamilyPolicy` | string | `"SingleStack"` | IPv4-only (see trustedProxies note in gateway config) |
| `app-template.service.main.ports.http.port` | int | `18789` | Gateway service port |

### Config Mode

The `configMode` setting controls how Helm-managed config merges with runtime changes:

| Mode | Behavior |
|------|----------|
| `merge` (default) | Helm values are deep-merged with the existing config. Runtime changes (e.g., paired devices, UI settings) are preserved. |
| `overwrite` | Helm values completely replace the existing config. Use for strict GitOps where config should match values.yaml exactly. |

```yaml
app-template:
  configMode: overwrite  # or "merge" (default)
```
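The merge mode's behavior can be illustrated with a small deep-merge sketch (illustrative only; this is not the chart's actual init-config script):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override wins on scalar conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Runtime config on the PVC (e.g., after pairing a device)
runtime = {"gateway": {"port": 18789}, "devices": {"paired": ["phone-1"]}}
# Helm-managed config from the ConfigMap
helm_values = {"gateway": {"mode": "local"}}

print(deep_merge(runtime, helm_values))
# gateway keeps port 18789 and gains mode "local"; the paired device survives
```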
#### ArgoCD with Config Merge

When using configMode: merge with ArgoCD, prevent ArgoCD from overwriting runtime config changes by ignoring the ConfigMap:

```yaml
# Application manifest
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openclaw
spec:
  ignoreDifferences:
    - group: ""
      kind: ConfigMap
      name: openclaw
      jsonPointers:
        - /data
```

This allows:

- ArgoCD to manage deployments, services, etc.
- Runtime config changes (paired devices, UI settings) to persist on the PVC
- Helm values to still merge in on pod restart

## Security

The chart follows security best practices:

- All containers run as non-root (UID 1000)
- Read-only root filesystem on all containers
- All capabilities dropped
- Privilege escalation disabled
- Network policies available for workload isolation

**Important:** OpenClaw has shell access and processes untrusted input. Use network policies and limit exposure. See the OpenClaw Security Guide for best practices.

### Network Policy

Network policies isolate OpenClaw from internal cluster services, limiting the blast radius if it is compromised:

```yaml
app-template:
  networkpolicies:
    main:
      enabled: true
```

The default policy allows:

- Ingress from the gateway-system namespace on port 18789
- Egress to kube-dns
- Egress to the public internet (private/reserved ranges are blocked)

Requires a CNI with NetworkPolicy support (e.g., Calico or Cilium).

### Allowing Internal Services

To allow OpenClaw to reach internal services (e.g., Vault, Ollama), add egress rules:

```yaml
app-template:
  networkpolicies:
    main:
      enabled: true
      rules:
        egress:
          # DNS (required)
          - to:
              - namespaceSelector:
                  matchLabels:
                    kubernetes.io/metadata.name: kube-system
                podSelector:
                  matchLabels:
                    k8s-app: kube-dns
            ports:
              - protocol: UDP
                port: 53
          # Public internet (blocks RFC1918)
          - to:
              - ipBlock:
                  cidr: 0.0.0.0/0
                  except:
                    - 10.0.0.0/8
                    - 172.16.0.0/12
                    - 192.168.0.0/16
          # Vault
          - to:
              - namespaceSelector:
                  matchLabels:
                    kubernetes.io/metadata.name: vault
            ports:
              - protocol: TCP
                port: 8200
          # Ollama
          - to:
              - namespaceSelector:
                  matchLabels:
                    kubernetes.io/metadata.name: ollama
            ports:
              - protocol: TCP
                port: 11434
```

## Browser Automation

The Chromium sidecar provides a headless browser via CDP on port 9222.
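The sidecar's image tag comes from a single top-level value, so pinning or bumping Chromium is a one-line override (assuming a matching zenika/alpine-chrome tag exists):

```yaml
app-template:
  chromiumVersion: "124"  # tag of zenika/alpine-chrome used by the sidecar
```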

To disable it:

```yaml
app-template:
  controllers:
    main:
      containers:
        chromium:
          enabled: false
```

## Skills

The init-skills container provides declarative skill management from ClawHub:

```yaml
app-template:
  controllers:
    main:
      initContainers:
        init-skills:
          command:
            - sh
            - -c
            - |
              cd /home/node/.openclaw/workspace && mkdir -p skills
              for skill in weather; do
                if ! npx -y clawhub install "$skill" --no-input; then
                  echo "WARNING: Failed to install skill: $skill"
                fi
              done
```

## Runtime Dependencies

Some features (interfaces, skills) require additional runtimes or packages that are not included in the base image. The init-skills init container handles this: install extra tooling onto the PVC at /home/node/.openclaw so it persists across pod restarts and is available at runtime.

This approach is necessary because all containers run as non-root (UID 1000) with a read-only root filesystem. Default package-manager paths (e.g., /usr/local/lib/node_modules) are not writable, so install paths must be redirected to the PVC.

### pnpm (e.g., MS Teams interface)

Interfaces like MS Teams require pnpm packages. The read-only root filesystem prevents writing to default pnpm paths (/usr/local/lib/node_modules, ~/.local/share/pnpm, etc.). The fix is to install pnpm to the PVC and redirect its directories to writable mounts.

The init-skills container already sets HOME=/tmp, so pnpm's cache, state, and config writes land on /tmp (writable emptyDir). The content-addressable store goes on the PVC so that hardlinks work (same filesystem as node_modules) and persist across restarts.

1. Install pnpm and packages in init-skills:

```yaml
app-template:
  controllers:
    main:
      initContainers:
        init-skills:
          command:
            - sh
            - -c
            - |
              PNPM_HOME=/home/node/.openclaw/pnpm
              mkdir -p "$PNPM_HOME"
              if [ ! -f "$PNPM_HOME/pnpm" ]; then
                echo "Installing pnpm..."
                curl -fsSL https://get.pnpm.io/install.sh | env PNPM_HOME="$PNPM_HOME" SHELL=/bin/sh sh -
              fi
              export PATH="$PNPM_HOME:$PATH"
              echo "Installing interface dependencies..."
              cd /home/node/.openclaw
              pnpm install <your-package> --store-dir /home/node/.openclaw/.pnpm-store
```

2. Expose pnpm to the main container:

```yaml
app-template:
  controllers:
    main:
      containers:
        main:
          env:
            PATH: /home/node/.openclaw/pnpm:/home/node/.openclaw/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
            PNPM_HOME: /home/node/.openclaw/pnpm
            PNPM_STORE_DIR: /home/node/.openclaw/.pnpm-store
```

### uv (Python package manager)

For skills that require Python:

1. Install uv in init-skills:

```yaml
app-template:
  controllers:
    main:
      initContainers:
        init-skills:
          command:
            - sh
            - -c
            - |
              mkdir -p /home/node/.openclaw/bin
              if [ ! -f /home/node/.openclaw/bin/uv ]; then
                echo "Installing uv..."
                curl -LsSf https://astral.sh/uv/install.sh | env UV_INSTALL_DIR=/home/node/.openclaw/bin sh
              fi
```

2. Add to PATH in main container:

```yaml
app-template:
  controllers:
    main:
      containers:
        main:
          env:
            PATH: /home/node/.openclaw/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```

## Automatic Rollouts on ConfigMap/Secret Changes

For automatic pod restarts when a ConfigMap or Secret changes, use Stakater Reloader or ArgoCD. See the blog post for detailed setup.

```yaml
app-template:
  defaultPodOptions:
    annotations:
      reloader.stakater.com/auto: "true"
```

## Persistence

Persistent storage is enabled by default (5Gi).

To disable it (data is lost on restart):

```yaml
app-template:
  persistence:
    data:
      enabled: false
```

## Ingress
```yaml
app-template:
  ingress:
    main:
      enabled: true
      className: your-ingress-class
      hosts:
        - host: openclaw.example.com
          paths:
            - path: /
              pathType: Prefix
              service:
                identifier: main
                port: http
      tls:
        - secretName: openclaw-tls
          hosts:
            - openclaw.example.com
```

## Internal CA Trust

For HTTPS to internal services with private CAs:

```yaml
app-template:
  persistence:
    ca-bundle:
      enabled: true
      type: configMap
      name: ca-bundle
      advancedMounts:
        main:
          main:
            - path: /etc/ssl/certs/ca-bundle.crt
              subPath: ca-bundle.crt
              readOnly: true
  controllers:
    main:
      containers:
        main:
          env:
            REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-bundle.crt
```
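REQUESTS_CA_BUNDLE covers Python-based tooling. Since the main OpenClaw process runs on Node.js, you may also want Node's own CA variable pointed at the same mount (a sketch; NODE_EXTRA_CA_CERTS is standard Node.js, but verify it fits your setup):

```yaml
app-template:
  controllers:
    main:
      containers:
        main:
          env:
            NODE_EXTRA_CA_CERTS: /etc/ssl/certs/ca-bundle.crt
```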
## Resource Limits

The default resources for the main container:

```yaml
app-template:
  controllers:
    main:
      containers:
        main:
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
            limits:
              cpu: 2000m
              memory: 2Gi
```

## Troubleshooting

### Debug Commands

```sh
# Pod status
kubectl get pods -n openclaw

# Logs
kubectl logs -n openclaw deployment/openclaw

# Port forward
kubectl port-forward -n openclaw svc/openclaw 18789:18789
```

## Development

```sh
helm lint charts/openclaw
helm dependency update charts/openclaw
helm template test charts/openclaw --debug
```

## Dependencies

| Repository | Name | Version |
|------------|------|---------|
| https://bjw-s-labs.github.io/helm-charts/ | app-template | 4.6.2 |

## License

MIT
