18 changes: 16 additions & 2 deletions self-hosting-helm.mdx
@@ -322,6 +322,10 @@ The chart will automatically run database migrations before deploying the new version

## Backup and restore

<Warning>
These steps cover PostgreSQL backups and restore. They do not automatically back up or restore uploaded files if your deployment stores them outside PostgreSQL.
</Warning>

### PostgreSQL backups with CloudNativePG

CloudNativePG supports volume snapshot backups:
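A minimal sketch of an on-demand snapshot `Backup` resource (the names `sure-db` and `sure` follow the commands elsewhere on this page; verify the exact fields against the CloudNativePG documentation):

```yaml
# Sketch only: assumes the CloudNativePG operator is installed and a
# VolumeSnapshotClass is configured; "sure-db" matches the cluster label
# used in the backup commands on this page
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: sure-db-snapshot
  namespace: sure
spec:
  method: volumeSnapshot
  cluster:
    name: sure-db
```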
@@ -352,14 +356,24 @@ kubectl exec -n sure $PRIMARY_POD -- pg_dump -U sure sure_production > backup.sql

### Restore from backup

1. Make sure app traffic is stopped or the deployment is in maintenance mode.
medium

For a more guided experience, it is helpful to provide the specific command to stop the application workloads. This ensures that no new data is written to the database during the restore process.

Example command:

```bash
kubectl scale deployment -n sure sure-web sure-worker --replicas=0
```


2. Copy the SQL backup to the primary PostgreSQL pod:

```bash
# Copy backup to pod
kubectl cp backup.sql sure/$PRIMARY_POD:/tmp/backup.sql
```

medium

The $PRIMARY_POD variable is defined in the previous "Manual backup" section. If a user jumps directly to the restore steps, this command will fail because the variable is not set in their current shell session. Including the lookup command here makes the restore process self-contained and more robust:

```bash
PRIMARY_POD=$(kubectl get pod -n sure -l cnpg.io/cluster=sure-db,role=primary -o name)
kubectl cp backup.sql sure/$PRIMARY_POD:/tmp/backup.sql
```
Comment on lines +361 to +365
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
FILE="$(fd '^self-hosting-helm\.mdx$' | head -n1)"
rg -n -C2 'PRIMARY_POD|kubectl cp backup\.sql|kubectl exec -n sure \$PRIMARY_POD' "$FILE"
```

Repository: we-promise/docs

Length of output: 536


🌐 Web query:

In Kubernetes, for the `kubectl cp` file-spec `[[namespace/]pod:]file/path`, is `pod/<name>` valid as the pod token, or must it be just `<name>`?

💡 Result:

No, `pod/<name>` is not valid as the pod token in the `kubectl cp` file-spec `[[namespace/]pod:]file/path`. The pod token must be just `<name>` (optionally prefixed by `namespace/` if specifying a non-default namespace). The official Kubernetes documentation shows examples using `<pod>:path` for the default namespace and `<namespace>/<pod>:path` for specific namespaces, with no `pod/` prefix. The source code parser in pkg/cmd/cp/cp.go splits the part before the colon (`:`) by `/` and accepts exactly 1 piece (PodName) or 2 pieces (PodNamespace/PodName). Three pieces like `pod/<name>` would result in more than 2, causing a "filespec must match the canonical format: [[namespace/]pod:]file/path" error. All examples across docs, man pages, and tutorials consistently use `<pod>` or `<namespace>/<pod>`, never `pod/<name>`. The `pod/` prefix appears only in explanatory text like "pod token", not in actual syntax.


Fix PRIMARY_POD format for kubectl cp in restore steps.

Line 348 sets PRIMARY_POD using -o name, which outputs pod/<name> format. When this variable is used in line 364 as sure/$PRIMARY_POD:/tmp/backup.sql, it expands to sure/pod/<name>, which violates kubectl cp file-spec syntax. The kubectl cp parser expects either <pod>:path or <namespace>/<pod>:path (two parts when split by /), not three parts. This causes a "filespec must match the canonical format: [[namespace/]pod:]file/path" error.

Extract only the pod name (without the pod/ prefix) using -o jsonpath='{.items[0].metadata.name}' instead of -o name.

Suggested doc patch:

```diff
-PRIMARY_POD=$(kubectl get pod -n sure -l cnpg.io/cluster=sure-db,role=primary -o name)
+PRIMARY_POD=$(kubectl get pod -n sure -l cnpg.io/cluster=sure-db,role=primary -o jsonpath='{.items[0].metadata.name}')
```
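An equivalent fix, shown here as a small runnable sketch (the pod name `pod/sure-db-1` is a made-up example, not a real pod), is to keep `-o name` and strip the `pod/` prefix with shell parameter expansion:

```shell
# Simulated output of `kubectl get pod ... -o name` (example value only)
PRIMARY_POD="pod/sure-db-1"
# Remove the leading "pod/" so the value fits kubectl cp's
# [[namespace/]pod:]path filespec
PRIMARY_POD="${PRIMARY_POD#pod/}"
echo "$PRIMARY_POD"
```

With the prefix removed, `sure/$PRIMARY_POD:/tmp/backup.sql` expands to the two-part `namespace/pod:path` form that `kubectl cp` accepts.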
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@self-hosting-helm.mdx` around lines 361 - 365, The PRIMARY_POD variable is
being set with kubectl -o name (which yields "pod/<name>") and later used in the
kubectl cp command (kubectl cp backup.sql sure/$PRIMARY_POD:/tmp/backup.sql)
causing an invalid three-part filespec; change how PRIMARY_POD is populated
(e.g., use kubectl with -o jsonpath='{.items[0].metadata.name}' or otherwise
strip the "pod/" prefix) so it contains only the pod name, then keep the kubectl
cp invocation as sure/$PRIMARY_POD:/tmp/backup.sql so the filespec matches the
required [[namespace/]pod:]path format.


3. Restore the database:

```bash
# Restore
kubectl exec -n sure $PRIMARY_POD -- psql -U sure sure_production -f /tmp/backup.sql
```

medium

Restoring a SQL dump into an existing database that already contains schema and data can lead to conflicts (e.g., "relation already exists" errors). It is recommended to mention that the target database should be empty, or to suggest using `pg_dump --clean --if-exists` when creating the backup to ensure a smooth restoration.

4. If your deployment uses uploaded files stored outside PostgreSQL, restore those separately using the matching volume snapshot or object-storage recovery process.

5. Verify that the app starts cleanly and your data appears as expected.

## Troubleshooting

### View logs
58 changes: 50 additions & 8 deletions self-hosting.mdx
@@ -221,6 +221,20 @@ docker compose up --no-deps -d web worker

The Docker Compose configuration includes an optional backup service that automatically backs up your PostgreSQL database.

<Warning>
The backup service only creates PostgreSQL backups. It does not back up local files stored in Sure's `storage` directory.
If your deployment uses local file storage, back up that directory separately. If you use external object storage such as S3 or R2, make sure that storage is protected with its own backup and retention policy.
</Warning>

### What to back up

For a complete recovery plan, make sure you know which of these apply to your deployment:

- **PostgreSQL database**: accounts, transactions, settings, users, and metadata
- **Local file storage**: uploaded files stored on disk by the app
- **External object storage**: uploaded files stored in S3, R2, or another object store
- **Environment and deployment config**: your `.env`, `compose.yml`, secrets, and any reverse proxy or DNS setup needed to bring the app back online
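
As a sketch of the file-storage half of that plan (all paths here are placeholders; substitute the directories from your own volume mapping), the storage directory can be archived alongside the SQL dumps:

```shell
# Placeholder paths for illustration; match them to your compose volume mapping
STORAGE_DIR=/tmp/sure-demo/storage
BACKUP_DIR=/tmp/sure-demo/backups
mkdir -p "$STORAGE_DIR" "$BACKUP_DIR"
echo "example upload" > "$STORAGE_DIR/receipt.txt"
# Archive the whole storage tree with a dated name, next to the SQL backups
tar -czf "$BACKUP_DIR/storage-$(date +%F).tar.gz" -C "$STORAGE_DIR" .
ls "$BACKUP_DIR"
```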

### Enabling backups

The backup service uses Docker Compose profiles and is disabled by default. To enable it:
@@ -268,28 +282,56 @@ You can use cron syntax or these shortcuts:
- `@monthly` - Once per month
- Custom cron: `0 2 * * *` (2 AM daily)
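
The five fields of a custom cron expression are minute, hour, day-of-month, month, and day-of-week; a tiny sketch reading them off positionally:

```shell
# Split "0 2 * * *" into its five fields:
# minute hour day-of-month month day-of-week
set -- 0 2 '*' '*' '*'
MIN=$1; HOUR=$2; DOM=$3; MON=$4; DOW=$5
echo "minute=$MIN hour=$HOUR"   # runs at 02:00 every day
```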

### Restoring from backup
### Restore from a PostgreSQL backup

To restore your database from a backup:
Use this process when you have a SQL dump created by the backup service or with `pg_dump`.

> [!NOTE]
> If you customized the PostgreSQL username, password, or database name in your `.env` or `compose.yml`, replace `sure_user` and `sure_production` in the commands below.
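
If those names live in your `.env`, one way to avoid typos is to source the file before running the restore commands. A sketch (the variable names `POSTGRES_USER` and `POSTGRES_DB` are assumptions; check your own `.env`):

```shell
# Write a stand-in .env for illustration; in practice point at your real file
cat > /tmp/sure-demo.env <<'EOF'
POSTGRES_USER=sure_user
POSTGRES_DB=sure_production
EOF
# Export everything the file defines, then reuse it in the restore commands
set -a; . /tmp/sure-demo.env; set +a
echo "restoring as $POSTGRES_USER into $POSTGRES_DB"
```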

1. Stop the application containers so they do not write to the database during the restore:

1. Stop the application:
```bash
docker compose down
docker compose stop web worker
```

2. Locate your backup file in the backup directory (e.g., `/opt/sure-data/backups`)
2. Start or keep the database container running:

3. Restore the backup:
```bash
docker compose up -d db
```

3. Locate the backup file in your backup directory, for example `/opt/sure-data/backups`.

4. Restore the SQL backup into PostgreSQL:

```bash
docker compose exec -T db psql -U sure_user -d sure_production < /path/to/backup.sql
```

medium

Similar to the Helm restore process, piping a SQL dump into an existing PostgreSQL database via psql may result in errors if the tables already exist. Consider adding a note advising users to ensure the database is empty, or to use a "clean" dump, to avoid restoration failures.

4. Restart the application:
5. Restart the app:

```bash
docker compose up -d
docker compose up -d web worker
```

### Restore local uploaded files

If your Sure instance stores uploaded files on the local filesystem, restoring the database alone is not enough. You must also restore the app's storage directory from the matching file backup.

The exact host path depends on how you mapped volumes in `compose.yml`. Restore the same directory that Sure uses for local storage, then restart the app containers.
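As a sketch (both paths are placeholders; use the directories from your own volume mapping), restoring the storage tree before restarting looks like:

```shell
# Placeholder paths for illustration only
BACKUP_STORAGE=/tmp/sure-restore-demo/backup/storage
LIVE_STORAGE=/tmp/sure-restore-demo/data/storage
mkdir -p "$BACKUP_STORAGE" "$LIVE_STORAGE"
echo "restored upload" > "$BACKUP_STORAGE/receipt.txt"
# Copy the backed-up tree into the live storage directory, preserving attributes
cp -a "$BACKUP_STORAGE/." "$LIVE_STORAGE/"
cat "$LIVE_STORAGE/receipt.txt"
```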

Comment on lines +312 to +323
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Avoid conflicting restart order in restore flow.

Line 315 restarts web/worker, but Line 322 later says restore local storage then restart containers. For full recovery, this is contradictory and can bring the app up before file state is restored.

Suggested doc adjustment:

````diff
-5. Restart the app:
+5. If you are restoring only PostgreSQL, restart the app:

    ```bash
    docker compose up -d web worker
    ```

 ### Restore local uploaded files

 If your Sure instance stores uploaded files on the local filesystem, restoring the database alone is not enough. You must also restore the app's storage directory from the matching file backup.

-The exact host path depends on how you mapped volumes in compose.yml. Restore the same directory that Sure uses for local storage, then restart the app containers.
+The exact host path depends on how you mapped volumes in compose.yml. Restore the same directory that Sure uses for local storage before starting the app containers.

 If you are using external object storage instead of local disk, restore those files using that provider's backup or versioning workflow instead.
````


<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @self-hosting.mdx around lines 312 - 323, The doc currently restarts
containers (the "docker compose up -d web worker" step under the "Restart the
app" section) before instructing users to restore local uploaded files under the
"Restore local uploaded files" heading, which can bring the app up with missing
files; change the flow and wording so the restore happens first and the restart
happens after: move or edit the sentence that says "Restore the same directory
that Sure uses for local storage, then restart the app containers." to
explicitly instruct restoring the host storage directory before starting the app
containers and remove the earlier immediate restart instruction, ensuring the
"docker compose up -d web worker" command appears only after the restore
guidance.


</details>


If you are using external object storage instead of local disk, restore those files using that provider's backup or versioning workflow instead.

### Verify the restore

After restoring, check the following:

- You can sign in successfully
- Your accounts and transactions appear as expected
- Uploaded files open correctly, if you use uploads
- The web and worker containers start cleanly without repeated errors

### Verifying backups

Check that backups are running correctly: