Summary
An external dogfooder (me, running a terminal-session dogfood of the full claim → attest → verify → anchor state machine on 2026-04-17) hit a hard stop at the first health check: `curl localhost:8351/health` returned HTTP 000, and there is no publicly documented path to bring the backend up on a fresh workstation.
This is a bootstrap-docs gap, not a stack gap. The pipeline itself is excellent; the onboarding surface is where I got stuck.
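For reference, the failed first check, sketched as a one-liner (port 8351 is the backend port the dogfood prompt assumes):

```shell
# "000" is curl's placeholder status when no TCP connection could be made at
# all, i.e. nothing is listening on the port -- as opposed to a 4xx/5xx from a
# live but unhappy server.
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 3 http://localhost:8351/health || true)
echo "health check returned: $code"   # expect 200 once the backend is up
```

The `--max-time 3` keeps the check from hanging on a half-configured firewall; the `|| true` lets you capture the code instead of aborting on curl's nonzero exit.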
Context
I was running the dogfood-Run-1 prompt (cross-repo, regen-network/claims-engine workspace). The prompt:
- Expects `localhost:8351` reachable
- Expects `regen` CLI on `$PATH` with a funded `claims-service` key in the `test` keyring
- Falls back from MCP tools to `curl` cleanly
- Has a Stop Condition 1 that fires if the backend won't start
On a non-Darren workstation (macOS, Darwin 25.3.0):
- Port 8351: no listener
- `~/.config/personal-koi/start.sh`: does not exist (`DarrenZal/personal-koi-mcp` `docs/LOCAL_STACK_RUNBOOK.md` references this script, but it lives in a private config directory, not in the repo)
- `regen keys list --keyring-backend test`: empty (no `claims-service` key, no documented path to create and fund one)
- regen-prod README: has operational usage docs, but no "from fresh machine to first health check" walkthrough
Stop Condition 1 fires. The backend is runnable by exactly one person today.
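The checks above can be consolidated into a preflight sketch; the helper names are mine, not anything in the repo, and each check prints a line instead of aborting so you see every gap at once:

```shell
# Hypothetical preflight for the dogfood prompt's assumptions.
have()      { command -v "$1" >/dev/null 2>&1; }                 # CLI on $PATH?
listening() { (exec 3<>"/dev/tcp/localhost/$1") 2>/dev/null; }   # TCP listener? (bash-only)

listening 8351 || echo "MISSING: no listener on localhost:8351"
have regen     || echo "MISSING: regen CLI not on \$PATH"
[ -x "$HOME/.config/personal-koi/start.sh" ] || echo "MISSING: ~/.config/personal-koi/start.sh"
have regen && regen keys list --keyring-backend test | grep -q claims-service \
  || echo "MISSING: claims-service key in the test keyring"
```

On the workstation described above, all four lines print, which is exactly the state Stop Condition 1 is written for.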
What's missing from regen-prod at the repo level
Grouped by severity for external reproducibility:
- No top-level `BOOTSTRAP.md` (or Installation section of the README) that goes from fresh Ubuntu/macOS to `curl localhost:8351/health` returning 200. The existing README has great operational content (federation, graph traversal, phase descriptions), but the "first 30 minutes" path isn't there.
- No `docker-compose.yml` for the main stack. `docker-compose.terminusdb.yml` exists for the TerminusDB sub-stack; extending the same pattern to cover Postgres + pgvector + the BGE embedding service + the FastAPI container would be mechanical for someone who knows the stack, and would remove the biggest source of install variance.
- Migration order and target DB are implicit. `migrations/` has the SQL; the order, the target DB name (`personal_koi` vs `eliza`), and which migrations are required vs optional for a claims-only deploy are not called out.
- `claims-service` keyring provisioning. The V1 dogfood results reference this key as existing; there's no documented procedure to create it, fund it via the Vitwit faucet, and verify it from a cold workstation.
- DB identity is under-documented. Production uses `eliza` with UPPERCASE entity types; personal deploys use `personal_koi` with mixed-case. Making this an explicit env var (e.g. `KOI_DB_NAME`, `KOI_ENTITY_TYPE_CASE`) with a documented default would prevent confusing search-hit-zero bugs.
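A minimal sketch of the env-var idea from the last bullet; the variable names and defaults are the ones proposed above, not anything the repo currently reads:

```shell
# Proposed (not yet existing) knobs, with personal-deploy defaults.
# A production deploy would export KOI_DB_NAME=eliza KOI_ENTITY_TYPE_CASE=upper.
KOI_DB_NAME="${KOI_DB_NAME:-personal_koi}"
KOI_ENTITY_TYPE_CASE="${KOI_ENTITY_TYPE_CASE:-mixed}"
echo "db=$KOI_DB_NAME entity_case=$KOI_ENTITY_TYPE_CASE"
```

The point is less the mechanism than the documented default: a fresh install should work with no env vars set, and a zero-hit search should be diagnosable by printing two values.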
Suggested fix outlines
(Happy to take any or all of these as separate PRs or issues if that's easier to land — just signaling direction, not prescribing.)
- A. Add `BOOTSTRAP.md` that walks from zero to health-check-200 with concrete commands.
- B. Add `docker-compose.yml` covering Postgres + pgvector + the BGE embedding server + FastAPI.
- C. Add `scripts/bootstrap-claims-dogfood.sh` that (i) writes `.env` from `.env.example`, (ii) runs claims-relevant migrations in order, (iii) creates and funds the `claims-service` keyring entry interactively, (iv) starts the stack, (v) runs the existing smoke tests.
- D. Either move `~/.config/personal-koi/start.sh` into the koi-processor repo as `scripts/start-backend.sh` (parameterized for install path) or coordinate with `DarrenZal/personal-koi-mcp` to publish it there; currently the runbook points at a script that lives outside any repo.
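To make outline C concrete, here is the shape I have in mind for `scripts/bootstrap-claims-dogfood.sh`. Every path, DB name, and faucet detail is an assumption to be corrected by someone who knows the stack; this is a sketch, not a working script:

```shell
#!/usr/bin/env bash
# Sketch only: paths, DB name, and the faucet step are assumptions.
set -eu

write_env() {                     # (i) seed .env from the example, never clobber
  [ -f .env ] || cp .env.example .env
}

run_migrations() {                # (ii) apply SQL in lexical order; DB name assumed
  for f in migrations/*.sql; do
    psql "${KOI_DB_NAME:-personal_koi}" -f "$f"
  done
}

provision_key() {                 # (iii) create + fund claims-service interactively
  regen keys list --keyring-backend test | grep -q claims-service \
    || regen keys add claims-service --keyring-backend test
  echo "Fund the key via the Vitwit faucet, then press enter"; read -r _
}

main() {
  write_env
  run_migrations
  provision_key
  "$HOME/.config/personal-koi/start.sh"   # (iv) start the stack (current script location)
  curl -fsS http://localhost:8351/health  # (v) smoke check: health must return 200
}

if [ "${1:-}" = "--run" ]; then main; fi  # safe to source without side effects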
Why this matters beyond one dogfood session
- External reviewers of the Ethereum interop grant deliverable will hit the same wall if they try to verify claims end-to-end.
- The 2026 Q2 "dogfood the pipeline" GTM narrative requires that anyone outside the core team can run it.
- For the published `claims-engine-dogfood-results.md` artifact to be a reproducible piece of evidence (rather than a "you had to be there" artifact), the install path has to be written down.
What I did instead
Wrote the blocker up in the workspace, captured the Step 4 judge rubric off-backend (the judgment layer turns out to be nicely decoupled from storage — that was the most valuable positive finding), and am proceeding with a dry-rubric Run 2 while the backend story is resolved.
Happy to be pointed at docs that already exist and that I missed — filing this so the gap is tracked rather than rediscovered by the next dogfooder.
Filed via the claims-engine dogfood workstream. Context in our internal workspace at workspaces/claims-engine/data/dogfood-2026-04/dogfood-bootstrap-blocker.md (sensitive, not published).