Canonry tells you whether AI answer engines mention your site for the queries you care about, which competitors they cite instead, and what to fix.
In one run, you can see:
- whether your brand or domain appears in AI-generated answers
- which sources are cited instead of you
- evidence from each answer engine (Gemini, ChatGPT, Claude, Perplexity)
- recommended content or authority gaps to address
npm install -g @ainyc/canonry
canonry init
canonry serve
Open http://localhost:4100/setup. A guided wizard walks you through:
- System check — verifies your instance is ready and providers are configured
- Create project — name, domain, and locale
- Add queries — type them in or auto-generate from your site with AI
- Add competitors (optional) — domains you want to benchmark against
- Launch — triggers your first visibility sweep, then opens the project
Prefer the terminal?
canonry project create my-site --domain example.com
canonry query add my-site "your first query" "second query"
canonry run my-site --wait
canonry evidence my-site
canonry insights my-site
You should now see:
- whether your domain was cited for each query
- which competitors or sources appeared instead
- raw answer evidence per provider
- recommended next actions from the insight engine
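The CLI can also emit JSON (`--format json`) for scripting. A minimal post-processing sketch — the field names here (`results`, `query`, `cited`) are illustrative assumptions, not Canonry's documented schema; check the actual output of `canonry evidence <project> --format json` on your install:

```python
import json

def uncited_queries(evidence: dict) -> list[str]:
    """Return the queries for which our domain was not cited.
    NOTE: the "results"/"query"/"cited" keys are an assumed shape for
    illustration, not Canonry's documented schema."""
    return [r["query"] for r in evidence.get("results", []) if not r.get("cited")]

# Mocked payload standing in for real `canonry evidence` JSON output:
sample = {"results": [
    {"query": "best dental implants near me", "cited": True},
    {"query": "emergency dentist open now", "cited": False},
]}
print(uncited_queries(sample))  # ['emergency dentist open now']
```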
| Problem | Fix |
|---|---|
| No provider key configured | The setup wizard checks this in step 1. Grab a free Gemini key from aistudio.google.com and paste it at /settings, or set GEMINI_API_KEY as an env var and restart canonry serve |
| No results after a run | Sweeps are async. The wizard navigates you to the project page when done — check the Runs tab, or use canonry run <project> --wait from the CLI |
| Not sure what queries to test | The wizard's step 3 can auto-generate queries by analyzing your site with AI. Or start with 3–5 commercial-intent queries your customers would ask an AI assistant |
| npm install fails with node-gyp errors | Install build tools for better-sqlite3: xcode-select --install (macOS), apt-get install python3 make g++ (Debian/Ubuntu) — see the troubleshooting guide |
Configure providers during canonry init, via the web dashboard at /settings, or as environment variables:
| Provider | Key source | Env var |
|---|---|---|
| Gemini | aistudio.google.com/apikey | GEMINI_API_KEY |
| OpenAI | platform.openai.com/api-keys | OPENAI_API_KEY |
| Claude | console.anthropic.com | ANTHROPIC_API_KEY |
| Perplexity | perplexity.ai/settings/api | PERPLEXITY_API_KEY |
| Local LLMs | Any OpenAI-compatible endpoint (Ollama, LM Studio, vLLM) | LOCAL_LLM_URL |
Integration setup guides: Google Search Console | Google Analytics | Bing Webmaster | WordPress
A typical monitoring cycle — manual or agent-driven:
canonry apply canonry.yaml --format json # define projects from YAML specs
canonry run my-project --wait --format json # sweep all providers
canonry evidence my-project --format json # inspect citation evidence
canonry insights my-project --format json # DB-backed insight analysis
canonry health my-project --format json # visibility health snapshot
canonry content targets my-project --format json # ranked content opportunities
Schedule cron-based sweeps and subscribe a webhook for agent-driven workflows:
canonry schedule my-project --cron "0 6 * * *"
canonry notify add my-project --url https://my-agent.example.com/hooks/canonry
apiVersion: canonry/v1
kind: Project
metadata:
name: my-project
spec:
canonicalDomain: example.com
country: US
language: en
queries:
- best dental implants near me
- emergency dentist open now
competitors:
- competitor.com
providers:
- gemini
- openai
- claude
- perplexity
canonry apply canonry.yaml
canonry apply project-a.yaml project-b.yaml
Canonry is an agent-first AEO operating platform. AEO (Answer Engine Optimization) is about ensuring your content appears accurately in AI-generated answers. As search shifts from links to synthesized responses, you need tooling that monitors, analyzes, and acts across engines continuously.
Canonry ships a built-in AI agent — Aero — that reads project state, analyzes regressions, acts through a typed tool surface, and wakes up unprompted when runs complete. Users who prefer their own agent (Claude Code, Codex, custom) consume Canonry through the same CLI/API surface.
# One-shot turn — Aero picks the right tools and analyzes on its own.
canonry agent ask my-project "Why did the last run fail? Recommend a fix."
# Pick a specific LLM:
ANTHROPIC_API_KEY=... canonry agent ask my-project "…" --provider anthropic
ZAI_API_KEY=... canonry agent ask my-project "…" --provider zai
Aero uses whichever LLM has an API key configured in ~/.canonry/config.yaml or exported as an env var. Conversations persist across invocations per project. Aero also wakes up unprompted after each run completes — analyzing the new data and writing the result back to the project's transcript.
Canonry's CLI and API are the agent interface. Every command supports --format json; every dashboard view has a matching API endpoint.
- Monitor visibility sweeps across providers on a schedule, tracking citation changes over time
- Analyze regressions, emerging opportunities, and correlations with site changes
- Coordinate fixes across content, schema markup, indexing submissions, and llms.txt
- Report results in a machine-readable form agents can act on
- Multi-provider. Query Gemini, OpenAI, Claude, Perplexity, and local LLMs from a single platform.
- Content opportunity engine. Per-query recommendations typed by action (create/expand/refresh/add-schema) with auditable score breakdowns, drivers, and demand-source labels. Combines GSC ranking signals with competitor citation evidence.
- Config-as-code. Kubernetes-style YAML files. Version control your monitoring, let agents apply changes declaratively.
- Self-hosted. Runs locally with SQLite. No cloud account required.
- Integrations. Google Search Console, Google Analytics 4, Bing Webmaster Tools, WordPress.
- Backlinks (Common Crawl). Workspace-level release sync via DuckDB, per-project inbound-link extraction.
- Location-aware. Project-scoped locations for geo-targeted monitoring.
- Scheduled monitoring. Cron-based recurring runs with webhook notifications.
For MCP clients like Claude Desktop, Cursor, Codex, or custom shells that prefer a typed tool catalog over shell commands, Canonry ships a stdio adapter:
canonry mcp install --client claude-desktop # or: cursor
canonry mcp install --client claude-desktop --read-only # 45 read API tools only
canonry mcp config --client codex # print snippet for unsupported clients
The install command merges a canonry entry into the client's config, backs up the original, and is idempotent. Restart the client after install. The adapter exposes 67 API tools — projects, runs, snapshots, insights, health, query and competitor management, schedules, GSC and GA reads, and the config-as-code apply path. See docs/mcp.md for the full surface.
Wire a webhook for run/insight events if you prefer your own agent (Claude Code, Codex, custom):
canonry agent attach my-project --url https://my-agent.example.com/hooks/canonry
Your agent receives run.completed, insight.critical, insight.high, and citation.gained notifications. Detach with canonry agent detach my-project.
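A minimal receiver for those events can be sketched as follows. The event names (run.completed, insight.critical, insight.high, citation.gained) come from the docs above; the payload envelope with a top-level `"event"` key is an assumption — verify it against a real delivery (the notifications API includes a test endpoint):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def route_event(payload: dict) -> str:
    """Map a Canonry webhook event to a handler name.
    The {"event": ...} envelope is an assumed shape for illustration."""
    handlers = {
        "run.completed": "analyze_new_run",
        "insight.critical": "page_oncall",
        "insight.high": "queue_review",
        "citation.gained": "log_win",
    }
    return handlers.get(payload.get("event"), "ignore")

class CanonryHook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        action = route_event(json.loads(body or b"{}"))
        self.send_response(204)  # acknowledge so Canonry doesn't retry
        self.end_headers()
        self.log_message("routed event to %s", action)

# To serve: HTTPServer(("", 8080), CanonryHook).serve_forever()
```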
All endpoints under /api/v1/. Authenticate with Authorization: Bearer cnry_....
The canonical surface is served at GET /api/v1/openapi.json (no auth required).
Every dashboard view has a matching API endpoint and CLI command. The surface is grouped by domain:
| Domain | What it covers | Highlights |
|---|---|---|
| Projects | Create, read, update, delete projects; locations; export | PUT /projects/{name}, GET /projects, GET /projects/{name}/export |
| Apply | Config-as-code — declarative multi-project upsert | POST /apply |
| Queries / Competitors | Per-project query and competitor management | POST/DELETE /projects/{name}/queries, /competitors |
| Runs | Trigger, list, cancel, and inspect visibility sweeps | POST /projects/{name}/runs, GET /runs, POST /runs/{id}/cancel |
| Schedules | Cron-based recurring sweeps | GET/PUT /projects/{name}/schedule |
| History / Snapshots | Timeline + run diffs + per-query citation state | GET /projects/{name}/timeline, /snapshots/diff, /history |
| Intelligence | DB-backed insights + health snapshots + dismissal | GET /projects/{name}/insights, /health, POST /insights/{id}/dismiss |
| Content | Action-typed content opportunities, gaps, and grounding-source map | GET /projects/{name}/content/targets, /gaps, /sources |
| Notifications | Webhook subscriptions per project (agent or user-defined) | GET/POST/DELETE /projects/{name}/notifications, POST /.../test |
| Analytics | Aggregated dashboard analytics | GET /projects/{name}/analytics |
| Google (GSC + OAuth) | Search Console integration, OAuth flow, property selection, URL inspection | /google/*, /projects/{name}/google/* |
| Google Analytics (GA4) | Traffic, social referrals, attribution, AI referrals | /projects/{name}/ga/* |
| Bing Webmaster | Coverage, URL inspection, keyword stats | /projects/{name}/bing/* |
| WordPress | Content publishing + site management integration | /projects/{name}/wordpress/* |
| CDP (ChatGPT browser provider) | Chrome DevTools Protocol health and session status | /cdp/* |
| Settings / Auth / Telemetry | Server config, API key management, opt-in telemetry | /settings, /telemetry |
| OpenAPI | Full spec | GET /openapi.json (no auth) |
For the complete list of ~118 endpoints with request/response schemas, query GET /api/v1/openapi.json or browse the per-domain route handlers under packages/api-routes/src/.
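Calling the API from a script only takes the base path and the Bearer token. A sketch using the stdlib — the /api/v1 prefix, the GET /projects route, and the `cnry_...` token format are from the docs above; the localhost base URL is an assumption for a default local install:

```python
import urllib.request

BASE = "http://localhost:4100/api/v1"  # adjust for your deployment

def api_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request for the Canonry API."""
    return urllib.request.Request(
        f"{BASE}{path}",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )

req = api_request("/projects", "cnry_example_token")
print(req.full_url)                     # http://localhost:4100/api/v1/projects
print(req.get_header("Authorization"))  # Bearer cnry_example_token

# To execute against a live server:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```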
Canonry ships a bundled canonry-setup skill that turns Aero (or any Claude-powered agent) into an AEO/SEO operator. Claude Code picks it up automatically from .claude/skills/canonry-setup/ when you open this repo; the same content lives under skills/canonry-setup/ for portable use with other harnesses.
The skill covers the end-to-end answer-engine optimization loop:
- AEO monitoring. Running citation sweeps via canonry run / canonry evidence / canonry status, including per-query citation state and regressions.
- Technical SEO audits. Driving the companion @ainyc/aeo-audit CLI for 14-factor scoring — structured data (JSON-LD), content depth, AI-readable files (llms.txt, llms-full.txt), E-E-A-T signals, FAQ blocks, definition blocks, H1/alt/meta hygiene.
- Indexing diagnosis. Google Search Console and Bing Webmaster Tools coverage, URL inspection, and one-shot submissions via canonry google request-indexing / canonry bing request-indexing.
- Schema & content execution. Patterns for injecting LocalBusiness/FAQPage JSON-LD, writing llms.txt with service-area detail, trimming query lists to high-intent queries, and handling WordPress/Elementor specifics.
- Diagnose → prioritize → execute → monitor → report workflow. Opinionated defaults for new sites (0 citations), regressions on established sites, and county-level targeting — with guardrails like "never fabricate citation data" and "back up ~/.canonry/config.yaml before editing".
See skills/canonry-setup/SKILL.md plus the reference files under skills/canonry-setup/references/ for the full playbook.
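The llms.txt files the skill writes follow the emerging llms.txt convention: an H1 with the site name, a blockquote summary, then H2 sections of annotated links. An illustrative sketch — every name, URL, and service detail below is a placeholder, not output from the skill:

```markdown
# Example Dental Practice

> Family and implant dentistry serving Example County and surrounding towns.

## Services
- [Dental implants](https://example.com/implants): Procedure overview, pricing, recovery
- [Emergency dentistry](https://example.com/emergency): Same-day appointments, after-hours line

## About
- [Our team](https://example.com/team): Dentist bios and credentials
```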
See docs/deployment.md for local, reverse proxy, sub-path, Tailscale, systemd, and Docker guides.
docker build -t canonry .
docker run --rm -p 4100:4100 -e GEMINI_API_KEY=your-key -v canonry-data:/data canonry
Published images: Docker Hub | GHCR
Click deploy, add a volume at /data, generate a domain. No env vars required to start. Configure providers via the dashboard.
Create a Web Service with runtime Docker, attach a disk at /data. Health check: /health.
- Node.js >= 22.14.0
- At least one provider API key (configurable after startup)
git clone https://github.com/ainyc/canonry.git
cd canonry
pnpm install
pnpm run typecheck && pnpm run test && pnpm run lint
See docs/README.md for the full architecture, roadmap, ADR index, and doc map.
See CONTRIBUTING.md.
FSL-1.1-ALv2. Free to use, modify, and self-host. Each version converts to Apache 2.0 after two years.
Built by AI NYC
