Doc Tools MCP Server #157
base: main
Conversation
- Add development guides for extensions and macros
- Add user guides with examples and usage patterns
- Add reference documentation for APIs
- Add README files for both components
- Implement MCP server with tool handlers
- Add comprehensive test suite for MCP functionality
- Add MCP tools directory with server utilities
- Enhance doc-tools-mcp.js with improved tool handlers
- Update doc-tools.js with new command routing
- Refactor setup-mcp.js for better initialization
- Add script to generate CLI reference documentation
- Add generated CLI reference in AsciiDoc format
- Simplify main README
- Reference new documentation files for details
Documentation moved to mcp/README.adoc
Previously, the MCP server only worked when run from the docs-extensions-and-macros source repository by checking for bin/doc-tools.js. This prevented using the MCP server from documentation repositories that have doc-tools installed as a dependency. Now the validation:

1. First checks if we're in the source repo (bin/doc-tools.js exists)
2. Falls back to checking if doc-tools is available via npx

This allows the MCP server to work from any repository that has doc-tools installed or available, while still working from the source repository for development.
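A minimal sketch of what that two-step check could look like (hypothetical helper for illustration, not the exact code in this PR):

```js
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');

// Hypothetical sketch of the validation described above: prefer the local
// source checkout, then fall back to a doc-tools binary reachable via npx.
function canRunDocTools(repoRoot) {
  // 1. Source repo: bin/doc-tools.js exists in the checkout.
  if (fs.existsSync(path.join(repoRoot, 'bin', 'doc-tools.js'))) {
    return true;
  }
  // 2. Dependency case: doc-tools resolves through npx.
  try {
    execSync('npx doc-tools --version', { stdio: 'ignore', timeout: 30000 });
    return true;
  } catch {
    return false;
  }
}
```

Keeping the npx probe behind the source-repo check means development checkouts never pay the cost of spawning npx.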
✅ Deploy Preview for docs-extensions-and-macros ready!
To edit notification comments on pull requests, go to your Netlify project configuration.
📝 Walkthrough

This pull request introduces a comprehensive Model Context Protocol (MCP) server integration for the Redpanda documentation tooling ecosystem. The changes add a new MCP server entrypoint (bin/doc-tools-mcp.js) along with supporting tool modules, a job queue, tests, and documentation.

Sequence Diagram(s)

sequenceDiagram
actor User
participant Claude as Claude Code<br/>(MCP Client)
participant MCPServer as bin/doc-tools-mcp.js<br/>(MCP Server)
participant ToolModule as bin/mcp-tools/<br/>Tool Modules
participant JobQueue as Job Queue<br/>Management
participant External as External<br/>Tools/APIs
User->>Claude: Request documentation generation
Claude->>MCPServer: Call MCP Tool (e.g., generate_property_docs)
MCPServer->>MCPServer: Validate request via<br/>CallToolRequestSchema
alt Async/Background Job
MCPServer->>JobQueue: createJob(toolName, command)
JobQueue->>MCPServer: Return job_id
MCPServer->>Claude: Respond with job_id
JobQueue->>ToolModule: executeJob (async)
ToolModule->>External: spawnSync/execSync<br/>(npx doc-tools ...)
External-->>ToolModule: stdout/stderr/exitCode
ToolModule->>JobQueue: updateJobProgress
JobQueue->>MCPServer: Send MCP notification
MCPServer->>Claude: Progress update
else Sync Tool Execution
MCPServer->>ToolModule: executeTool(toolName, args)
ToolModule->>External: Execute command/file ops
External-->>ToolModule: Result/Error
ToolModule-->>MCPServer: Structured response
MCPServer-->>Claude: Tool result JSON
end
Claude->>User: Display generated docs/results
sequenceDiagram
participant CLI as User/CI
participant DocToolsCLI as bin/doc-tools.js
participant ToolImpl as bin/mcp-tools/<br/>Tool Implementation
participant Repo as Repository
participant External as External<br/>Services
CLI->>DocToolsCLI: npx doc-tools [command] [options]
DocToolsCLI->>DocToolsCLI: Parse command & options
DocToolsCLI->>ToolImpl: Delegate to tool module<br/>(e.g., property-docs.js)
ToolImpl->>Repo: findRepoRoot()
Repo-->>ToolImpl: repoRoot path
ToolImpl->>Repo: getAntoraStructure(repoRoot)
Repo-->>ToolImpl: Antora metadata
ToolImpl->>ToolImpl: Validate doc-tools availability
alt Success Path
ToolImpl->>External: spawnSync/execSync<br/>npx doc-tools generate [type]
External-->>ToolImpl: Command output
ToolImpl->>ToolImpl: Parse output for metrics/counts
ToolImpl-->>DocToolsCLI: Structured result<br/>{success, files, data}
else Error Path
ToolImpl-->>DocToolsCLI: Error result<br/>{success: false, message, suggestion}
end
DocToolsCLI-->>CLI: Exit code + output
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~90+ minutes

Key areas requiring detailed review:
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
✨ Finishing touches
Actionable comments posted: 12
🧹 Nitpick comments (28)
macros/DEVELOPMENT.adoc (1)
274-289: HTML escaping implementation is incomplete. The escapeHtml function shown in lines 278-285 only covers 5 common entities but omits others like \n, \r, etc. that could be relevant. However, for inline macro usage, this is probably sufficient. Consider cross-referencing with OWASP XSS prevention guidelines if security is a priority.

For completeness, consider referencing a well-tested HTML escaping library rather than a manual implementation:

// Alternative: Use a standard library
const escapeHtml = require('escape-html');

Verify that escape-html or similar is available in project dependencies if you want to recommend this approach.

CONTRIBUTING.adoc (1)
119-130: Clarify path example; it might confuse Windows users. Line 125 shows an absolute path example (/absolute/path/to/docs-extensions-and-macros/...), which may be unclear for Windows developers. Consider either:
- Adding platform-specific notes
- Showing relative path examples
- Documenting how to convert absolute paths per platform
This is minor since the setup can use npm install to avoid this complexity.

bin/mcp-tools/utils.js (1)
61-63: Consider adding defensive type checking. The normalizeVersion function could be more robust by checking if the input is a string before calling startsWith. While the current usage may guarantee string inputs, defensive coding would prevent potential runtime errors if the function is called with unexpected input.

 function normalizeVersion(version) {
+  if (typeof version !== 'string') {
+    throw new TypeError('version must be a string');
+  }
   return version.startsWith('v') ? version.substring(1) : version;
 }

bin/mcp-tools/versions.js (2)
22-35: Add defensive parsing for version information. The parsing logic uses split('=')[1], which could return undefined if the line doesn't contain an '=' character or has an unexpected format. Additionally, there's no validation that the extracted version is not an empty string.

Apply this diff to add defensive checks:

 // Parse the output (format: REDPANDA_VERSION=vX.Y.Z\nREDPANDA_DOCKER_REPO=redpanda)
 const lines = output.trim().split('\n');
 const versionLine = lines.find(l => l.startsWith('REDPANDA_VERSION='));
 const dockerLine = lines.find(l => l.startsWith('REDPANDA_DOCKER_REPO='));

 if (!versionLine) {
   return {
     success: false,
     error: 'Failed to parse version from output'
   };
 }

-const version = versionLine.split('=')[1];
-const dockerRepo = dockerLine ? dockerLine.split('=')[1] : 'redpanda';
+const version = versionLine.split('=')[1]?.trim();
+if (!version) {
+  return {
+    success: false,
+    error: 'Version string is empty'
+  };
+}
+const dockerRepo = dockerLine?.split('=')[1]?.trim() || 'redpanda';
65-78: Add defensive parsing for Console version information. Same parsing issue as in getRedpandaVersion: the split('=')[1] operation could return undefined if the format is unexpected, and there's no validation for empty strings.

Apply this diff to add defensive checks:

 // Parse the output (format: CONSOLE_VERSION=vX.Y.Z\nCONSOLE_DOCKER_REPO=console)
 const lines = output.trim().split('\n');
 const versionLine = lines.find(l => l.startsWith('CONSOLE_VERSION='));
 const dockerLine = lines.find(l => l.startsWith('CONSOLE_DOCKER_REPO='));

 if (!versionLine) {
   return {
     success: false,
     error: 'Failed to parse version from output'
   };
 }

-const version = versionLine.split('=')[1];
-const dockerRepo = dockerLine ? dockerLine.split('=')[1] : 'console';
+const version = versionLine.split('=')[1]?.trim();
+if (!version) {
+  return {
+    success: false,
+    error: 'Version string is empty'
+  };
+}
+const dockerRepo = dockerLine?.split('=')[1]?.trim() || 'console';

bin/mcp-tools/review.js (7)
44-90: Avoid duplicating scoring logic betweenreviewGeneratedDocsandgenerateReviewReportYou recompute deductions from
issuesintorunningScore, but the final line usesquality_scorefrom the input. This can drift from the real scoring rules inreviewGeneratedDocs(for example,Invalid $referrors are treated as −5 there but show as −3 here, and “other warnings” get an extra capped deduction here). Consider either:
- deriving
quality_scoreexclusively from a single shared scoring helper, or- using this section to describe the scoring heuristics conceptually without recomputing detailed numbers.
That keeps the explanation aligned with the actual score and avoids subtle mismatches when rules evolve.
61-76:missingDescriptionsscoring vs reported warning count can divergeIn the properties path,
missingDescriptionsis incremented for all properties (including deprecated ones), but “Missing description” warnings are only added for non-deprecated properties. The quality score deduction usesmissingDescriptions, while this breakdown derivesmissingDescCountfrom warning issues only. This means the score can be penalized for missing descriptions on deprecated properties that don’t appear in the warnings list, which may confuse readers comparing counts.Either exclude deprecated properties from the
missingDescriptionscounter or base the deduction strictly on the warning issues you surface.
210-220: Clean up unusedsteplogic in “Next Steps” sectionThe
stepvariable is computed but not actually used to vary the output (both branches of the ternary emit the same'. 'prefix), so it adds complexity without effect. You can simplify this block by:
- removing
stepentirely and just emitting ordered-list lines, or- if you intended conditional numbering, wiring
stepinto the emitted text.Right now it’s effectively dead logic.
303-337: UnusedmissingExamplescounter and duplicateshouldHaveExampleloopWithin the properties branch you:
- increment
missingExamplesin the firstallProperties.forEachfor properties that “should have examples”, and- immediately do a second
allProperties.forEachusingshouldHaveExampleagain to buildpropertiesNeedingExamples.Only
propertiesNeedingExamplesis used later;missingExamplesis never read. You can dropmissingExamplesand fold the two passes into one to simplify the flow and avoid confusion about which count is authoritative.
398-427:invalidXrefPatternis quite broad; consider tightening to avoid false positivesThe regex
/xref:\.\/|xref:(?![\w-]+:)/gwill flag:
- any
xref:./...(intended), but also- some valid Antora IDs that don’t strictly match
[\w-]+in the component part (for example, components or modules with dots or slashes in the ID).If you rely on this for “invalid xref” scoring, consider tightening it (or making it configurable) so that valid-but-unusual Antora IDs aren’t penalized. Even a short comment describing the expected ID shape here would help future maintainers.
462-555:rpcn_connectorsscoring explanation may disagree with actual penaltiesFor the
rpcn_connectorsdoc_type:
- Invalid
$refentries reducequalityScoreby 5 each.- Duplicate descriptions reduce
qualityScoreby 3 each.But in
generateReviewReportall errors default to a 3–5 point deduction based on message text, and “other warnings” get an additional capped deduction that isn’t applied here. This can make the “Scoring Breakdown” section misrepresent how the score was actually computed for connector docs.If you keep doc-type-specific scoring in
reviewGeneratedDocs, you might:
- omit the fine-grained error deduction math from the report for non-
propertiestypes, or- pass an explicit breakdown structure alongside
quality_scoreinstead of recomputing it heuristically.
611-620:issues/suggestionsassumed to be arrays ingenerateReviewReport
generateReviewReportcalls.filteronissuesand iteratessuggestions, but doesn’t guard againstundefinedor non-array inputs. All current call sites populate them, but this function is exported and could be reused elsewhere.To make it more robust to misuse (and easier to unit test in isolation), consider normalizing:
const issues = Array.isArray(results.issues) ? results.issues : []; const suggestions = Array.isArray(results.suggestions) ? results.suggestions : [];before computing the report.
bin/mcp-tools/job-queue.js (2)
71-101:parseCommandis minimal; document limitations or constrain usageThe custom
parseCommandhandles only spaces and simple quoted arguments; it doesn’t support escaping, nested quotes, or platform-specific path quirks. Since you already prefer the array form ([executable, ...args]), it would help to:
- clearly document that string commands are best-effort and should only be used for simple cases, and/or
- log or surface a warning when falling back to the string parsing path to discourage new use.
That reduces the risk of subtle argument mis-parsing later.
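As a rough illustration of the warn-on-fallback option (hypothetical wrapper, assuming the existing parseCommand and the array-first convention):

```js
// Hypothetical wrapper: prefer the [executable, ...args] form and warn when
// the best-effort string parser is used as a fallback.
function normalizeCommand(command) {
  if (Array.isArray(command)) {
    return command; // preferred form, no parsing needed
  }
  console.warn(
    '[job-queue] String commands are parsed best-effort; ' +
    'prefer the [executable, ...args] form.'
  );
  return parseCommand(command); // existing simple space/quote parser
}
```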
423-427: Unref the cleanup interval when used in long-running processes with testsThe periodic
setIntervalis appropriate for the MCP server, but when this module is loaded in short-lived scripts or Jest tests it can keep the Node process alive longer than needed. Consider calling.unref()on the interval handle or wiring interval setup through an explicitstartJobCleanup()call invoked only in the server entrypoint.That gives you the same cleanup behaviour without surprising test/CLI lifecycles.
bin/mcp-tools/helm-docs.js (1)
35-56: Consider using proper shell escaping for command arguments.The command construction uses double-quote wrapping for user-supplied arguments, which could be vulnerable to command injection if inputs contain quotes or shell metacharacters. While the MCP server context is relatively controlled, it's a best practice to use proper shell escaping.
Consider using a library like
shell-quoteor constructing the command as an array to avoid shell injection:const { execFileSync } = require('child_process'); // Then build args array instead const args = ['generate', 'helm-spec']; if (args.chart_dir) args.push('--chart-dir', args.chart_dir); if (args.tag) args.push('--tag', normalizeVersion(args.tag)); // ... etc const output = execFileSync('npx', ['doc-tools', ...args], { cwd: repoRoot.root, encoding: 'utf8', maxBuffer: MAX_EXEC_BUFFER_SIZE, timeout: DEFAULT_COMMAND_TIMEOUT });Or at minimum, escape special shell characters in the input values before constructing the command string.
mcp/prompts/property-docs-guide.md (1)
121-147: Add language specifiers to fenced code blocks.The code blocks at lines 121, 128, 138, and 145 are missing language specifiers. Adding
asciidocidentifiers would improve syntax highlighting and make the examples clearer.Apply these changes:
Wrong: -``` +```asciidoc ifdef::env-cloud[] This property is read-only in Redpanda Cloud. endif::[]Right:
-+asciidoc
Controls the maximum segment size for topics.... Wrong: -``` +```asciidoc Enable shadow linking for disaster recovery. include::reference:partial$enterprise-licensed-property.adoc[]Right:
-+asciidoc
Enable shadow linking for disaster recovery.bin/mcp-tools/cloud-regions.js (1)
38-70: Consider using proper shell escaping for command arguments.Similar to the issue in
bin/mcp-tools/helm-docs.js, this command construction is vulnerable to command injection if user-supplied arguments (output, path, template) contain quotes or shell metacharacters.Consider using
execFileSyncwith an arguments array instead:const { execFileSync } = require('child_process'); const args = ['generate', 'cloud-regions']; if (args.output) args.push('--output', args.output); if (args.format) args.push('--format', args.format); if (args.owner) args.push('--owner', args.owner); if (args.repo) args.push('--repo', args.repo); if (args.path) args.push('--path', args.path); if (args.ref) args.push('--ref', args.ref); if (args.template) args.push('--template', args.template); if (args.dry_run) args.push('--dry-run'); const output = execFileSync('npx', ['doc-tools', ...args], { cwd: repoRoot.root, encoding: 'utf8', maxBuffer: MAX_EXEC_BUFFER_SIZE, timeout: DEFAULT_COMMAND_TIMEOUT });bin/mcp-tools/rpk-docs.js (1)
36-42: Version parameter needs proper sanitization or escaping.The version parameter is added directly to the command string without proper escaping. While the normalization adds a 'v' prefix, it doesn't prevent command injection if the version contains shell metacharacters.
Consider validating that the version matches expected format before using it:
try { // Normalize version let version = args.version; if (!version.startsWith('v')) { version = `v${version}`; } + + // Validate version format + if (!/^v\d+\.\d+\.\d+(-[a-zA-Z0-9.]+)?$/.test(version)) { + return { + success: false, + error: 'Invalid version format', + suggestion: 'Version must be in format: v1.2.3 or v1.2.3-beta' + }; + } const cmd = `npx doc-tools generate rpk-docs --tag ${version}`;Or use
execFileSyncwith an arguments array as suggested in other files..github/workflows/test-mcp.yml (1)
40-47: Update the test reporter action to the latest version.The workflow uses
dorny/test-reporter@v1, but the latest stable version is v2.2.0. Consider updating to benefit from bug fixes and improvements:uses: dorny/test-reporter@v2bin/mcp-tools/rpcn-docs.js (1)
5-5: Remove unusedexecSyncimport.The function uses
spawnSync(correctly), butexecSyncis imported and never used.Apply this diff:
-const { execSync, spawnSync } = require('child_process'); +const { spawnSync } = require('child_process');bin/mcp-tools/metrics-docs.js (1)
5-5: Remove unusedexecSyncimport.The function correctly uses
spawnSync, butexecSyncis imported and never used.Apply this diff:
-const { execSync, spawnSync } = require('child_process'); +const { spawnSync } = require('child_process');bin/doc-tools-mcp.js (1)
463-525: Consider hardeningexecuteToolerror handling
CallToolRequestSchemadelegates all non‑job tools toexecuteTool(name, args || {})without a guard. IfexecuteToolever throws (e.g., unexpected fs/child_process error), the whole MCP server will crash.Consider wrapping this call in a
try/catchand returning a structured error payload instead, similar to the job error path:- // Handle regular tools - const result = executeTool(name, args || {}); + // Handle regular tools + let result; + try { + result = executeTool(name, args || {}); + } catch (err) { + result = { + success: false, + error: err.message || String(err) + }; + }This keeps the server resilient to unexpected tool failures without changing the normal success path.
tools/generate-cli-docs.js (1)
33-66: Path sanitization is good; consider OS‑agnostic matching as a future polishThe
sanitizePathInDescriptionlogic correctly normalizes repo‑root and home‑directory paths for help output. ThelongPathPatternregex, however, assumes/separators and the literaldocs-extensions-and-macrossegment, so on Windows it may never match the absolute paths you want to rewrite.Not urgent, but if you ever need fully cross‑platform sanitization, consider:
- Building a pattern from
repoRootusingpath.seprather than hard‑coding/.- Or normalizing any candidate path to POSIX style before applying the regex.
Functionality today is fine; this would just make the sanitizer more robust on Windows.
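A sketch of that OS-agnostic matching (hedged; buildLongPathPattern is a hypothetical helper name, and the real pattern's shape may differ):

```js
const path = require('path');

// Hypothetical sketch: build the long-path pattern from path.sep instead of
// hard-coding '/', so absolute Windows paths are rewritten as well.
function buildLongPathPattern(repoRoot) {
  // Escape every regex metacharacter in the root path (covers '\' on Windows).
  const escapedRoot = repoRoot.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const sep = path.sep === '\\' ? '\\\\' : '/';
  return new RegExp(`${escapedRoot}(?:${sep}[\\w.-]+)*`, 'g');
}
```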
__tests__/mcp/doc-tools-mcp.test.js (1)
34-193: Avoid re‑implementingvalidateDocToolsCommandin tests; exercise the real validator insteadThe test suite defines its own
validateDocToolsCommand()that mirrors the internal implementation and then validatesdangerousCommandsandsafeCommandsagainst that helper. This means:
- These tests will still pass even if the real validator inside
bin/mcp-tools/index.jsdiverges.- One of the “safe” examples (
generate rp-connect-docs) no longer matches a real command name (rpcn-connector-docs), but the discrepancy is invisible because only the local helper is called.You already test one bad case indirectly via:
executeTool('run_doc_tools_command', { command: 'test; malicious' });For stronger protection:
- Drop or greatly simplify the local
validateDocToolsCommand()reimplementation.- Drive all validation tests through
executeTool('run_doc_tools_command', { command })so they exercise the real code path and will fail if that logic changes.- While you’re there, update the example command string to
generate rpcn-connector-docsto reflect the current CLI.This keeps the contract tests coupled to actual behavior instead of a duplicated spec.
cli-utils/setup-mcp.js (1)
211-350: Clarify howtargetand multiple config locations are actually used in setupMCPRight now
setupMCPacceptstarget(auto | code | desktop) and you’ve implemented helpers for multiple config files (getConfigPaths/findConfigFiles), but:
targetis currently ignored in the setup flow.setupMCPonly inspects~/.claude.jsonfor existing configuration.findConfigFiles()is used only inshowStatus()for Claude Desktop; it never influences howsetupMCPdecides where to add/update the server.If Claude’s CLI truly abstracts away the distinction (i.e.,
claude mcp add --scope userconsistently targets the right config in all cases), ignoringtargetis fine—but then the flag and some of the helper logic are misleading.I’d recommend one of:
- Short‑term: Document that
--targetis currently informational only, or drop it from the CLI until there’s a real behavioral difference.- Longer‑term: Wire
targetintosetupMCPand reusegetConfigPaths/findConfigFilesso you actually respect “code vs desktop” and handle both config formats consistently.This will keep the CLI behavior aligned with the help text and reduce future confusion when debugging MCP setup.
bin/doc-tools.js (3)
784-834:generatePropertyComparisonReportis solid; be awarefindRepoRoot()can hard‑exit on failureThe comparison helper is well‑structured and defensive (checks file existence, cleans up temp dirs, logs failures without throwing). One thing to keep in mind is that
findRepoRoot()itself will log andprocess.exit(1)if it can’t find a.gitorpackage.jsonup the tree.Given this function is only called from the property‑docs command—which already assumes it’s run inside a repo—that’s probably acceptable. Just be aware that moving
generatePropertyComparisonReportinto a more generic library context would bring that hard exit with it; in that case, you’d wantfindRepoRootto throw or returnnullinstead of exiting.
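For reference, a library-friendly variant might look like this (hedged sketch of a non-exiting lookup, not the current findRepoRoot):

```js
const fs = require('fs');
const path = require('path');

// Hypothetical non-exiting variant: walk up until .git or package.json is
// found, and return null instead of calling process.exit(1).
function tryFindRepoRoot(startDir = process.cwd()) {
  let dir = path.resolve(startDir);
  while (true) {
    if (fs.existsSync(path.join(dir, '.git')) ||
        fs.existsSync(path.join(dir, 'package.json'))) {
      return dir;
    }
    const parent = path.dirname(dir);
    if (parent === dir) return null; // reached the filesystem root
    dir = parent;
  }
}
```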
850-912: Avoid building adiffshell command string; pass arguments directly tospawnSync
diffDirscurrently constructs a shell command string and runs it withshell: true:const cmd = `diff -ru "${oldTempDir}" "${newTempDir}" > "${patch}" || true`; const res = spawnSync(cmd, { stdio: 'inherit', shell: true });Because
oldTempDir/newTempDir/patchcan in principle contain spaces or shell‑significant characters, this is brittle and (in the explicit‑dir case) could be abused if those paths ever come from external input.A safer pattern:
- Invoke
diffdirectly with an args array.- Capture its stdout and write
patchviafs.writeFileSync, instead of using shell redirection.For example:
- const patch = path.join(diffDir, 'changes.patch'); - const cmd = `diff -ru "${oldTempDir}" "${newTempDir}" > "${patch}" || true`; - const res = spawnSync(cmd, { stdio: 'inherit', shell: true }); + const patch = path.join(diffDir, 'changes.patch'); + const res = spawnSync('diff', ['-ru', oldTempDir, newTempDir], { + encoding: 'utf8', + stdio: ['ignore', 'pipe', 'inherit'], + }); + if (res.error) { + console.error(`❌ diff failed: ${res.error.message}`); + process.exit(1); + } + fs.writeFileSync(patch, res.stdout ?? '', 'utf8');The existing cleanup logic can remain unchanged. This keeps behavior the same while removing shell parsing from the equation.
2314-2366: Cloud‑regions command behavior matches the docs; small UX improvement opportunity on format validationThe
cloud-regionssubcommand correctly:
- Enforces presence of a GitHub token via the three env var names.
- Resolves the template path against
findRepoRoot()and verifies it exists.- Always writes output relative to repo root when not in
--dry-runmode.One minor UX improvement to consider: you currently lowercase
options.formatand pass it straight through, so unsupported values silently flow intogenerateCloudRegions. You could pre‑validate here:const fmt = (options.format || 'md').toLowerCase(); if (!['md', 'adoc'].includes(fmt)) { console.error(`❌ Unsupported format: ${fmt}. Use "md" or "adoc".`); process.exit(1); }Not required, but it would give faster feedback when someone mistypes
--format.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Jira integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (43)
- .github/workflows/test-mcp.yml (1 hunks)
- .gitignore (1 hunks)
- CLI_REFERENCE.adoc (1 hunks)
- CONTRIBUTING.adoc (1 hunks)
- README.adoc (1 hunks)
- __tests__/mcp/README.adoc (1 hunks)
- __tests__/mcp/cli-contract.test.js (1 hunks)
- __tests__/mcp/doc-tools-mcp.test.js (1 hunks)
- __tests__/mcp/integration.test.js (1 hunks)
- bin/doc-tools-mcp.js (1 hunks)
- bin/doc-tools.js (14 hunks)
- bin/mcp-tools/antora.js (1 hunks)
- bin/mcp-tools/cloud-regions.js (1 hunks)
- bin/mcp-tools/crd-docs.js (1 hunks)
- bin/mcp-tools/helm-docs.js (1 hunks)
- bin/mcp-tools/index.js (1 hunks)
- bin/mcp-tools/job-queue.js (1 hunks)
- bin/mcp-tools/metrics-docs.js (1 hunks)
- bin/mcp-tools/openapi.js (1 hunks)
- bin/mcp-tools/property-docs.js (1 hunks)
- bin/mcp-tools/review.js (1 hunks)
- bin/mcp-tools/rpcn-docs.js (1 hunks)
- bin/mcp-tools/rpk-docs.js (1 hunks)
- bin/mcp-tools/utils.js (1 hunks)
- bin/mcp-tools/versions.js (1 hunks)
- cli-utils/setup-mcp.js (1 hunks)
- extensions/DEVELOPMENT.adoc (1 hunks)
- extensions/README.adoc (1 hunks)
- extensions/REFERENCE.adoc (1 hunks)
- extensions/USER_GUIDE.adoc (1 hunks)
- macros/DEVELOPMENT.adoc (1 hunks)
- macros/README.adoc (1 hunks)
- macros/REFERENCE.adoc (1 hunks)
- macros/USER_GUIDE.adoc (1 hunks)
- mcp/CLI_INTERFACE.adoc (1 hunks)
- mcp/DEVELOPMENT.adoc (1 hunks)
- mcp/README.adoc (1 hunks)
- mcp/USER_GUIDE.adoc (1 hunks)
- mcp/prompts/README.adoc (1 hunks)
- mcp/prompts/property-docs-guide.md (1 hunks)
- mcp/prompts/rpcn-connector-docs-guide.md (1 hunks)
- package.json (5 hunks)
- tools/generate-cli-docs.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (14)
bin/mcp-tools/cloud-regions.js (3)
bin/mcp-tools/crd-docs.js (7)
require(5-5)require(6-6)require(7-7)repoRoot(20-20)flags(41-41)cmd(60-60)output(62-68)bin/mcp-tools/index.js (12)
require(8-8)require(11-11)require(14-14)require(17-17)require(18-18)require(19-19)require(20-20)require(21-21)require(22-22)require(23-23)repoRoot(39-39)output(97-99)bin/mcp-tools/utils.js (2)
MAX_EXEC_BUFFER_SIZE(11-11)DEFAULT_COMMAND_TIMEOUT(12-12)
tools/generate-cli-docs.js (6)
bin/mcp-tools/crd-docs.js (4)
require(5-5)require(6-6)require(7-7)repoRoot(20-20)bin/mcp-tools/index.js (11)
require(8-8)require(11-11)require(14-14)require(17-17)require(18-18)require(19-19)require(20-20)require(21-21)require(22-22)require(23-23)repoRoot(39-39)bin/mcp-tools/metrics-docs.js (1)
repoRoot(16-16)bin/mcp-tools/openapi.js (1)
repoRoot(23-23)bin/mcp-tools/property-docs.js (1)
repoRoot(19-19)bin/mcp-tools/rpcn-docs.js (1)
repoRoot(15-15)
bin/mcp-tools/rpcn-docs.js (4)
bin/mcp-tools/openapi.js (3)
repoRoot(23-23)structure(24-24)output(83-89)bin/mcp-tools/rpk-docs.js (3)
repoRoot(16-16)structure(17-17)output(44-50)bin/mcp-tools/utils.js (2)
MAX_EXEC_BUFFER_SIZE(11-11)DEFAULT_COMMAND_TIMEOUT(12-12)__tests__/mcp/cli-contract.test.js (1)
output(17-21)
__tests__/mcp/doc-tools-mcp.test.js (3)
bin/mcp-tools/antora.js (4)
fs(5-5)path(6-6)yaml(7-7)repoInfo(18-18)bin/mcp-tools/utils.js (6)
fs(5-5)path(6-6)MAX_RECURSION_DEPTH(10-10)MAX_EXEC_BUFFER_SIZE(11-11)DEFAULT_SKIP_DIRS(13-13)PLAYBOOK_NAMES(14-18)bin/mcp-tools/rpk-docs.js (1)
cmd(42-42)
bin/mcp-tools/index.js (1)
__tests__/mcp/doc-tools-mcp.test.js (2)
validateDocToolsCommand(35-49)dangerousChars(41-41)
bin/mcp-tools/antora.js (1)
bin/mcp-tools/utils.js (3)
DEFAULT_SKIP_DIRS(13-13)PLAYBOOK_NAMES(14-18)MAX_RECURSION_DEPTH(10-10)
bin/mcp-tools/versions.js (1)
bin/mcp-tools/index.js (10)
require(8-8)require(11-11)require(14-14)require(17-17)require(18-18)require(19-19)require(20-20)require(21-21)require(22-22)require(23-23)
bin/mcp-tools/utils.js (3)
bin/mcp-tools/crd-docs.js (3)
require(5-5)require(6-6)require(7-7)bin/mcp-tools/helm-docs.js (3)
require(5-5)require(6-6)require(7-7)bin/mcp-tools/index.js (10)
require(8-8)require(11-11)require(14-14)require(17-17)require(18-18)require(19-19)require(20-20)require(21-21)require(22-22)require(23-23)
bin/mcp-tools/job-queue.js (2)
bin/mcp-tools/property-docs.js (1)
jobId(52-54)bin/mcp-tools/utils.js (1)
DEFAULT_COMMAND_TIMEOUT(12-12)
cli-utils/setup-mcp.js (1)
bin/doc-tools-mcp.js (6)
require(17-17)require(18-18)require(19-24)require(25-25)require(26-26)result(515-515)
bin/mcp-tools/property-docs.js (4)
bin/mcp-tools/job-queue.js (1)
jobId(42-42)bin/mcp-tools/metrics-docs.js (3)
repoRoot(16-16)structure(17-17)output(72-72)bin/mcp-tools/openapi.js (3)
repoRoot(23-23)structure(24-24)output(83-89)bin/mcp-tools/utils.js (2)
MAX_EXEC_BUFFER_SIZE(11-11)DEFAULT_COMMAND_TIMEOUT(12-12)
bin/mcp-tools/review.js (1)
bin/mcp-tools/utils.js (1)
path(6-6)
bin/mcp-tools/openapi.js (7)
bin/mcp-tools/crd-docs.js (8)
require(5-5)require(6-6)require(7-7)repoRoot(20-20)structure(21-21)flags(41-41)cmd(60-60)output(62-68)bin/mcp-tools/helm-docs.js (8)
require(5-5)require(6-6)require(7-7)repoRoot(20-20)structure(21-21)flags(33-33)cmd(56-56)output(58-64)bin/mcp-tools/index.js (12)
require(8-8)require(11-11)require(14-14)require(17-17)require(18-18)require(19-19)require(20-20)require(21-21)require(22-22)require(23-23)repoRoot(39-39)output(97-99)bin/mcp-tools/property-docs.js (4)
repoRoot(19-19)structure(20-20)output(67-73)filesGenerated(77-77)bin/mcp-tools/rpcn-docs.js (3)
repoRoot(15-15)structure(16-16)output(71-71)bin/mcp-tools/rpk-docs.js (4)
repoRoot(16-16)structure(17-17)cmd(42-42)output(44-50)bin/mcp-tools/utils.js (2)
MAX_EXEC_BUFFER_SIZE(11-11)DEFAULT_COMMAND_TIMEOUT(12-12)
bin/doc-tools.js (1)
cli-utils/setup-mcp.js (2)
options(148-153)options(212-212)
🪛 Biome (2.1.2)
cli-utils/setup-mcp.js
[error] 64-64: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🪛 LanguageTool
mcp/prompts/rpcn-connector-docs-guide.md
[style] ~86-~86: The words ‘Explain’ and ‘explanation’ are quite similar. Consider replacing ‘Explain’ with a different word.
Context: ...cify a different branch if needed 6. Explain your changes: Provide a clear explana...
(VERB_NOUN_SENT_LEVEL_REP)
🪛 markdownlint-cli2 (0.18.1)
mcp/prompts/property-docs-guide.md
121-121: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
128-128: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
138-138: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
145-145: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
- GitHub Check: test-property-extractor (3.11)
- GitHub Check: test-all (18)
- GitHub Check: test-property-extractor (3.9)
- GitHub Check: test-asciidoc-extensions (20)
- GitHub Check: test-all (20)
- GitHub Check: test-asciidoc-extensions (18)
- GitHub Check: test-extensions (18)
- GitHub Check: test-extensions (20)
- GitHub Check: test-macros (20)
- GitHub Check: test-macros (18)
- GitHub Check: Redirect rules - docs-extensions-and-macros
- GitHub Check: Header rules - docs-extensions-and-macros
- GitHub Check: Pages changed - docs-extensions-and-macros
- GitHub Check: Check MCP-CLI Compatibility
- GitHub Check: Verify CLI Contract
- GitHub Check: Test MCP Server Integration
🔇 Additional comments (59)
extensions/USER_GUIDE.adoc (6)
1-91: Well-structured getting started and configuration guidance.The installation, basic configuration, and pattern sections are clear, well-organized, and provide good examples. The IMPORTANT note at line 29 effectively disambiguates a common pitfall.
196-259: Troubleshooting section is comprehensive and helpful.The troubleshooting section provides practical solutions for common issues with clear, actionable steps. Structure and content quality are good. Verification of referenced documentation files is covered in previous comment.
260-332: Best practices section covers essential operational guidance.The development vs production distinction, security considerations, and testing guidance are valuable and well-presented. The practical examples help users understand implementation without over-prescribing solutions.
334-338: External documentation links are current.The Antora documentation links for playbooks and extensions are valid and current. No changes needed here.
334-338: No issues found — all documentation links are valid.The external Antora documentation link is active and accessible, and the internal file references (REFERENCE.adoc and DEVELOPMENT.adoc) both exist in the extensions directory, so the relative links are correct. No changes needed.
93-195: Review comment is incorrect regarding algolia-indexer verification.The algolia-indexer is referenced as an external npm package dependency (
@redpanda-data/docs-extensions-and-macros/extensions/algolia-indexer), not as a local extension file in the codebase. All other referenced extensions are verified to exist locally:
- ✓
version-fetcher/set-latest-version.js- ✓
generate-index-data.js- ✓
produce-redirects.js- ✓
generate-rp-connect-categories.js- ✓
generate-rp-connect-info.js- ✓
unlisted-pages.jsDocumentation file references (lines 235, 258, 337) use correct relative links (
link:REFERENCE.adoc[Reference docs],link:DEVELOPMENT.adoc[Development guide]), and bothextensions/REFERENCE.adocandextensions/DEVELOPMENT.adocexist in the codebase. The USER_GUIDE.adoc content is accurate as-is..gitignore (1)
14-14: Good practice for environment isolation.The
.enventry prevents accidental commits of local environment variables (API keys, credentials). Aligns well with the new MCP tooling and setup automation introduced in this PR.macros/README.adoc (1)
1-105: All referenced documentation files confirmed to exist.The verification confirms that
USER_GUIDE.adoc,REFERENCE.adoc, andDEVELOPMENT.adocall exist in themacros/directory. The README's internal documentation links are valid and properly structured.mcp/DEVELOPMENT.adoc (3)
426-449: All referenced npm test scripts are correctly configured in package.json.Verification confirms that
npm run test:mcp,npm run test:mcp:integration, andnpm run test:mcp:contractare all properly defined in package.json with appropriate test paths. The documentation is accurate.
487-496: No issues found — documentation example is correct.Verification confirms the require path
./bin/mcp-toolscorrectly resolves tobin/mcp-tools/index.js, and theexecuteToolfunction is properly defined (line 38) and exported (line 163) from that module. The example code in the documentation is accurate.
523-525: Update file reference and expand documented security controls to match implementation.The validation logic exists but is referenced incorrectly. The actual implementation is in
bin/mcp-tools/index.js(notcli.js), and it blocks more shell metacharacters than documented. Additionally, path traversal blocking and shell process hardening also exist.All CLI commands go through validation in `bin/mcp-tools/index.js`: * Shell metacharacters blocked: `;`, `|`, `&`, `$`, ``` ` ```, `<`, `>`, `(`, `)`, `{`, `}`, `[`, `]`, `!`, `*`, `?`, `~` * Path traversal sequences blocked: `..`, `~` * Commands must be non-empty strings * Shell process execution disabled (`shell: false`)mcp/README.adoc (1)
42-47: No issues found — test documentation path and link syntax are correct.The verification confirms that the file exists at
__tests__/mcp/README.adocand the AsciiDoc link syntax using escaped underscores (\__tests__) is correct. The relative path navigation and markup escaping follow proper AsciiDoc conventions.extensions/README.adoc (1)
1-123: Now let me verify the actual extension implementations exist:Based on the initial script output, I have enough information to complete verification. Let me now generate the rewritten review comment:
All referenced documentation files and extension paths are valid.
The verification confirms:
- All three documentation files exist: USER_GUIDE.adoc, REFERENCE.adoc, and DEVELOPMENT.adoc
- Extension paths in the README (lines 49, 96, 105, 117) follow the correct Node.js module resolution format for the
@redpanda-data/docs-extensions-and-macrospackage- The extension paths match the documented pattern from the upstream Redpanda extensions repository
- The
.jsfile extension on line 117 is optional but acceptable in Node.js module resolutionThe README provides a well-structured overview of Antora extensions with proper documentation references and valid usage examples.
mcp/USER_GUIDE.adoc (1)
1-1099: Excellent comprehensive documentation!This user guide is thorough, well-organized, and genuinely user-friendly. The documentation successfully achieves its stated goal of providing a "writer-friendly" interface with natural language examples, clear response structures, comprehensive troubleshooting, and proper security coverage.
Key strengths:
- Clear progression from quickstart to advanced usage
- Detailed JSON response structures for all tools
- Realistic conversational examples
- Comprehensive security and troubleshooting sections
- Good coverage of multi-repo and background job workflows
mcp/prompts/rpcn-connector-docs-guide.md (1)
1-121: Clear and well-structured LLM prompt guide.This prompt guide effectively explains the Redpanda Connect documentation workflow, including the
$refsystem for deduplication, quality requirements, and expected outputs. The structure is logical, progressing from concepts to workflow to summary.extensions/REFERENCE.adoc (1)
1-768: Comprehensive and well-organized extension reference.This reference documentation provides thorough coverage of all Antora extensions with a consistent structure (environment variables, configuration options, registration examples) that makes it easy to navigate and use. The IMPORTANT callout about the correct registration key helps prevent common configuration errors.
extensions/DEVELOPMENT.adoc (1)
1-464: Excellent development guide for extension authors.This guide provides comprehensive coverage of Antora extension development, from basic architecture to advanced patterns. The progression from simple examples to complex scenarios is well-structured, and the inclusion of best practices, testing patterns, and debugging tips makes this a valuable resource for developers.
Key strengths:
- Clear explanation of extension lifecycle
- Practical code examples throughout
- Performance optimization guidance
- Testing and documentation patterns
- Debugging tips
macros/REFERENCE.adoc (1)
1-222: Clear and well-structured macro reference.This reference provides comprehensive documentation for all AsciiDoc macros with consistent formatting, clear usage examples, and proper registration instructions. The IMPORTANT callout about the correct configuration key helps prevent common mistakes.
package.json (4)
3-3: Appropriate version bump for new features.The minor version bump from 4.12.5 to 4.13.0 correctly follows semantic versioning for the addition of new MCP server functionality.
16-17: New MCP server CLI entry point added.The new
doc-tools-mcpbin entry properly exposes the MCP server functionality as a CLI command, following the established pattern of the existingdoc-toolsentry.
26-31: Well-organized test and documentation scripts.The new npm scripts provide targeted test execution for MCP functionality and CLI documentation generation, following consistent naming conventions with the existing scripts.
92-92: Appropriate dependency additions for MCP functionality.The new dependencies are necessary and well-chosen:
@modelcontextprotocol/sdk(^1.0.4) - Core MCP protocol implementationglob(^11.0.0) - Standard file pattern matching libraryBoth use caret ranges for flexible minor/patch updates.
Also applies to: 102-102
README.adoc (1)
9-75: README restructure and navigation look coherentThe new Quickstart and “Documentation / What’s included” layout is clear and aligns well with the extensions/macros/MCP/CLI split; links and example Antora config look consistent with the package name and directory structure.
bin/mcp-tools/review.js (1)
233-235: Doc comment says “markdown report” but implementation generates AsciiDoc
generate_reportis documented as “Whether to generate a markdown report file”, and the filename comment implies markdown, but the implementation callsgenerateReviewReportwhich produces.adoccontent. To avoid confusion for callers, update the doc comment (and any CLI/docs that refer to this) to say “AsciiDoc report”.
[suggest_minor_issue]CLI_REFERENCE.adoc (1)
1-12: CLI reference looks consistent and well-scoped for the expanded surfaceThe auto-generated CLI reference cleanly captures the new commands and generate/* subcommands, with usage, options, examples, and requirements. Given it’s regenerated via
npm run generate:cli-docs, this structure should work well as the single source of truth for the CLI surface.__tests__/mcp/README.adoc (1)
1-147: MCP test README is clear and actionableThis does a good job explaining integration vs contract tests, commands to run each flavour, when to run them, and how to react to failures; the “Adding tests” section is especially helpful for future MCP tools. Looks solid as written.
__tests__/mcp/cli-contract.test.js (5)
15-32: LGTM!The helper function properly handles both success and error cases, capturing all relevant information (stdout, stderr, exit code). The 10-second timeout is appropriate for CLI command execution.
34-51: LGTM!The command structure tests effectively verify that the expected top-level commands exist and respond to
--help.
53-120: LGTM!Excellent data-driven testing approach that comprehensively covers all generate subcommands and their flag requirements. The structure makes it easy to maintain and extend.
122-146: LGTM!The output format tests properly validate the expected format while gracefully handling network errors. Good use of regex patterns to match version strings flexibly.
148-179: LGTM!The error handling and version compatibility tests thoroughly verify that the CLI properly reports errors with non-zero exit codes and outputs a valid semantic version.
mcp/CLI_INTERFACE.adoc (1)
1-383: LGTM!This is comprehensive and well-structured documentation that clearly defines the CLI interface contract. The examples, error scenarios, and stability guarantees provide excellent guidance for maintaining backward compatibility.
.github/workflows/test-mcp.yml (2)
49-72: LGTM!The CLI contract verification job properly validates both the test suite and the actual command existence. The grep verification provides a good sanity check.
91-128: LGTM!The compatibility check effectively verifies that all MCP tools are accessible and return results. Calling tools with empty arguments is appropriate for this smoke test to ensure the tool registry and execution pathway work correctly.
bin/mcp-tools/crd-docs.js (2)
19-37: LGTM!The parameter validation and error handling are well-structured with helpful error messages and suggestions.
70-87: LGTM!The result structure and error handling provide comprehensive information for both success and failure cases.
__tests__/mcp/integration.test.js (6)
12-50: LGTM!The CLI availability tests comprehensively verify that all required doc-tools commands are present.
52-88: LGTM!The tool execution tests appropriately handle both success and network error scenarios with reasonable timeouts.
90-130: LGTM!Parameter validation tests ensure that required arguments are properly enforced across all generation tools.
132-150: LGTM!Review tool tests properly validate parameter requirements and enum constraints.
152-170: LGTM!Error handling tests appropriately verify security protections against command injection and proper error responses for invalid tool names.
172-215: LGTM!Output parsing tests use appropriate mock data, and filesystem tests verify expected repository structure without making assumptions about generated file presence.
bin/mcp-tools/openapi.js (2)
22-48: LGTM!The validation and version normalization logic is correct, with appropriate special handling for the 'dev' tag.
91-121: LGTM!The result construction correctly infers generated files based on the surface parameter, with appropriate default paths.
bin/mcp-tools/rpcn-docs.js (2)
26-58: LGTM!Excellent use of
spawnSyncwith an argument array to prevent command injection. The argument building is clean and safe.
60-92: LGTM!Comprehensive error handling that distinguishes between spawn failures and command execution failures, with appropriate output parsing.
bin/mcp-tools/property-docs.js (3)
18-48: LGTM!Proper parameter validation and safe command argument construction using an array.
50-63: LGTM!The background job execution pattern is well-designed, allowing async processing with job status tracking.
75-105: LGTM!The output parsing and result construction correctly handle both generated files and property counts. (Note: this assumes the command execution issue in lines 65-73 is addressed.)
bin/mcp-tools/metrics-docs.js (2)
35-59: LGTM!Excellent security implementation with version format validation before execution and safe command execution via
spawnSyncwith an argument array.
60-96: LGTM!Comprehensive error handling and appropriate output parsing for metrics count.
bin/mcp-tools/index.js (3)
1-31: LGTM!Well-organized imports with clear categorization of utilities, documentation tools, and job queue functionality.
137-158: LGTM!The validation function properly blocks dangerous shell metacharacters and path traversal sequences with comprehensive checks.
160-186: LGTM!Comprehensive exports with clear organization and documentation, providing both high-level dispatcher and individual tool access.
bin/mcp-tools/antora.js (3)
16-31: LGTM!Flexible parameter handling and appropriate error handling for playbook parsing, with warnings logged to console.
33-92: LGTM!Robust recursive search with proper safeguards against infinite recursion (depth limit, visited tracking, symlink resolution) and appropriate error handling.
94-120: LGTM!The
hasDocToolsdetection is well-implemented with a two-tiered check (file existence first, then command execution) and safe use ofexecSyncwith a fixed command string, timeout, and error handling.bin/doc-tools.js (1)
618-693:setup-mcpwiring looks good; consider propagating non‑zero exits fromsetupMCPmore flexiblyThe
setup-mcpcommand correctly:
- Loads
setupMCP,showStatus, andprintNextStepslazily.- Short‑circuits when
--statusis provided.- Exits with
0on success and1on failure.If you ever want to support richer error codes (e.g., distinguish “not in local repo” from generic failure), you could pass through an optional
codefield fromsetupMCPand map that instead of hard‑coding0/1. For now, the current behavior is totally acceptable for a user‑facing CLI.bin/doc-tools-mcp.js (1)
429-461: The review comment is incorrect—the current implementation is correct per the MCP specificationThe codebase correctly implements two distinct MCP response types with different content structures:
GetPromptresponses usePromptMessageobjects, wherecontentis a singleContentBlockobject (as shown in your implementation at line 437)CallToolresponses useCallToolResultobjects, wherecontentis an array ofContentBlockobjects (as shown at line 472)This structural difference is intentional in the MCP specification. Prompt messages carry one content block per message, while tool results use an array to allow bundling multiple content pieces. Wrapping the GetPrompt
contentin an array as suggested would violate the protocol schema and cause compatibility issues with MCP clients.Your implementation is correct and requires no changes.
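For illustration, the two response shapes side by side (a hedged sketch based on the description above, not copied from the SDK types):

```js
// GetPrompt result: each message carries ONE content block (an object).
const getPromptResult = {
  messages: [
    { role: 'user', content: { type: 'text', text: 'Generate property docs…' } }
  ]
};

// CallTool result: content is an ARRAY of content blocks.
const callToolResult = {
  content: [
    { type: 'text', text: '{"success": true}' }
  ]
};
```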
Likely an incorrect or invalid review comment.
This pull request introduces a local MCP server that gives LLMs (Claude Code) the ability to execute our automations. It also adds automated CI workflows, detailed contributing and testing guides, and contract tests to ensure CLI compatibility and reliability. These changes significantly improve code quality, developer onboarding, and prevent breaking changes to the MCP server.
Testing
To test this locally:
From the mcp-tool branch, run:

Run the built-in setup command (works on macOS, Linux, and Windows):
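The exact commands aren't captured here; based on the setup-mcp subcommand this PR adds to doc-tools, the invocation is presumably along these lines (hypothetical example):

```bash
# Hypothetical: run the MCP setup added in this PR; --status prints the
# current configuration instead of modifying it.
npx doc-tools setup-mcp
npx doc-tools setup-mcp --status
```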
This automatically:
Restart Claude Code to load the MCP server.
Open Claude Code:
Type in
/mcp to check which MCP servers Claude has access to. Verify that the

repanda-docs-tool-assistantis available and connected:Then in Claude Code, start chatting:
You: "Show me the Antora structure of this repo"
Claude: uses get_antora_structure MCP tool
You: "Generate property docs for v25.3.1"
Claude: uses run_doc_tools_command MCP tool
shows generated documentation