
Strengthen auto-doc coverage in both eval suites#27

Merged
jrenaldi79 merged 1 commit into main from claude/robust-autodoc-eval on Mar 24, 2026
Conversation

@jrenaldi79 (Owner)

Summary

Setup grader (from prior commit in this branch):

  • Runs generate-docs.js --check to verify that the AUTO markers match the actual filesystem (catches hand-written content that has drifted from script-generated output)
  • Verifies setup-report.md has required sections
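
The marker check can be sketched as follows. Marker syntax (`<!-- AUTO:tree -->`), function names, and the sample content are assumptions for illustration, not the repo's actual implementation:

```javascript
// Hypothetical sketch of the idea behind `generate-docs.js --check`:
// regenerate the content that belongs between the AUTO markers and
// compare it against what is committed.

// Extract the text between <!-- AUTO:name --> ... <!-- /AUTO:name -->.
function extractAutoBlock(doc, name) {
  const open = `<!-- AUTO:${name} -->`;
  const close = `<!-- /AUTO:${name} -->`;
  const start = doc.indexOf(open);
  const end = doc.indexOf(close);
  if (start === -1 || end === -1) return null; // markers missing
  return doc.slice(start + open.length, end).trim();
}

// --check passes only when the committed block matches regeneration.
function checkAutoBlock(doc, name, regenerate) {
  const committed = extractAutoBlock(doc, name);
  return committed !== null && committed === regenerate().trim();
}

// Example: a hand-edited tree no longer matches the "filesystem".
const doc = '<!-- AUTO:tree -->\nsrc/\n  extra.js\n<!-- /AUTO:tree -->';
const regen = () => 'src/\n  index.js';
console.log(checkAutoBlock(doc, 'tree', regen)); // false -> --check fails
```

Comparing against a fresh regeneration (rather than just checking that markers exist) is what lets the grader catch hand-written content masquerading as script output.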

Readiness fixtures:

  • Level-5 fixture upgraded with a functional auto-doc pipeline: generate-docs.js, generate-docs-helpers.js, AUTO:tree/modules markers generated from the real filesystem, and docs/index.md
  • Level-3 eval must now recommend "auto-gen" (it's missing auto-doc)
  • Level-5 eval must NOT recommend "add auto-generated sections" (it has auto-doc)
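
In config form, the two readiness assertions above might look something like this in tests/evals/eval-config.json; the field names here are assumptions for illustration, not the file's actual schema:

```json
{
  "readiness": {
    "level-3": {
      "mustRecommend": ["auto-gen"]
    },
    "level-5": {
      "mustNotRecommend": ["add auto-generated sections"]
    }
  }
}
```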

Edge cases covered:

  • Markers present but content doesn't match filesystem -> --check fails
  • generate-docs.js exists but helpers missing -> require crash
  • Pre-commit hook missing generate-docs reference -> detected
  • docs/ exists without index.md -> detected

Test plan

  • npx jest --config '{}' tests/scripts/ — 112 tests pass
  • node tests/evals/fixtures/level-5-autonomous/scripts/generate-docs.js --check passes
  • bash tests/evals/run-evals.sh --dry-run — readiness fixtures resolve

https://claude.ai/code/session_01Hbxy31TkbujzukGFSxLcPw

@jrenaldi79 jrenaldi79 merged commit 130237b into main Mar 24, 2026
1 of 4 checks passed

coderabbitai bot commented Mar 24, 2026

Warning

Rate limit exceeded

@jrenaldi79 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 28 seconds before requesting another review.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: dfb9bf92-448b-4aee-98be-63eeda969c04

📥 Commits

Reviewing files that changed from the base of the PR and between e6d4de8 and dc4b73f.

📒 Files selected for processing (5)
  • tests/evals/eval-config.json
  • tests/evals/fixtures/level-5-autonomous/CLAUDE.md
  • tests/evals/fixtures/level-5-autonomous/docs/index.md
  • tests/evals/fixtures/level-5-autonomous/scripts/generate-docs-helpers.js
  • tests/evals/fixtures/level-5-autonomous/scripts/generate-docs.js

@jrenaldi79 jrenaldi79 deleted the claude/robust-autodoc-eval branch March 25, 2026 14:02
