Conversation
|
No actionable comments were generated in the recent review. 🎉

ℹ️ Review info — ⚙️ Run configuration: Path: .coderabbit.yaml · Review profile: CHILL · Plan: Pro

📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds timeout handling to synchronous shell calls in the onboarding module.

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🧹 Nitpick comments (2)
scripts/check-markdown-links.sh (1)
20-21: Inconsistency between `SKIP_DIRS` and `find` exclusions.

The `SKIP_DIRS` pattern on line 20 includes `__pycache__` and `nemoclaw/node_modules`, but the `find` command on lines 117-122 doesn't exclude these directories. This means, when running without arguments:

- `find` may return files in `__pycache__`, but they'll be skipped later by the `SKIP_DIRS` check
- This works but is inconsistent and slightly inefficient
Consider aligning the exclusions for clarity:
♻️ Align find exclusions with SKIP_DIRS
```diff
 done < <(find "$REPO_ROOT" -name '*.md' -not -path '*/node_modules/*' \
   -not -path '*/.venv/*' \
   -not -path '*/dist/*' \
   -not -path '*/_build/*' \
   -not -path '*/_deps/*' \
+  -not -path '*/__pycache__/*' \
   -print0)
```

Also applies to: 117-122
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/check-markdown-links.sh` around lines 20 - 21, The SKIP_DIRS variable includes patterns like __pycache__ and nemoclaw/node_modules that are not currently mirrored in the find command's exclusion (-path ... -prune) logic; update the find invocation to prune the same directories listed in SKIP_DIRS (or build the find exclusion list from SKIP_DIRS) so find never descends into __pycache__ and nemoclaw/node_modules (mirror SKIP_DIRS into the -path/-prune clauses used in the find command to keep exclusions consistent and avoid extra work).

test/onboard.test.js (1)
798-817: Make the static scanner less brittle to harmless refactors.

The `i + 15` scan window and the hardcoded `>= 7` count can fail when formatting or implementation evolves, even if timeout safety remains correct. Consider deriving target calls dynamically and removing the fixed-count assumption.

♻️ Suggested refactor
```diff
-      for (let j = i + 1; j < Math.min(i + 15, lines.length); j++) {
+      for (let j = i + 1; j < lines.length; j++) {
         block += "\n" + lines[j];
-        if (/^\s*\}\);/.test(lines[j])) break;
+        if (/^\s*\);/.test(lines[j]) || /^\s*\}\);/.test(lines[j])) break;
       }
       ...
-  it("has spawnSync calls to test", () => {
-    expect(spawnBlocks.length).toBeGreaterThanOrEqual(7);
+  it("has non-sleep spawnSync calls to test", () => {
+    const actionable = spawnBlocks.filter(({ text }) => !/spawnSync\("sleep"/.test(text));
+    expect(actionable.length).toBeGreaterThan(0);
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/onboard.test.js` around lines 798 - 817, The test's static scanner is brittle because collectSpawnBlocks uses a fixed 15-line window and the assertion expects >= 7 calls; update collectSpawnBlocks to scan forward until it detects the block terminator (e.g., a line matching /^\s*\}\);/ or end of file) instead of stopping at i + 15, and change the assertion that uses spawnBlocks to derive the expected count dynamically (for example compute totalSpawnCalls = lines.filter(l => /spawnSync\(/.test(l)).length and assert spawnBlocks.length === totalSpawnCalls or >= totalSpawnCalls if partial grouping is acceptable). Reference: the collectSpawnBlocks function and spawnBlocks usage in the test.
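A terminator-based scan along the lines suggested above might look like the following sketch (the function name and block shape follow the prompt; the actual source layout of onboard.js is assumed, not known here):

```javascript
// Collect multi-line spawnSync(...) call blocks by scanning forward until a
// closing line, rather than stopping at a fixed 15-line window. A block ends
// at a line that is just ");" or "});" (possibly indented).
function collectSpawnBlocks(lines) {
  const blocks = [];
  for (let i = 0; i < lines.length; i++) {
    if (!/spawnSync\(/.test(lines[i])) continue;
    let block = lines[i];
    // Single-line calls already contain their closing ");".
    if (!/\);\s*$/.test(lines[i])) {
      for (let j = i + 1; j < lines.length; j++) {
        block += "\n" + lines[j];
        if (/^\s*\}?\);/.test(lines[j])) break;
      }
    }
    blocks.push({ start: i + 1, text: block });
  }
  return blocks;
}
```

Because every block runs to its own terminator, a later count assertion can be derived from the collected blocks themselves instead of a hardcoded `>= 7`.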
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@.github/workflows/pr.yaml`:
- Around line 199-203: The GH Actions step is expanding $MD_FILES unquoted which
breaks filenames with spaces; update the run invocation that calls
scripts/check-markdown-links.sh to pass "$MD_FILES" quoted or pipe $MD_FILES
through a newline-safe mechanism (e.g., echo "$MD_FILES" | xargs -d '\n' -r
./scripts/check-markdown-links.sh) so the check-markdown-links.sh script
receives each filename intact; ensure references to MD_FILES and
scripts/check-markdown-links.sh are updated accordingly.
In `@bin/lib/onboard.js`:
- Around line 584-587: Replace the brittle timeout checks that compare
result.signal === "SIGTERM" with a small helper function (e.g.,
isTimedOut(result)) that returns true when result.error?.code === "ETIMEDOUT" or
as a fallback when result.signal === "SIGTERM" && result.status == null; add
this helper near other utility functions in onboard.js and then change all five
locations (the checks currently at the blocks around result handling where
failures.push is called) to use isTimedOut(result) before setting message:
"timed out" so only true timeouts are reported.
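Under the assumptions in the comment above, the helper might look like this sketch (the name `isTimedOut` is the reviewer's suggestion; the result fields follow Node's documented `spawnSync` return value, and the call-site shape is hypothetical):

```javascript
// Sketch of the suggested helper. A spawnSync result reports a timeout
// authoritatively via result.error.code === "ETIMEDOUT"; the SIGTERM +
// null-status check is kept only as a fallback heuristic.
function isTimedOut(result) {
  if (result.error && result.error.code === "ETIMEDOUT") return true;
  return result.signal === "SIGTERM" && result.status == null;
}

// Hypothetical call-site shape, mirroring the description above:
// if (isTimedOut(result)) failures.push({ ok: false, message: "timed out" });
```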
In `@scripts/check-markdown-links.sh`:
- Around line 71-75: The current markdown link extraction regex (link_re) in the
while loop that processes remaining will truncate link targets containing
parentheses (e.g., docs/file_(1).md); update the link_re used in that loop to a
more robust pattern that supports parentheses inside the target (a
balanced-parentheses-aware regex) or, if you want a minimal change, add a clear
code comment next to link_re and the loop documenting this limitation and
suggesting a future fix; locate link_re and the while [[ "$remaining" =~
$link_re ]] loop to apply the change (these symbols identify where to update).
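To illustrate the truncation the comment describes (sketched in JavaScript for testability; the script's actual `link_re` is a POSIX ERE in bash, so the exact syntax there differs):

```javascript
// A naive markdown-link pattern stops at the first ")" and truncates
// targets like "docs/file_(1).md". Allowing one level of nested "(...)"
// inside the target handles the common case.
const naive = /\[[^\]]*\]\(([^)]*)\)/;
const oneLevel = /\[[^\]]*\]\(((?:[^()]|\([^()]*\))*)\)/;

const line = "See [the file](docs/file_(1).md) for details.";
naive.exec(line)[1];    // truncated: "docs/file_(1"
oneLevel.exec(line)[1]; // full target: "docs/file_(1).md"
```

A fully balanced-parentheses-aware match is not expressible in a plain regular expression, so one nesting level (as above) plus a documenting comment is a pragmatic middle ground.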
ℹ️ Review info — ⚙️ Run configuration: Path: .coderabbit.yaml · Review profile: CHILL · Plan: Pro · Run ID: fadd8dea-4085-4ab4-917a-1b23f9b1b2d7

📒 Files selected for processing (4)

- .github/workflows/pr.yaml
- bin/lib/onboard.js
- scripts/check-markdown-links.sh
- test/onboard.test.js
Force-pushed 2317895 to 6228431 (Compare)
♻️ Duplicate comments (1)
bin/lib/onboard.js (1)
640-643: ⚠️ Potential issue | 🟡 Minor

Use a stronger timeout predicate in `isSpawnTimeout`.

Line 642 currently infers timeout from `SIGTERM` + null status only. That can misclassify non-timeout terminations as "timed out".

🔧 Proposed fix

```diff
 /** True when spawnSync killed the child due to its timeout option. */
 function isSpawnTimeout(result) {
-  return result.status == null && result.signal === "SIGTERM";
+  return (
+    result?.error?.code === "ETIMEDOUT" ||
+    (result?.status == null && result?.signal === "SIGTERM")
+  );
 }
```

In current Node.js child_process.spawnSync behavior, is `result.error.code === "ETIMEDOUT"` the authoritative timeout indicator, and can `result.signal === "SIGTERM"` occur for non-timeout terminations?
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 640 - 643, The timeout detection in isSpawnTimeout is too weak: replace the current predicate with one that first checks for result.error && result.error.code === "ETIMEDOUT" (the authoritative Node timeout indicator) and only as a fallback consider the previous heuristic (result.status == null && result.signal === "SIGTERM") if no error code is available; update the function isSpawnTimeout to prefer the error.code check while preserving backwards compatibility by keeping the signal/status check as secondary.
ℹ️ Review info — ⚙️ Run configuration: Path: .coderabbit.yaml · Review profile: CHILL · Plan: Pro · Run ID: 498d2caf-cb8c-464c-9f7f-acb24d59b778

📒 Files selected for processing (2)

- bin/lib/onboard.js
- test/onboard.test.js
✅ Files skipped from review due to trivial changes (1)
- test/onboard.test.js
Force-pushed 6228431 to 37ec4f5 (Compare)
All spawnSync calls in onboard.js that run external commands (curl probes, ollama pull, install-openshell) now include a timeout property. Previously, if a subprocess hung (DNS stall, unresponsive endpoint), the onboarding wizard would freeze indefinitely with no way to recover.

- curl-based probes/fetches: 30s timeout (5 call sites)
- ollama pull: 300s timeout (large model downloads)
- install-openshell: 120s timeout
- Timed-out calls return a graceful error instead of hanging
- Intentional sleep calls left unchanged

Adds a regression test that statically verifies every non-sleep spawnSync call in onboard.js includes a timeout property.

Fixes NVIDIA#1017
Force-pushed 37ec4f5 to 583a90c (Compare)
Summary
Adds
timeoutto allspawnSynccalls inbin/lib/onboard.jsthat runexternal commands. Previously, if a subprocess hung (DNS stall, unresponsive
endpoint, network timeout beyond curl's
--max-time), the onboarding wizardfroze indefinitely with no way to recover other than killing the process.
Related Issue
Fixes #1017
Changes
- curl-based probes/fetches: `timeout: 30_000` (30s), 5 call sites
- `ollama pull`: `timeout: 300_000` (5 min), model downloads can be large
- `install-openshell.sh`: `timeout: 120_000` (2 min)
- Timed-out calls check `result.signal === "SIGTERM"` and return a graceful `{ ok: false, message: "timed out" }` error instead of hanging
- Intentional `sleep` calls left unchanged (blocking by design)
- A regression test statically verifies that every non-sleep `spawnSync` call includes a `timeout` property

Type of Change
Code change for a new feature, bug fix, or refactor.
Testing
- `npm test`: 479 passed, 2 skipped
- `npx prek run --all-files`: all hooks passed
- New `spawnSync timeout safety` test (2 assertions)