My fix branch #1223
1. Fix API Error: undefined is not an object (evaluating '_OpenAI_instances.add')
- build.ts: add external: ['openai'] so the Bun bundler no longer bundles the openai SDK into a chunk, preventing out-of-order lazy initialization of private class fields (WeakSet/WeakMap)
2. Fix API Error: Cannot access 'OpenAI' before initialization
- openai/client.ts, grok/client.ts: change the static import of OpenAI to import type (type-only) plus a dynamic await import('openai') inside the function body, fully sidestepping the TDZ (Temporal Dead Zone) issue introduced by Bun code splitting
- openai/index.ts, grok/index.ts: add await at the corresponding call sites
3. Fix API Error: undefined is not an object (evaluating 'DEFAULT_MODEL_MAP[cleanModel]')
- openai/modelMapping.ts, grok/modelMapping.ts: replace the module-level constants DEFAULT_MODEL_MAP / DEFAULT_FAMILY_MAP with functions getDefaultModelMap() / getDefaultFamilyMap(), so the constants can no longer be undefined due to Bun's lazy module-initialization order
- Covers both the first-login (/login) flow and subsequent re-entry into the UI
4. Reset the cached model strings after login (login.tsx)
- After a successful /login, call resetSettingsCache(), applyConfigEnvironmentVariables(), and resetModelStrings() in sequence, so model mappings take effect immediately after switching providers with no stale cache left behind
5. Add a resetModelStrings() utility (modelStrings.ts)
- Exposes a cache-reset entry point for the login flow to call
6. Improve error logging (openai/index.ts, grok/index.ts)
- catch blocks now also log error.stack, making it easier to locate runtime errors in the build output
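Fixes 2 and 3 follow the same lazy-initialization idea, which can be sketched as below (a minimal sketch with illustrative names, not the exact project code):

```typescript
// Minimal sketch of fixes 2 and 3 (illustrative names, not the project code).
// The client value is created lazily on first call instead of at module load,
// and the model map lives behind a getter instead of a module-level constant,
// so neither can be observed before initialization completes.
type Client = { apiKey: string }

let cachedClient: Client | null = null

async function getClient(): Promise<Client> {
  if (cachedClient) return cachedClient
  // In the real code this step is `const { default: OpenAIClass } = await
  // import('openai')`; deferring the import to the first call avoids the TDZ
  // error caused by Bun's code-splitting initialization order.
  cachedClient = { apiKey: process.env.OPENAI_API_KEY ?? '' }
  return cachedClient
}

// Lazy getter replacing a module-level constant (fix 3): built on first access,
// so it can never be read as `undefined` during module initialization.
let defaultModelMap: Record<string, string> | null = null

function getDefaultModelMap(): Record<string, string> {
  if (!defaultModelMap) {
    defaultModelMap = { default: 'example-model' } // illustrative entries
  }
  return defaultModelMap
}
```

Both the client and the map are cached after first construction, so repeated calls return the same instance.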
📝 Walkthrough

This PR refactors OpenAI and Grok client initialization from synchronous to asynchronous with dynamic module imports, establishes concrete typed tool-progress definitions replacing auto-generated stubs, normalizes component message content handling, and adds a post-login model-string reset for fresh provider initialization.

Changes: API Client Async Refactor and Type Safety

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/services/api/grok/client.ts (1)
15-41: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Prevent duplicate Grok client initialization during concurrent first calls.

Line 20 checks the cache before the awaited dynamic import (Line 22), so concurrent first calls can both construct clients. Gate the non-fetchOverride path with an in-flight promise.

Suggested patch
```diff
 let cachedClient: OpenAI | null = null
+let cachedClientPromise: Promise<OpenAI> | null = null

 export async function getGrokClient(options?: {
 @@
 }): Promise<OpenAI> {
   if (cachedClient) return cachedClient
+  if (!options?.fetchOverride && cachedClientPromise) return cachedClientPromise

-  const { default: OpenAIClass } = await import('openai')
-  const apiKey = process.env.GROK_API_KEY || process.env.XAI_API_KEY || ''
-  const baseURL = process.env.GROK_BASE_URL || DEFAULT_BASE_URL
-
-  const client = new OpenAIClass({
-    apiKey,
-    baseURL,
-    maxRetries: options?.maxRetries ?? 0,
-    timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
-    dangerouslyAllowBrowser: true,
-    fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
-    ...(options?.fetchOverride && { fetch: options.fetchOverride }),
-  })
+  const createClient = async (): Promise<OpenAI> => {
+    const { default: OpenAIClass } = await import('openai')
+    const apiKey = process.env.GROK_API_KEY || process.env.XAI_API_KEY || ''
+    const baseURL = process.env.GROK_BASE_URL || DEFAULT_BASE_URL
+    return new OpenAIClass({
+      apiKey,
+      baseURL,
+      maxRetries: options?.maxRetries ?? 0,
+      timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
+      dangerouslyAllowBrowser: true,
+      fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
+      ...(options?.fetchOverride && { fetch: options.fetchOverride }),
+    })
+  }

-  if (!options?.fetchOverride) {
-    cachedClient = client
-  }
+  if (options?.fetchOverride) return createClient()

-  return client
+  cachedClientPromise = createClient()
+  try {
+    cachedClient = await cachedClientPromise
+    return cachedClient
+  } finally {
+    cachedClientPromise = null
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/services/api/grok/client.ts` around lines 15 - 41, getGrokClient can race when multiple callers hit the uncached path because the cache check happens before the awaited dynamic import; introduce an in-flight promise guard (e.g., a module-scoped inFlightGetGrokClient Promise) used inside getGrokClient so that the first caller sets inFlightGetGrokClient before awaiting import('openai') and other callers await that same promise, and only set cachedClient when options?.fetchOverride is false; ensure you clear the in-flight promise on success or error to avoid stalling future calls and reference the existing symbols cachedClient and getGrokClient when implementing the guard.

src/services/api/openai/client.ts (1)
16-44: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Prevent duplicate OpenAI client initialization during concurrent first calls.

Line 21 checks cachedClient before an await (Line 23), so parallel first calls can both create clients and race to set the cache. Add an in-flight promise gate for the cached (non-fetchOverride) path.

Suggested patch
```diff
 let cachedClient: OpenAI | null = null
+let cachedClientPromise: Promise<OpenAI> | null = null

 export async function getOpenAIClient(options?: {
 @@
 }): Promise<OpenAI> {
   if (cachedClient) return cachedClient
+  if (!options?.fetchOverride && cachedClientPromise) return cachedClientPromise

-  const { default: OpenAIClass } = await import('openai')
-  const apiKey = process.env.OPENAI_API_KEY || ''
-  const baseURL = process.env.OPENAI_BASE_URL
-
-  const client = new OpenAIClass({
-    apiKey,
-    ...(baseURL && { baseURL }),
-    maxRetries: options?.maxRetries ?? 0,
-    timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
-    dangerouslyAllowBrowser: true,
-    ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
-    ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
-    fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
-    ...(options?.fetchOverride && { fetch: options.fetchOverride }),
-  })
+  const createClient = async (): Promise<OpenAI> => {
+    const { default: OpenAIClass } = await import('openai')
+    const apiKey = process.env.OPENAI_API_KEY || ''
+    const baseURL = process.env.OPENAI_BASE_URL
+    return new OpenAIClass({
+      apiKey,
+      ...(baseURL && { baseURL }),
+      maxRetries: options?.maxRetries ?? 0,
+      timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
+      dangerouslyAllowBrowser: true,
+      ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
+      ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
+      fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
+      ...(options?.fetchOverride && { fetch: options.fetchOverride }),
+    })
+  }

-  if (!options?.fetchOverride) {
-    cachedClient = client
-  }
+  if (options?.fetchOverride) return createClient()

-  return client
+  cachedClientPromise = createClient()
+  try {
+    cachedClient = await cachedClientPromise
+    return cachedClient
+  } finally {
+    cachedClientPromise = null
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/services/api/openai/client.ts` around lines 16 - 44, getOpenAIClient races on the first concurrent calls because cachedClient is checked before the await import('openai'), so multiple callers can create duplicate clients; introduce a module-level inFlight promise (e.g., inFlightOpenAIClientPromise) used only for the non-fetchOverride path: if cachedClient exists return it, else if inFlightOpenAIClientPromise exists await and return its result, otherwise set inFlightOpenAIClientPromise to the promise that imports 'openai' and constructs the client (ensure you clear inFlightOpenAIClientPromise on error and set cachedClient once creation succeeds), and for options.fetchOverride bypass the cache/in-flight gate as current code does.
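The in-flight promise gate recommended for both clients can be reduced to a generic, runnable sketch (illustrative names; `createExpensiveClient` stands in for the real `await import('openai')` plus constructor):

```typescript
// Generic sketch of the in-flight promise guard (illustrative, not project
// code). Concurrent first callers share one construction promise instead of
// each passing the cache check and constructing their own client.
type Client = { id: number }

let cached: Client | null = null
let inFlight: Promise<Client> | null = null
let constructions = 0

async function createExpensiveClient(): Promise<Client> {
  constructions += 1
  // Simulates the awaited dynamic import that opens the race window.
  await new Promise(resolve => setTimeout(resolve, 10))
  return { id: constructions }
}

async function getClient(): Promise<Client> {
  if (cached) return cached
  if (inFlight) return inFlight // concurrent callers await the same promise
  inFlight = createExpensiveClient()
  try {
    cached = await inFlight
    return cached
  } finally {
    inFlight = null // clear on success or error so future calls can retry
  }
}
```

Without the `inFlight` gate, two concurrent first calls would both pass the `cached` check during the awaited import and construct two clients.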
🧹 Nitpick comments (1)
packages/builtin-tools/src/tools/AgentTool/UI.tsx (1)
36-45: ⚡ Quick win

Consider adding an Array.isArray check for defensive type narrowing.

After eliminating null, undefined, and string, the code assumes content is an array and casts directly. While this is justified by the SDK's MessageContent type definition, adding an explicit Array.isArray(content) guard would align with the coding guideline to prefer type guards over forced type casting and protect against unexpected runtime shapes.

🛡️ Proposed defensive type guard

```diff
 function asContentBlockArray(content: MessageContent | undefined): BetaContentBlock[] {
   if (content == null) {
     return [];
   }
   if (typeof content === 'string') {
     return [];
   }
+  if (!Array.isArray(content)) {
+    return [];
+  }
   return content as BetaContentBlock[];
 }
```

As per coding guidelines: "Use type guards (type narrowing) with union types instead of forced type casting."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/builtin-tools/src/tools/AgentTool/UI.tsx` around lines 36 - 45, The function asContentBlockArray currently casts content to BetaContentBlock[] after checking for null/undefined and string; instead, add an explicit Array.isArray(content) type guard in asContentBlockArray to narrow the type safely—if Array.isArray(content) return it as BetaContentBlock[]; otherwise return an empty array—so you avoid forced casting and handle unexpected runtime shapes.
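The suggested narrowing can be tried standalone with local stand-in types (the real code uses the SDK's MessageContent and BetaContentBlock; these are simplified placeholders):

```typescript
// Runnable sketch of the defensive narrowing suggested above. The types are
// local stand-ins for the SDK's MessageContent / BetaContentBlock.
type ContentBlock = { type: string; text?: string }
type MessageContent = string | ContentBlock[]

function asContentBlockArray(content: MessageContent | undefined): ContentBlock[] {
  if (content == null) return []
  if (typeof content === 'string') return []
  // Array.isArray narrows the union safely; any unexpected runtime shape
  // falls through to an empty array instead of being force-cast.
  if (!Array.isArray(content)) return []
  return content
}
```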
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@build.ts`:
- Line 62: The project currently externalizes 'openai' in build.ts while the
code (src/services/api/openai/client.ts and src/services/api/grok/client.ts)
dynamically imports it at runtime, but 'openai' is only listed in
devDependencies; update package.json to move 'openai' from devDependencies into
dependencies so production installs include it, then reinstall/update the
lockfile (npm/yarn/pnpm) to ensure the runtime package is present; you do not
need to change the external array in build.ts.
In `@src/utils/model/modelStrings.ts`:
- Around line 150-152: resetModelStrings currently races with in-flight async
updaters (like updateBedrockModelStrings) which can overwrite the reset with
stale data; to fix, add a lightweight generation/epoch token to the module state
(e.g., modelStringsEpoch) and increment it inside resetModelStrings, then have
async writers such as updateBedrockModelStrings capture the epoch when they
start and verify it still matches before calling setModelStringsState (or bail
out), ensuring stale async completions do not revert the cleared cache.
---
Outside diff comments:
In `@src/services/api/grok/client.ts`:
- Around line 15-41: getGrokClient can race when multiple callers hit the
uncached path because the cache check happens before the awaited dynamic import;
introduce an in-flight promise guard (e.g., a module-scoped
inFlightGetGrokClient Promise) used inside getGrokClient so that the first
caller sets inFlightGetGrokClient before awaiting import('openai') and other
callers await that same promise, and only set cachedClient when
options?.fetchOverride is false; ensure you clear the in-flight promise on
success or error to avoid stalling future calls and reference the existing
symbols cachedClient and getGrokClient when implementing the guard.
In `@src/services/api/openai/client.ts`:
- Around line 16-44: getOpenAIClient races on the first concurrent calls because
cachedClient is checked before the await import('openai'), so multiple callers
can create duplicate clients; introduce a module-level inFlight promise (e.g.,
inFlightOpenAIClientPromise) used only for the non-fetchOverride path: if
cachedClient exists return it, else if inFlightOpenAIClientPromise exists await
and return its result, otherwise set inFlightOpenAIClientPromise to the promise
that imports 'openai' and constructs the client (ensure you clear
inFlightOpenAIClientPromise on error and set cachedClient once creation
succeeds), and for options.fetchOverride bypass the cache/in-flight gate as
current code does.
---
Nitpick comments:
In `@packages/builtin-tools/src/tools/AgentTool/UI.tsx`:
- Around line 36-45: The function asContentBlockArray currently casts content to
BetaContentBlock[] after checking for null/undefined and string; instead, add an
explicit Array.isArray(content) type guard in asContentBlockArray to narrow the
type safely—if Array.isArray(content) return it as BetaContentBlock[]; otherwise
return an empty array—so you avoid forced casting and handle unexpected runtime
shapes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e7157721-678e-4658-bf0a-02086cd702b2
📒 Files selected for processing (13)
- build.ts
- packages/builtin-tools/src/tools/AgentTool/AgentTool.tsx
- packages/builtin-tools/src/tools/AgentTool/UI.tsx
- packages/builtin-tools/src/tools/SkillTool/SkillTool.ts
- src/commands/login/login.tsx
- src/services/api/grok/client.ts
- src/services/api/grok/index.ts
- src/services/api/grok/modelMapping.ts
- src/services/api/openai/client.ts
- src/services/api/openai/index.ts
- src/services/api/openai/modelMapping.ts
- src/types/tools.ts
- src/utils/model/modelStrings.ts
```ts
  splitting: true,
  define: getMacroDefines(),
  features,
  external: ['openai'],
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
echo "Checking all package.json files for openai dependency placement..."
fd -a '^package\.json$' | while read -r f; do
  echo "== $f =="
  jq -r '{
    name: (.name // "unknown"),
    openai_in_dependencies: (.dependencies.openai // null),
    openai_in_optionalDependencies: (.optionalDependencies.openai // null),
    openai_in_devDependencies: (.devDependencies.openai // null)
  }' "$f"
done
echo
echo "Confirming bundler externalization line:"
rg -n "external:\\s*\\[[^\\]]*openai" build.ts
```

Repository: claude-code-best/claude-code
Length of output: 3438
🏁 Script executed:
```shell
cat -n build.ts | head -100
```

Repository: claude-code-best/claude-code
Length of output: 3651
🏁 Script executed:
```shell
# Search for imports and usage of 'openai' in the codebase
rg -i "import.*openai|from\s+['\"]openai|require\(['\"]openai" --type ts --type tsx -B 2 -A 2
```

Repository: claude-code-best/claude-code
Length of output: 100
🏁 Script executed:
```shell
# Check what entry points/outputs are defined in build.ts
rg -A 5 -B 5 "entrypoint|entry|outdir|outfile" build.ts
```

Repository: claude-code-best/claude-code
Length of output: 3056
🏁 Script executed:
```shell
rg "openai" --type ts -B 2 -A 2 | head -100
```

Repository: claude-code-best/claude-code
Length of output: 5825
🏁 Script executed:
```shell
# Check specifically if openai is imported in cli.tsx or other runtime code
rg "from\s+['\"]openai|import.*openai" --type ts -A 3
```

Repository: claude-code-best/claude-code
Length of output: 4714
Move openai from devDependencies to dependencies.
Line 62 externalizes openai (preventing bundling), but openai is dynamically imported at runtime in src/services/api/openai/client.ts and src/services/api/grok/client.ts. Since it's not bundled, the CLI depends on openai being available as a production dependency. Currently it's only in devDependencies, which means production installations will fail when using OpenAI or Grok providers.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@build.ts` at line 62, The project currently externalizes 'openai' in build.ts
while the code (src/services/api/openai/client.ts and
src/services/api/grok/client.ts) dynamically imports it at runtime, but 'openai'
is only listed in devDependencies; update package.json to move 'openai' from
devDependencies into dependencies so production installs include it, then
reinstall/update the lockfile (npm/yarn/pnpm) to ensure the runtime package is
present; you do not need to change the external array in build.ts.
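If npm is the package manager in use, the move could be sketched with `npm pkg` (the version range below is illustrative; match the one already in your lockfile):

```shell
# Move openai from devDependencies to dependencies so production installs
# include it. "^4.0.0" is an illustrative range, not the project's actual pin.
npm pkg delete devDependencies.openai
npm pkg set dependencies.openai="^4.0.0"
npm install   # refresh the lockfile
```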
```ts
export function resetModelStrings(): void {
  setModelStringsState(null as unknown as ModelStrings)
}
```
Prevent stale async Bedrock writes after cache reset.
On Line 151, resetting the cache can race with an older in-flight updateBedrockModelStrings() call, which may later overwrite the reset state with stale Bedrock mappings.
Suggested fix
```diff
 const MODEL_KEYS = Object.keys(ALL_MODEL_CONFIGS) as ModelKey[]
+let modelStringsGeneration = 0

 const updateBedrockModelStrings = sequential(async () => {
+  const generationAtStart = modelStringsGeneration
   if (getModelStringsState() !== null) {
     return
   }
   try {
     const ms = await getBedrockModelStrings()
+    if (
+      generationAtStart !== modelStringsGeneration ||
+      getAPIProvider() !== 'bedrock' ||
+      getModelStringsState() !== null
+    ) {
+      return
+    }
     setModelStringsState(ms)
   } catch (error) {
     logError(error as Error)
   }
 })

 export function resetModelStrings(): void {
+  modelStringsGeneration += 1
   setModelStringsState(null as unknown as ModelStrings)
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/utils/model/modelStrings.ts` around lines 150 - 152, resetModelStrings
currently races with in-flight async updaters (like updateBedrockModelStrings)
which can overwrite the reset with stale data; to fix, add a lightweight
generation/epoch token to the module state (e.g., modelStringsEpoch) and
increment it inside resetModelStrings, then have async writers such as
updateBedrockModelStrings capture the epoch when they start and verify it still
matches before calling setModelStringsState (or bail out), ensuring stale async
completions do not revert the cleared cache.
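The generation-token pattern described above can be demonstrated in isolation (illustrative names; the timed promise stands in for the real async Bedrock fetch):

```typescript
// Sketch of the generation/epoch guard (illustrative, not project code).
// A reset bumps the generation; any async writer that started earlier
// notices the mismatch after its await and bails out instead of writing.
let state: string[] | null = null
let generation = 0

function reset(): void {
  generation += 1 // invalidate updaters that started before this reset
  state = null
}

async function updateFromRemote(fetched: Promise<string[]>): Promise<void> {
  const generationAtStart = generation
  const result = await fetched
  // A reset happened while we were awaiting: writing now would resurrect
  // stale data that the reset intentionally cleared.
  if (generationAtStart !== generation) return
  state = result
}
```

Here a reset issued while the fetch is in flight wins: the late completion sees the bumped generation and leaves the cleared state alone.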
Fleshed out the tool-related type field definitions.