
My fix branch #1223

Closed

xiaoFjun-eng wants to merge 2 commits into claude-code-best:main from xiaoFjun-eng:my-fix-branch

Conversation

@xiaoFjun-eng
Contributor

@xiaoFjun-eng xiaoFjun-eng commented May 13, 2026

Improved the tool-related `type` field definitions.



Summary by CodeRabbit

Release Notes

  • New Features

    • Post-login workflow now includes automatic settings cache reset and managed environment variables initialization.
  • Improvements

    • API error logs now include complete stack traces for better debugging and troubleshooting.
    • Tool progress types strengthened with concrete TypeScript definitions for improved reliability.

Review Change Stack

1. Fix API Error: undefined is not an object (evaluating '_OpenAI_instances.add')
   - build.ts: add external: ['openai'] so the Bun bundler no longer bundles the openai SDK
     into a chunk, preventing the lazy initialization of private class fields
     (WeakSet/WeakMap) from running out of order

2. Fix API Error: Cannot access 'OpenAI' before initialization
   - openai/client.ts, grok/client.ts: change the static import of OpenAI to import type
     (type-only) plus a dynamic await import('openai') inside the function body, fully
     avoiding the TDZ (Temporal Dead Zone) issue produced by Bun code splitting
   - openai/index.ts, grok/index.ts: add await at the corresponding call sites

3. Fix API Error: undefined is not an object (evaluating 'DEFAULT_MODEL_MAP[cleanModel]')
   - openai/modelMapping.ts, grok/modelMapping.ts: convert the module-level constants
     DEFAULT_MODEL_MAP / DEFAULT_FAMILY_MAP into the functions getDefaultModelMap() /
     getDefaultFamilyMap(), so the constants can no longer be undefined at
     module-initialization time due to Bun's lazy-loading order
   - Covers both the first-login (/login) flow and subsequent re-entry into the UI

4. Reset the model-string cache after login (login.tsx)
   - After a successful /login, call resetSettingsCache(), applyConfigEnvironmentVariables(),
     and resetModelStrings() in sequence, so model mappings take effect immediately after
     switching providers, with no stale cache left behind

5. New resetModelStrings() utility function (modelStrings.ts)
   - Exposes a cache-reset entry point for the login flow to call

6. Enhanced error logging (openai/index.ts, grok/index.ts)
   - catch blocks now also log error.stack, making it easier to pinpoint runtime errors
     in build artifacts
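Items 2 and 3 above share one idea: defer module evaluation until first use. A minimal, self-contained sketch of the type-only-import pattern follows; it uses node:path as a stand-in for the openai package, and getPathModule is an illustrative name mirroring the PR's getOpenAIClient, not its actual code.

```typescript
// Sketch of the "import type + dynamic import" pattern from item 2.
// 'node:path' stands in for 'openai'. Type-only references are erased at
// compile time, so nothing from the target module runs during module
// initialization (no TDZ exposure from code splitting).
type PathModule = typeof import('node:path')

let cached: PathModule | null = null

export async function getPathModule(): Promise<PathModule> {
  if (cached) return cached
  // The actual module is loaded lazily, on first call, at runtime.
  cached = await import('node:path')
  return cached
}
```

In the PR itself the same shape is reportedly applied with import type for the OpenAI class plus await import('openai') inside the client factory.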
@coderabbitai
Contributor

coderabbitai Bot commented May 13, 2026

📝 Walkthrough

This PR refactors OpenAI and Grok client initialization from synchronous to asynchronous with dynamic module imports, establishes concrete typed tool-progress definitions replacing auto-generated stubs, normalizes component message content handling, and adds post-login model-string reset for fresh provider initialization.

Changes

API Client Async Refactor and Type Safety

  • Tool progress type definitions (src/types/tools.ts)
    Concrete typed tool-progress definitions replace `any` stubs with discriminated unions for bash, PowerShell, agent, skill, MCP, web-search, task, workflow, and REPL progress payloads, and unify them in ToolProgressData.
  • OpenAI client async refactor and caller (src/services/api/openai/client.ts, src/services/api/openai/index.ts, src/services/api/openai/modelMapping.ts)
    The OpenAI client factory uses a type-only import with dynamic import('openai') inside async getOpenAIClient, returning Promise<OpenAI>. queryModelOpenAI awaits client initialization, includes stack traces in error logs, and model mapping is refactored into a getter function.
  • Grok client async refactor and caller (src/services/api/grok/client.ts, src/services/api/grok/index.ts, src/services/api/grok/modelMapping.ts)
    The Grok client factory parallels OpenAI: type-only import, async dynamic import inside getGrokClient, returning Promise<OpenAI>. queryModelGrok awaits the client, includes stack traces in error logs, and model/family mappings are converted to getter functions.
  • Build configuration for dynamic imports (build.ts)
    The openai module is marked external in Bun.build to exclude it from the bundle and resolve it at runtime, supporting the dynamic-import strategy.
  • Component type assertions and content normalization (packages/builtin-tools/src/tools/AgentTool/UI.tsx, packages/builtin-tools/src/tools/AgentTool/AgentTool.tsx, packages/builtin-tools/src/tools/SkillTool/SkillTool.ts)
    AgentTool/UI introduces an asContentBlockArray helper to normalize message content to array form, updating search/read detection, tool-use/tool-result counting, token aggregation, and tool-info extraction. AgentTool and SkillTool add explicit type assertions on message fields in progress callbacks.
  • Post-login model-string reset and environment application (src/utils/model/modelStrings.ts, src/commands/login/login.tsx)
    New resetModelStrings() helper clears cached model-strings state. Login call() now resets the settings cache, applies managed environment variables, and calls resetModelStrings() before the existing remote refreshes, ensuring fresh model availability after login.
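The tool-progress typing described above can be illustrated with a small discriminated union. The variant names and fields below are invented for illustration; the real ToolProgressData in src/types/tools.ts covers more tools (PowerShell, skill, MCP, task, workflow, REPL) and different fields.

```typescript
// Hypothetical sketch of discriminated tool-progress types; not the PR's
// actual definitions. The literal `type` tag is the discriminant.
type BashProgress = { type: 'bash'; stdout: string; exitCode?: number }
type AgentProgress = { type: 'agent'; message: string }
type WebSearchProgress = { type: 'web_search'; query: string; resultCount: number }

type ToolProgressData = BashProgress | AgentProgress | WebSearchProgress

// Switching on `type` narrows the union, so each branch gets full field
// typing instead of the `any` that auto-generated stubs allowed.
export function describeProgress(p: ToolProgressData): string {
  switch (p.type) {
    case 'bash':
      return `bash: ${p.stdout.length} bytes of output`
    case 'agent':
      return `agent: ${p.message}`
    case 'web_search':
      return `search "${p.query}": ${p.resultCount} results`
  }
}
```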

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐰 With dynamic imports, clients now await,
Types bloom from shadows—no more any fate.
Content blocks normalize, messages align,
Post-login refresh makes the models shine.
External bundles keep the build so lean!

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 60.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check: ❓ Inconclusive. The title 'My fix branch' is generic and does not convey the changeset's primary purpose. Resolution: use a descriptive title that summarizes the main change, such as 'Make OpenAI SDK dynamic import and reset model strings on login' or 'Fix Bun bundling and add tool progress type definitions'.

✅ Passed checks (3 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Linked Issues check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: ✅ Passed. Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
⚔️ Resolve merge conflicts
  • Resolve merge conflict in branch my-fix-branch


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/services/api/grok/client.ts (1)

15-41: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Prevent duplicate Grok client initialization during concurrent first calls.

Line 20 checks cache before the awaited dynamic import (Line 22), so concurrent first calls can both construct clients. Gate the non-fetchOverride path with an in-flight promise.

Suggested patch
 let cachedClient: OpenAI | null = null
+let cachedClientPromise: Promise<OpenAI> | null = null
 
 export async function getGrokClient(options?: {
@@
 }): Promise<OpenAI> {
   if (cachedClient) return cachedClient
+  if (!options?.fetchOverride && cachedClientPromise) return cachedClientPromise
 
-  const { default: OpenAIClass } = await import('openai')
-  const apiKey = process.env.GROK_API_KEY || process.env.XAI_API_KEY || ''
-  const baseURL = process.env.GROK_BASE_URL || DEFAULT_BASE_URL
-
-  const client = new OpenAIClass({
-    apiKey,
-    baseURL,
-    maxRetries: options?.maxRetries ?? 0,
-    timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
-    dangerouslyAllowBrowser: true,
-    fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
-    ...(options?.fetchOverride && { fetch: options.fetchOverride }),
-  })
+  const createClient = async (): Promise<OpenAI> => {
+    const { default: OpenAIClass } = await import('openai')
+    const apiKey = process.env.GROK_API_KEY || process.env.XAI_API_KEY || ''
+    const baseURL = process.env.GROK_BASE_URL || DEFAULT_BASE_URL
+    return new OpenAIClass({
+      apiKey,
+      baseURL,
+      maxRetries: options?.maxRetries ?? 0,
+      timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
+      dangerouslyAllowBrowser: true,
+      fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
+      ...(options?.fetchOverride && { fetch: options.fetchOverride }),
+    })
+  }
 
-  if (!options?.fetchOverride) {
-    cachedClient = client
-  }
+  if (options?.fetchOverride) return createClient()
 
-  return client
+  cachedClientPromise = createClient()
+  try {
+    cachedClient = await cachedClientPromise
+    return cachedClient
+  } finally {
+    cachedClientPromise = null
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/api/grok/client.ts` around lines 15 - 41, getGrokClient can race
when multiple callers hit the uncached path because the cache check happens
before the awaited dynamic import; introduce an in-flight promise guard (e.g., a
module-scoped inFlightGetGrokClient Promise) used inside getGrokClient so that
the first caller sets inFlightGetGrokClient before awaiting import('openai') and
other callers await that same promise, and only set cachedClient when
options?.fetchOverride is false; ensure you clear the in-flight promise on
success or error to avoid stalling future calls and reference the existing
symbols cachedClient and getGrokClient when implementing the guard.
src/services/api/openai/client.ts (1)

16-44: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Prevent duplicate OpenAI client initialization during concurrent first calls.

Line 21 checks cachedClient before an await (Line 23), so parallel first calls can both create clients and race to set cache. Add an in-flight promise gate for the cached (non-fetchOverride) path.

Suggested patch
 let cachedClient: OpenAI | null = null
+let cachedClientPromise: Promise<OpenAI> | null = null
 
 export async function getOpenAIClient(options?: {
@@
 }): Promise<OpenAI> {
   if (cachedClient) return cachedClient
+  if (!options?.fetchOverride && cachedClientPromise) return cachedClientPromise
 
-  const { default: OpenAIClass } = await import('openai')
-  const apiKey = process.env.OPENAI_API_KEY || ''
-  const baseURL = process.env.OPENAI_BASE_URL
-
-  const client = new OpenAIClass({
-    apiKey,
-    ...(baseURL && { baseURL }),
-    maxRetries: options?.maxRetries ?? 0,
-    timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
-    dangerouslyAllowBrowser: true,
-    ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
-    ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
-    fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
-    ...(options?.fetchOverride && { fetch: options.fetchOverride }),
-  })
+  const createClient = async (): Promise<OpenAI> => {
+    const { default: OpenAIClass } = await import('openai')
+    const apiKey = process.env.OPENAI_API_KEY || ''
+    const baseURL = process.env.OPENAI_BASE_URL
+    return new OpenAIClass({
+      apiKey,
+      ...(baseURL && { baseURL }),
+      maxRetries: options?.maxRetries ?? 0,
+      timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
+      dangerouslyAllowBrowser: true,
+      ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
+      ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
+      fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
+      ...(options?.fetchOverride && { fetch: options.fetchOverride }),
+    })
+  }
 
-  if (!options?.fetchOverride) {
-    cachedClient = client
-  }
+  if (options?.fetchOverride) return createClient()
 
-  return client
+  cachedClientPromise = createClient()
+  try {
+    cachedClient = await cachedClientPromise
+    return cachedClient
+  } finally {
+    cachedClientPromise = null
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/api/openai/client.ts` around lines 16 - 44, getOpenAIClient
races on the first concurrent calls because cachedClient is checked before the
await import('openai'), so multiple callers can create duplicate clients;
introduce a module-level inFlight promise (e.g., inFlightOpenAIClientPromise)
used only for the non-fetchOverride path: if cachedClient exists return it, else
if inFlightOpenAIClientPromise exists await and return its result, otherwise set
inFlightOpenAIClientPromise to the promise that imports 'openai' and constructs
the client (ensure you clear inFlightOpenAIClientPromise on error and set
cachedClient once creation succeeds), and for options.fetchOverride bypass the
cache/in-flight gate as current code does.
🧹 Nitpick comments (1)
packages/builtin-tools/src/tools/AgentTool/UI.tsx (1)

36-45: ⚡ Quick win

Consider adding Array.isArray check for defensive type narrowing.

After eliminating null, undefined, and string, the code assumes content is an array and casts directly. While this is justified by the SDK's MessageContent type definition, adding an explicit Array.isArray(content) guard would align with the coding guideline to prefer type guards over forced type casting and protect against unexpected runtime shapes.

🛡️ Proposed defensive type guard
 function asContentBlockArray(content: MessageContent | undefined): BetaContentBlock[] {
   if (content == null) {
     return [];
   }
   if (typeof content === 'string') {
     return [];
   }
+  if (!Array.isArray(content)) {
+    return [];
+  }
   return content as BetaContentBlock[];
 }

As per coding guidelines: "Use type guards (type narrowing) with union types instead of forced type casting."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/builtin-tools/src/tools/AgentTool/UI.tsx` around lines 36 - 45, The
function asContentBlockArray currently casts content to BetaContentBlock[] after
checking for null/undefined and string; instead, add an explicit
Array.isArray(content) type guard in asContentBlockArray to narrow the type
safely—if Array.isArray(content) return it as BetaContentBlock[]; otherwise
return an empty array—so you avoid forced casting and handle unexpected runtime
shapes.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e7157721-678e-4658-bf0a-02086cd702b2

📥 Commits

Reviewing files that changed from the base of the PR and between 3d7b32f and 99ac2eb.

📒 Files selected for processing (13)
  • build.ts
  • packages/builtin-tools/src/tools/AgentTool/AgentTool.tsx
  • packages/builtin-tools/src/tools/AgentTool/UI.tsx
  • packages/builtin-tools/src/tools/SkillTool/SkillTool.ts
  • src/commands/login/login.tsx
  • src/services/api/grok/client.ts
  • src/services/api/grok/index.ts
  • src/services/api/grok/modelMapping.ts
  • src/services/api/openai/client.ts
  • src/services/api/openai/index.ts
  • src/services/api/openai/modelMapping.ts
  • src/types/tools.ts
  • src/utils/model/modelStrings.ts

Comment thread: build.ts

    splitting: true,
    define: getMacroDefines(),
    features,
    external: ['openai'],
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Checking all package.json files for openai dependency placement..."
fd -a '^package\.json$' | while read -r f; do
  echo "== $f =="
  jq -r '{
    name: (.name // "unknown"),
    openai_in_dependencies: (.dependencies.openai // null),
    openai_in_optionalDependencies: (.optionalDependencies.openai // null),
    openai_in_devDependencies: (.devDependencies.openai // null)
  }' "$f"
done

echo
echo "Confirming bundler externalization line:"
rg -n "external:\\s*\\[[^\\]]*openai" build.ts

Repository: claude-code-best/claude-code

Length of output: 3438


🏁 Script executed:

cat -n build.ts | head -100

Repository: claude-code-best/claude-code

Length of output: 3651


🏁 Script executed:

# Search for imports and usage of 'openai' in the codebase
rg -i "import.*openai|from\s+['\"]openai|require\(['\"]openai" --type ts --type tsx -B 2 -A 2

Repository: claude-code-best/claude-code

Length of output: 100


🏁 Script executed:

# Check what entry points/outputs are defined in build.ts
rg -A 5 -B 5 "entrypoint|entry|outdir|outfile" build.ts

Repository: claude-code-best/claude-code

Length of output: 3056


🏁 Script executed:

rg "openai" --type ts -B 2 -A 2 | head -100

Repository: claude-code-best/claude-code

Length of output: 5825


🏁 Script executed:

# Check specifically if openai is imported in cli.tsx or other runtime code
rg "from\s+['\"]openai|import.*openai" --type ts -A 3

Repository: claude-code-best/claude-code

Length of output: 4714


Move openai from devDependencies to dependencies.

Line 62 externalizes openai (preventing bundling), but openai is dynamically imported at runtime in src/services/api/openai/client.ts and src/services/api/grok/client.ts. Since it's not bundled, the CLI depends on openai being available as a production dependency. Currently it's only in devDependencies, which means production installations will fail when using OpenAI or Grok providers.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@build.ts` at line 62, The project currently externalizes 'openai' in build.ts
while the code (src/services/api/openai/client.ts and
src/services/api/grok/client.ts) dynamically imports it at runtime, but 'openai'
is only listed in devDependencies; update package.json to move 'openai' from
devDependencies into dependencies so production installs include it, then
reinstall/update the lockfile (npm/yarn/pnpm) to ensure the runtime package is
present; you do not need to change the external array in build.ts.

Comment on lines +150 to +152
export function resetModelStrings(): void {
setModelStringsState(null as unknown as ModelStrings)
}
Contributor


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Prevent stale async Bedrock writes after cache reset.

On Line 151, resetting the cache can race with an older in-flight updateBedrockModelStrings() call, which may later overwrite the reset state with stale Bedrock mappings.

Suggested fix
 const MODEL_KEYS = Object.keys(ALL_MODEL_CONFIGS) as ModelKey[]
+let modelStringsGeneration = 0

 const updateBedrockModelStrings = sequential(async () => {
+  const generationAtStart = modelStringsGeneration
   if (getModelStringsState() !== null) {
     return
   }
   try {
     const ms = await getBedrockModelStrings()
+    if (
+      generationAtStart !== modelStringsGeneration ||
+      getAPIProvider() !== 'bedrock' ||
+      getModelStringsState() !== null
+    ) {
+      return
+    }
     setModelStringsState(ms)
   } catch (error) {
     logError(error as Error)
   }
 })

 export function resetModelStrings(): void {
+  modelStringsGeneration += 1
   setModelStringsState(null as unknown as ModelStrings)
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/utils/model/modelStrings.ts` around lines 150 - 152, resetModelStrings
currently races with in-flight async updaters (like updateBedrockModelStrings)
which can overwrite the reset with stale data; to fix, add a lightweight
generation/epoch token to the module state (e.g., modelStringsEpoch) and
increment it inside resetModelStrings, then have async writers such as
updateBedrockModelStrings capture the epoch when they start and verify it still
matches before calling setModelStringsState (or bail out), ensuring stale async
completions do not revert the cleared cache.
