feat: support ai-sdk, add stream-switch #62
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Conversation
Walkthrough

Adds an AI abstraction (BaseAi) with SDK and REST backends, a runtime factory (getAiInstance), streaming vs non‑streaming chat paths controlled by streamSwitch, expanded llmConfig and per‑server transport options, streaming assembly and transformer utilities, a JSON Schema→Zod converter, updated types/exports, README examples, and new ai SDK deps.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor User
  participant Client as McpClientChat
  participant Factory as getAiInstance
  participant AI as AiSDK / AiRestApi
  rect rgba(232,240,255,0.7)
    note over Client,Factory: Initialization
    User->>Client: init(options with llmConfig)
    Client->>Factory: getAiInstance(llmConfig)
    alt llmConfig.useSDK == true
      Factory-->>Client: AiSDK instance
    else
      Factory-->>Client: AiRestApi instance
    end
  end
  rect rgba(240,255,240,0.6)
    note over User,Client: Chat flow (streamSwitch)
    User->>Client: chat()
    Client->>Client: getChatBody()
    alt streamSwitch == true
      Client->>AI: aiInstance.chatStream(chatBody)
      AI-->>Client: ReadableStream
      Client->>Client: generateStreamingResponses(stream, writeMessageDelta)
      Client-->>User: streaming deltas... then data: [DONE]
    else
      Client->>AI: aiInstance.chat(chatBody)
      AI-->>Client: ChatCompleteResponse / Error
      Client-->>User: final response
    end
  end
```
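The factory step in the walkthrough above can be sketched as follows. The class and config shapes here (`BaseAi`, `AiSDK`, `AiRestApi`, `LlmConfig`) are simplified stand-ins rather than the package's actual exports; only the `useSDK` selection logic mirrors the described behavior.

```typescript
// Minimal stand-ins for the package's real classes (assumed shapes).
interface LlmConfig {
  useSDK?: boolean;
  url?: string;
  apiKey?: string;
  model: string;
}

abstract class BaseAi {
  constructor(protected llmConfig: LlmConfig) {}
  abstract backendName(): string;
}

class AiSDK extends BaseAi {
  backendName(): string {
    return 'sdk';
  }
}

class AiRestApi extends BaseAi {
  backendName(): string {
    return 'rest';
  }
}

// The factory: pick the backend based on the useSDK discriminator.
function getAiInstance(llmConfig: LlmConfig): BaseAi {
  return llmConfig.useSDK ? new AiSDK(llmConfig) : new AiRestApi(llmConfig);
}

console.log(getAiInstance({ useSDK: true, model: 'gpt-4o' }).backendName()); // sdk
console.log(getAiInstance({ model: 'gpt-4o', url: 'https://example.invalid' }).backendName()); // rest
```

Because both backends share the `BaseAi` surface, the chat loop can stay agnostic about which transport is in use.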
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Pre-merge checks (2 passed, 1 warning)

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal). Please share your feedback with us on this Discord post. 📜 Recent review detailsConfiguration used: CodeRabbit UI Review profile: CHILL Plan: Pro 📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Actionable comments posted: 22
🔭 Outside diff range comments (4)
packages/mcp/mcp-client-chat/src/init.ts (1)

11-13: Add a runtime guard for unsupported agentStrategy values

If an invalid value is passed at runtime (e.g., from untyped JS), Chat resolves to undefined and will throw later. Fail fast with a clear error.
```diff
 const agentStrategy = options.agentStrategy ?? AgentStrategy.FUNCTION_CALLING;
-const Chat = chatMap[agentStrategy];
+const Chat = chatMap[agentStrategy as AgentStrategy];
+if (!Chat) {
+  throw new Error(`Unsupported agentStrategy: ${String(agentStrategy)}`);
+}
 const mcpClient = new Chat(options);
```

packages/mcp/mcp-client-chat/src/react/reActChat.ts (2)
116-123: Fix fallback path to respect streaming mode in error scenarios

The catch block always reads from NonStreamingChoice, which is wrong when streaming is enabled.

Apply this diff:
```diff
-const text = (response.choices[0] as NonStreamingChoice).message.content ?? '';
+const text = this.options.streamSwitch
+  ? ((response.choices[0] as StreamingChoice).delta.content ?? '')
+  : ((response.choices[0] as NonStreamingChoice).message.content ?? '');
```
127-135: Align ChatBody with expected fields: map streamSwitch -> stream and omit non-protocol fields

`ChatBody` expects `stream?: boolean`, but you're spreading `streamSwitch` (and possibly `useSDK`) into the payload. This can leak non-protocol fields downstream.

Apply this diff:
```diff
 protected async getChatBody(): Promise<ChatBody> {
-  const { apiKey, url, systemPrompt, summarySystemPrompt, model, ...llmConfig } = this.options.llmConfig;
-  const chatBody: ChatBody = {
-    model,
-    messages: this.messages,
-    ...llmConfig,
-  };
+  const {
+    apiKey,
+    url,
+    systemPrompt,
+    summarySystemPrompt,
+    model,
+    streamSwitch,
+    // do not propagate selector fields into the request body
+    useSDK: _useSDK,
+    ...callSettings
+  } = this.options.llmConfig as any;
+  const chatBody: ChatBody = {
+    model,
+    messages: this.messages,
+    ...(streamSwitch !== undefined ? { stream: streamSwitch } : {}),
+    ...callSettings,
+  };

   return chatBody;
 }
```

Follow-up: If you prefer stricter typing, add a helper to explicitly map from `LlmConfig` to `ChatBody` instead of relying on `as any`.

packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (1)
72-85: Duplicate interface definition

The `MCPClientOptions` interface is defined twice (lines 72-78 and 80-85) with slightly different properties. This will cause a TypeScript error.

Apply this diff to remove the duplicate and merge the definitions:
```diff
 export interface MCPClientOptions {
   agentStrategy?: AgentStrategy;
   llmConfig: LlmConfig;
   mcpServersConfig: McpServersConfig; // MCP service configuration
   maxIterationSteps?: number; // Maximum execution steps
   streamSwitch?: boolean;
 }
-export interface MCPClientOptions {
-  agentStrategy?: AgentStrategy;
-  llmConfig: LlmConfig;
-  mcpServersConfig: McpServersConfig; // MCP service configuration
-  maxIterationSteps?: number; // Maximum execution steps
-}
```
🧹 Nitpick comments (9)
packages/mcp/mcp-client-chat/package.json (1)
17-22: Avoid relying on transitive provider packages; move SDK providers to peer deps and remove from deps

The library itself shouldn't depend on concrete provider packages that consumers choose at app level. Keeping '@ai-sdk/openai' and '@ai-sdk/provider' as dependencies encourages consumers to rely on transitive deps and can cause version skew. Recommend:
- Remove provider packages from dependencies.
- Document provider installs in README examples (already shown; expand with install commands).
- Optionally add them as peerDependencies (optional), so consumers resolve versions explicitly.
Apply this minimal diff to remove unnecessary direct deps:
```diff
 "dependencies": {
-  "@ai-sdk/openai": "^2.0.10",
-  "@ai-sdk/provider": "^2.0.0",
   "@anthropic-ai/sdk": "^0.41.0",
   "@modelcontextprotocol/sdk": "^1.11.1",
   "ai": "^5.0.10",
   "openai": "^4.98.0",
   "zod": "^3.24.2"
 },
```

If you want to make them optional peer deps (recommended), add this to package.json (outside the changed hunk):
```json
"peerDependencies": {
  "@ai-sdk/openai": "^2.0.10",
  "@ai-sdk/provider": "^2.0.0",
  "ai": "^5.0.10"
},
"peerDependenciesMeta": {
  "@ai-sdk/openai": { "optional": true },
  "@ai-sdk/provider": { "optional": true },
  "ai": { "optional": false }
}
```

packages/mcp/mcp-client-chat/README.md (2)
116-133: Fix reversed link syntax for OpenAI provider section

Links use reversed syntax. Replace "(OpenAI)[...]" with "[OpenAI](...)".

```diff
-### 使用 (OpenAI)[https://ai-sdk.dev/providers/ai-sdk-providers/openai]
+### 使用 [OpenAI](https://ai-sdk.dev/providers/ai-sdk-providers/openai)
```

Additionally, add explicit install instructions so consumers don't rely on transitive deps:
````diff
 ### 使用 [OpenAI](https://ai-sdk.dev/providers/ai-sdk-providers/openai)
+```bash
+# 安装所需依赖(请在你的应用项目中执行)
+npm i ai @ai-sdk/openai
+```
+
 ```typescript
 import { createMCPClientChat } from "@opentiny/tiny-agent-mcp-client-chat";
 import { createOpenAI } from '@ai-sdk/openai';
````
157-188: Fix reversed link syntax for DeepSeek provider; document install of provider package

Correct the link and add an install step for the provider used in the example.

````diff
-### 使用 (DeepSeek)[https://ai-sdk.dev/providers/ai-sdk-providers/deepseek]
+### 使用 [DeepSeek](https://ai-sdk.dev/providers/ai-sdk-providers/deepseek)

 ```typescript
+// 安装所需依赖(请在你的应用项目中执行)
+// npm i ai @ai-sdk/deepseek
+
 import { createMCPClientChat } from "@opentiny/tiny-agent-mcp-client-chat";
 import { createDeepSeek } from '@ai-sdk/deepseek';
````

packages/mcp/mcp-client-chat/src/index.ts (1)
2-2: Preserve backward compatibility for prior './type.js' import path

Switching to './types/index.js' is fine, but if downstream code imports from './type.js', this will be a breaking change. Consider adding a thin compatibility re-export to avoid disruption until the next major.
Add this new file to keep the old path working (outside current range):
```typescript
// packages/mcp/mcp-client-chat/src/type.ts
export * from './types/index.js';
```

Please confirm whether any consumers import from '.../src/type' or published builds referencing 'type.js'. If yes, the compatibility shim above is recommended.
packages/mcp/mcp-client-chat/src/ai/ai-sdk/defaultConfig.ts (1)
4-15: Build provider options defensively to avoid passing undefined fields

When `useSDK: true`, `apiKey` and `url` may be undefined. Only include defined fields to avoid accidental "undefined" headers or base URLs.

Apply this diff:
```diff
 export const getDefaultProvider = (llmConfig: LlmConfig) => {
-  const options: OpenAIProviderSettings = {
-    apiKey: llmConfig.apiKey,
-    baseURL: llmConfig.url,
-  };
-
-  if (llmConfig.headers) {
-    options.headers = llmConfig.headers as Record<string, string>;
-  }
+  const options: OpenAIProviderSettings = {
+    ...(llmConfig.apiKey ? { apiKey: llmConfig.apiKey } : {}),
+    ...(llmConfig.url ? { baseURL: llmConfig.url } : {}),
+    ...(llmConfig.headers ? { headers: llmConfig.headers as Record<string, string> } : {}),
+  };

   return createOpenAI(options);
 };
```

packages/mcp/mcp-client-chat/src/react/reActChat.ts (1)
2-11: Satisfy import sorting rule by reordering named imports

ESLint hints indicate sorting issues for the imported members (ChatBody). Keep the import deterministic.

Apply this diff:
```diff
 import type {
-  ChatCompleteResponse,
-  ChatBody,
+  ChatBody,
+  ChatCompleteResponse,
   MCPClientOptions,
   NonStreamingChoice,
   StreamingChoice,
   Tool,
   ToolCall,
 } from '../types/index.js';
```

packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts (1)
46-84: Consider generating meaningful ID and timestamp values for streaming chunks

Similar to the non-streaming function, this uses placeholder values for `id` and `created`.

Apply this diff for consistency:

```diff
 export function toOpenAIChunk(chunk: TextStreamPart<ToolSet>, model: LanguageModel): ChatCompleteResponse {
   const choice: StreamingChoice = {
     finish_reason: '',
     native_finish_reason: '',
     delta: {
       content: '',
       role: Role.ASSISTANT,
     },
   };
   const result: ChatCompleteResponse = {
-    id: '',
-    created: 0,
+    id: `chat-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`,
+    created: Math.floor(Date.now() / 1000),
     object: 'chat.completion.chunk',
     model: typeof model === 'string' ? model : model.modelId,
     choices: [choice],
   };
```

packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
16-38: Consider improving error handling for better debugging

The error handling returns the error as-is but could provide more context for debugging, especially for network failures.

Apply this diff to enhance error information:

```diff
 async chat(chatBody: ChatBody): Promise<ChatCompleteResponse | Error> {
   const { url, apiKey } = this.llmConfig;

   try {
     const response = await fetch(url, {
       method: 'POST',
       headers: {
         Authorization: `Bearer ${apiKey}`,
         'Content-Type': 'application/json',
       },
       body: JSON.stringify(chatBody),
     });

     if (!response.ok) {
       return new Error(`HTTP error ${response.status}: ${await response.text()}`);
     }

     return (await response.json()) as ChatCompleteResponse;
   } catch (error) {
-    console.error('Error calling chat/complete:', error);
-
-    return error as Error;
+    const errorMessage = error instanceof Error ? error.message : String(error);
+    console.error('Error calling chat/complete:', errorMessage);
+    return new Error(`Failed to call chat API: ${errorMessage}`);
   }
 }
```

packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
104-108: Remove unused destructured properties

The destructured properties `summarySystemPrompt`, `url`, `useSDK`, and `apiKey` on line 104 are not used in the `generateChatOptions` method.

Apply this diff to remove unused properties:

```diff
-const { model, systemPrompt, summarySystemPrompt, url, useSDK, apiKey, ...rest } = this.llmConfig;
+const { model, systemPrompt, ...rest } = this.llmConfig;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- packages/mcp/mcp-client-chat/README.md (5 hunks)
- packages/mcp/mcp-client-chat/package.json (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-instance.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-rest-api/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/defaultConfig.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/base-ai.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts (4 hunks)
- packages/mcp/mcp-client-chat/src/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/init.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (11 hunks)
- packages/mcp/mcp-client-chat/src/react/reActChat.ts (3 hunks)
- packages/mcp/mcp-client-chat/src/types/aiSDK.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/types/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (5 hunks)
- packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/utils/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/utils/transformer.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/utils/zodSchema.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-05-28T01:54:36.631Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
Applied to files:
packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts
packages/mcp/mcp-client-chat/src/utils/transformer.ts
packages/mcp/mcp-client-chat/src/types/aiSDK.ts
packages/mcp/mcp-client-chat/src/init.ts
packages/mcp/mcp-client-chat/src/react/reActChat.ts
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts
packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/index.ts
packages/mcp/mcp-client-chat/src/utils/transformer.ts
packages/mcp/mcp-client-chat/src/init.ts
packages/mcp/mcp-client-chat/src/react/reActChat.ts
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts
packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts
packages/mcp/mcp-client-chat/README.md
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-05-28T12:25:07.698Z
Learnt from: rhlin
PR: opentiny/tiny-agent#16
File: packages/mcp/mcp-proxy-server/test.ts:15-16
Timestamp: 2025-05-28T12:25:07.698Z
Learning: When using MCP SDK transports (like StreamableHTTPServerTransport, SSEServerTransport) with Express, do not add standard JSON parsing middleware like express.json() as it conflicts with how the official MCP transports handle request data internally. MCP transports are designed to process raw HTTP request bodies and streams directly.
Applied to files:
packages/mcp/mcp-client-chat/README.md
🧬 Code Graph Analysis (10)

packages/mcp/mcp-client-chat/src/ai/ai-instance.ts (2)

- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (1): LlmConfig (62-70)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1): AiSDK (31-183)

packages/mcp/mcp-client-chat/src/ai/base-ai.ts (1)

- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (2): ChatBody (104-110), ChatCompleteResponse (176-185)

packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts (2)

- packages/mcp/mcp-client-chat/src/types/aiSDK.ts (1): LanguageModel (8-8)
- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (2): ChatCompleteResponse (176-185), StreamingChoice (160-165)

packages/mcp/mcp-client-chat/src/ai/ai-sdk/defaultConfig.ts (1)

- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (1): LlmConfig (62-70)

packages/mcp/mcp-client-chat/src/utils/transformer.ts (1)

- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (4): ToolCall (134-138), ChatCompleteResponse (176-185), StreamingChoice (160-165), NonStreamingChoice (153-158)

packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)

- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3): LlmConfig (62-70), ChatBody (104-110), ChatCompleteResponse (176-185)

packages/mcp/mcp-client-chat/src/react/reActChat.ts (1)

- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3): StreamingChoice (160-165), NonStreamingChoice (153-158), ChatBody (104-110)

packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (1)

- packages/mcp/mcp-client-chat/src/types/aiSDK.ts (1): LanguageModel (8-8)

packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts (1)

- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (5): ChoiceMessage (146-151), StreamingChoice (160-165), NonStreamingChoice (153-158), ChatBody (104-110), AvailableTool (87-98)

packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (3)

- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (5): MCPClientOptions (72-78), MCPClientOptions (80-85), ChatCompleteResponse (176-185), ChatBody (104-110), ToolCall (134-138)
- packages/mcp/mcp-client-chat/src/ai/ai-instance.ts (1): getAiInstance (6-12)
- packages/mcp/mcp-client-chat/src/utils/transformer.ts (1): generateStreamingResponses (105-167)
🪛 ESLint
packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts

- [error] 2-2: `ai` type import should occur before type import of `../types/index.js` (import-x/order)

packages/mcp/mcp-client-chat/src/ai/ai-sdk/defaultConfig.ts

- [error] 2-2: Imports "OpenAIProviderSettings" are only used as type. (@typescript-eslint/consistent-type-imports)
- [error] 2-2: `@ai-sdk/openai` import should occur before type import of `../../types/index.js` (import-x/order)
- [error] 2-2: Member 'OpenAIProviderSettings' of the import declaration should be sorted alphabetically. (sort-imports)

packages/mcp/mcp-client-chat/src/utils/transformer.ts

- [error] 2-2: 'AvailableTool' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)
- [error] 3-3: 'ChatBody' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)
- [error] 5-5: 'CustomTransportMcpServer' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)
- [error] 6-6: 'IChatOptions' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)
- [error] 7-7: 'MCPClientOptions' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)
- [error] 8-8: 'McpServer' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)
- [error] 9-9: 'Message' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)
- [error] 13-13: 'ToolResults' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)
- [error] 105-105: The 'ReadableStream' is still an experimental feature. The configured version range is '^20.19.0 || >=22.12.0'. (n/no-unsupported-features/node-builtins)

packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts

- [error] 92-92: The 'ReadableStream' is still an experimental feature. The configured version range is '^20.19.0 || >=22.12.0'. (n/no-unsupported-features/node-builtins)

packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts

- [error] 7-7: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically. (sort-imports)
- [error] 21-21: Member 'GenerateTextOptions' of the import declaration should be sorted alphabetically. (sort-imports)
- [error] 25-25: '../../utils/index.js' imported multiple times. (import-x/no-duplicates)
- [error] 27-27: `../../utils/index.js` import should occur before import of `./defaultConfig.js` (import-x/order)
- [error] 27-27: Member 'toOpenAIChunk' of the import declaration should be sorted alphabetically. (sort-imports)
- [error] 27-27: '../../utils/index.js' imported multiple times. (import-x/no-duplicates)
- [error] 157-157: The 'ReadableStream' is still an experimental feature. The configured version range is '^20.19.0 || >=22.12.0'. (n/no-unsupported-features/node-builtins)

packages/mcp/mcp-client-chat/src/react/reActChat.ts

- [error] 4-4: Member 'ChatBody' of the import declaration should be sorted alphabetically. (sort-imports)

packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts

- [error] 10-10: Member 'ModelMessage' of the import declaration should be sorted alphabetically. (sort-imports)
- [error] 10-10: 'ModelMessage' is defined but never used. Allowed unused vars must match /^_/u. (@typescript-eslint/no-unused-vars)

packages/mcp/mcp-client-chat/src/mcp-client-chat.ts

- [error] 198-198: The 'ReadableStream' is still an experimental feature. The configured version range is '^20.19.0 || >=22.12.0'. (n/no-unsupported-features/node-builtins)
🪛 markdownlint-cli2 (0.17.2)
packages/mcp/mcp-client-chat/README.md
116-116: Reversed link syntax
(OpenAI)[https://ai-sdk.dev/providers/ai-sdk-providers/openai]
(MD011, no-reversed-links)
157-157: Reversed link syntax
(DeepSeek)[https://ai-sdk.dev/providers/ai-sdk-providers/deepseek]
(MD011, no-reversed-links)
🔇 Additional comments (24)
packages/mcp/mcp-client-chat/src/init.ts (1)
4-4: LGTM: centralizing type imports via types/index.js

Switching to the barrel export improves cohesion and reduces deep import paths. No runtime impact.
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/index.ts (1)
1-1: Barrel export looks good

Public re-export aligns with the package's module structure and keeps imports consistent.
packages/mcp/mcp-client-chat/src/types/index.ts (1)
1-2: LGTM; verify type-only re-exports don't force runtime imports

The aggregator is appropriate. If these underlying modules export only types, TypeScript will still emit runtime re-exports for `export *` under NodeNext/Bundler. This is generally harmless (the emitted .js files exist) but adds unnecessary runtime edges. It's fine to keep as-is; optionally switch to named `export type { ... }` if you want pure type elision.

Confirm tsconfig uses NodeNext/Bundler module resolution with extensioned imports, so these .js paths type-check and emit correctly.
packages/mcp/mcp-client-chat/src/ai/ai-sdk/index.ts (1)
1-1: LGTM: barrel export for AiSDK

The barrel improves discoverability without changing runtime behavior.
packages/mcp/mcp-client-chat/src/utils/index.ts (1)
1-3: LGTM: centralized utils aggregator

Good consolidation. This simplifies imports for streaming assembly, schema mapping, and AI-SDK transformations.
packages/mcp/mcp-client-chat/src/ai/index.ts (1)
1-4: Barrel export looks good

The aggregated exports make consumption ergonomic and align with the new AI abstraction.
packages/mcp/mcp-client-chat/src/react/reActChat.ts (1)
45-49: Streaming assumption: ensure delta.content contains the fully assembled text, not a partial chunk

If upstream delivers incremental deltas, reading `choices[0].delta.content` once will be partial. This is brittle for JSON extraction. Verify that your stream assembly consolidates the full assistant message before it reaches `organizeToolCalls`.

Would you confirm that:

- The response passed here is post-assembly (final aggregated content) when streamSwitch is true, and
- `StreamingChoice.delta.content` contains the entire message text?

If not, we should switch this logic to consume the assembled buffer or refactor to process tool calls after stream completion.
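If the assembled-buffer route is taken, the consolidation this comment asks about might look roughly like the sketch below. `DeltaChunk` and `assembleSseContent` are hypothetical names for illustration, not the package's actual `generateStreamingResponses`.

```typescript
// Hypothetical shape of a streamed chunk, loosely modeled on the deltas above.
interface DeltaChunk {
  choices: { delta: { content?: string } }[];
}

// Assemble a full assistant message from SSE "data:" lines.
// Stops at the "data: [DONE]" sentinel, concatenating every delta in between.
function assembleSseContent(sseText: string): string {
  let content = '';
  for (const line of sseText.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') break;
    const chunk = JSON.parse(payload) as DeltaChunk;
    content += chunk.choices[0]?.delta.content ?? '';
  }
  return content;
}

const sse = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]',
].join('\n');

console.log(assembleSseContent(sse)); // Hello
```

Running JSON extraction against the concatenated result, rather than any single delta, avoids the partial-content pitfall flagged here.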
packages/mcp/mcp-client-chat/src/types/aiSDK.ts (1)
1-10: Type helpers look solid and isolate SDK surface cleanly

Good use of type-only imports and option type extraction via Parameters/Omit. This will help keep the AI SDK integration decoupled from runtime.
packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts (3)
44-46: Good addition of the thought field

The addition of the `thought` field, which captures reasoning or content, provides valuable context for understanding the model's decision-making process. The fallback chain ensures a value is always returned.

59-65: LGTM! Good error handling for API compatibility

The message processing logic correctly ensures that assistant messages with tool_calls have a content field, which is required by certain APIs like DeepSeek. The Chinese comment clearly explains the rationale.

73-77: Excellent defensive programming

The conditional addition of the tools field only when necessary (tools.length > 0 and iterationSteps > 0) prevents API errors with providers that don't accept empty tool arrays. The comment clearly explains the reasoning.
packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts (2)
20-27: Tool calls transformation looks correct

The mapping from AI SDK's tool call format to OpenAI's format is properly implemented, including JSON stringification of arguments.

63-81: Well-structured chunk type handling

The function correctly handles the three different chunk types (`text-delta`, `tool-call`, `text-end`) and properly sets the appropriate fields for each type. The streaming completion format aligns with OpenAI's API specification.

packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (3)
40-71: Good error handling for streaming responses

The streaming method appropriately handles various error cases, including missing response body and HTTP errors, providing synthetic error streams for consistency.
73-99: Well-implemented error stream generation

The `generateErrorStream` method correctly creates SSE-formatted error responses that align with the expected streaming format. The error response structure, with a proper finish_reason and content delivery, ensures clients can handle errors gracefully.
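For illustration, a synthetic SSE error stream along these lines can be built with the Web Streams API. The payload shape and the `generateErrorStream`/`readAll` helpers here are assumptions for the sketch, not the file's actual implementation.

```typescript
// Build a one-shot SSE stream carrying an error chunk followed by the DONE sentinel.
function generateErrorStream(message: string): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  // Assumed chunk shape: a single streaming choice with an error finish_reason.
  const errorChunk = {
    choices: [{ finish_reason: 'error', delta: { role: 'assistant', content: message } }],
  };

  return new ReadableStream<Uint8Array>({
    start(controller) {
      controller.enqueue(encoder.encode(`data: ${JSON.stringify(errorChunk)}\n\n`));
      controller.enqueue(encoder.encode('data: [DONE]\n\n'));
      controller.close();
    },
  });
}

// Drain the stream and return the raw SSE text.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let out = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    out += decoder.decode(value);
  }
  return out;
}

readAll(generateErrorStream('HTTP error 500')).then((text) => console.log(text));
```

Because the error travels through the same `data: ... / data: [DONE]` framing as a normal completion, a client's existing SSE parser can surface it without a separate error path.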
92-92: No action needed: ReadableStream is supported in Node ≥22.12.0

The `package.json` already specifies `"engines": { "node": ">=22.12.0" }`, and in Node 22 the Web Streams API, including `ReadableStream`, is fully supported.

packages/mcp/mcp-client-chat/src/utils/transformer.ts (1)
16-34: Tool call merging logic looks robust

The function correctly handles tool calls with and without IDs, attempting to append arguments to the previous tool call when the ID is missing. The error handling ensures the function doesn't crash on unexpected input.
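The merging rule described here (append argument fragments to the previous call when a chunk lacks an ID) can be sketched as below. The delta shapes and the `mergeToolCallDeltas` name are illustrative, not the actual `transformer.ts` implementation.

```typescript
// Assumed delta shape: chunks may carry an id for a new tool call, or omit it
// to continue the previous call's argument string.
interface ToolCallDelta {
  id?: string;
  function: { name?: string; arguments: string };
}

interface MergedToolCall {
  id: string;
  function: { name: string; arguments: string };
}

// Merge streamed tool-call deltas: an id starts a new call; a chunk without an
// id appends its argument fragment to the most recent call.
function mergeToolCallDeltas(deltas: ToolCallDelta[]): MergedToolCall[] {
  const merged: MergedToolCall[] = [];
  for (const delta of deltas) {
    if (delta.id) {
      merged.push({
        id: delta.id,
        function: { name: delta.function.name ?? '', arguments: delta.function.arguments },
      });
    } else if (merged.length > 0) {
      merged[merged.length - 1].function.arguments += delta.function.arguments;
    }
  }
  return merged;
}

const calls = mergeToolCallDeltas([
  { id: 'call_1', function: { name: 'search', arguments: '{"query":' } },
  { function: { arguments: '"weather"}' } },
]);
console.log(calls[0].function.arguments); // {"query":"weather"}
```

Only after merging do the argument fragments form parseable JSON, which is why the assembly step has to run before any `JSON.parse` on tool arguments.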
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
62-70: Well-designed discriminated union for LLM configuration

The discriminated union based on `useSDK` elegantly handles the two different configuration modes (SDK vs REST API), ensuring type safety and proper required fields for each mode.
149-149: Good addition of reasoning field

The optional `reasoning` field in `ChoiceMessage` allows capturing the model's thought process, which is valuable for debugging and understanding AI decision-making.

104-110: Ignore type inconsistency concern

`LanguageModel` already covers both `string` (for the REST API) and `LanguageModelV2` (for the SDK). Casting `this.options.llmConfig.model` to `string` where needed is safe because `LanguageModel = string | LanguageModelV2`. No changes to `ChatBody` are required.

Likely an incorrect or invalid review comment.
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
177-178: Missing null check for toOpenAIChunk result

The `toOpenAIChunk` function could potentially return null or undefined, which would cause JSON.stringify to produce unexpected results.

Apply this diff to add a safety check:

```diff
 const data = toOpenAIChunk(chunk, this.model);
-
+if (!data) continue;
+
 yield encoder.encode(`data: ${JSON.stringify(data)}\n\n`);
```

packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (3)
378-378: Good addition of name field for tool messages

Adding the `name` field to tool messages improves traceability and debugging by clearly identifying which tool generated each response.

485-487: Consistent error handling pattern with AI instance integration

The integration with `aiInstance.chatStream` maintains consistency with the non-streaming path and properly delegates to the AI abstraction layer.

227-229: Incorrect assumption about thought streaming

Streaming mode uses generateStreamingResponses to continually invoke writeMessageDelta for each incoming chunk of the assistant's "thought", whereas non-streaming mode emits the complete thought in one shot via the `if (!this.options.streamSwitch && thought…)` block. The asymmetry is intentional to avoid duplicate output. You can safely ignore this review comment.

Likely an incorrect or invalid review comment.
Actionable comments posted: 1
♻️ Duplicate comments (4)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
157-181: Align error handling between chat and chatStream methods

The `chatStream` method throws errors (line 179) while the `chat` method returns them (lines 149-153). This inconsistency could confuse consumers who need to handle errors differently for each method.

Consider returning an error stream instead of throwing, to maintain consistency with the `chat` method:

```diff
 async chatStream(chatBody: ChatBody): Promise<globalThis.ReadableStream> {
   try {
     const chatOptions: StreamTextOptions = this.generateChatOptions(chatBody);
     const result: StreamTextResult<ToolSet, unknown> = streamText({
       model: this.model,
       ...chatOptions,
     });
     const iterator = this.openaiChunkGenerator(result);

     return new ReadableStream<Uint8Array>({
       async pull(controller) {
         const { value, done } = await iterator.next();
         if (done) {
           controller.close();
         } else {
           controller.enqueue(value);
         }
       },
     });
   } catch (error) {
     console.error(error);
-
-    throw error;
+
+    // Return an error stream that emits the error message
+    const errorMessage = error instanceof Error ? error.message : 'Stream failed';
+    return new ReadableStream<Uint8Array>({
+      start(controller) {
+        const encoder = new TextEncoder();
+        controller.enqueue(encoder.encode(`data: {"error": "${errorMessage}"}\n\n`));
+        controller.enqueue(encoder.encode('data: [DONE]\n\n'));
+        controller.close();
+      }
+    });
   }
 }
```

packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (3)
43-43: Fix incorrect property access for streamSwitch

The `streamSwitch` property should be accessed directly from `options` rather than from `options.llmConfig`.

Apply this diff to fix the property access:

```diff
-streamSwitch: options.llmConfig.streamSwitch ?? true,
+streamSwitch: options.streamSwitch ?? true,
```
198-203: Add error handling for streaming path failures

The streaming path could throw errors that would leave the stream in an inconsistent state.

Apply this diff to add proper error handling:

```diff
 if (this.options.streamSwitch) {
-  const streamResponses: ReadableStream = await this.queryChatCompleteStreaming();
-  response = await generateStreamingResponses(streamResponses, this.writeMessageDelta.bind(this));
+  try {
+    const streamResponses: ReadableStream = await this.queryChatCompleteStreaming();
+    response = await generateStreamingResponses(streamResponses, this.writeMessageDelta.bind(this));
+  } catch (error) {
+    response = error instanceof Error ? error : new Error(String(error));
+  }
 } else {
```
`290-295`: Add safety checks to prevent race conditions in writeMessageEnd. The method doesn't check whether the stream is already closed or locked before attempting to write. Apply this diff to add safety checks:

```diff
   protected async writeMessageEnd() {
+    if (this.transformStream.writable.locked) {
+      console.warn('Stream is already locked, skipping end message');
+      return;
+    }
     const writer = this.transformStream.writable.getWriter();
-    await writer.ready;
-    await writer.write(new TextEncoder().encode('data: [DONE]\n\n'));
-    await writer.close();
+    try {
+      await writer.ready;
+      await writer.write(new TextEncoder().encode('data: [DONE]\n\n'));
+      await writer.close();
+    } catch (error) {
+      console.error('Failed to write end message:', error);
+    } finally {
+      writer.releaseLock();
+    }
   }
```
🧹 Nitpick comments (1)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
`26-26`: Fix import ordering issues. The imports don't follow the project's ordering convention: imports from `../../utils/index.js` should come before imports from `./defaultConfig.js`. Apply this diff to fix the import ordering:

```diff
 import { BaseAi } from '../base-ai.js';
 import type {
   ChatBody,
   ChatCompleteResponse,
-  LlmConfig,
   GenerateTextOptions,
+  LlmConfig,
   StreamTextOptions,
   LanguageModel,
 } from '../../types/index.js';
+import { jsonSchemaToZod, toOpenAIChunk, transformChatResult } from '../../utils/index.js';
 import { getDefaultModel } from './defaultConfig.js';
-import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js';
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- packages/mcp/mcp-client-chat/src/ai/ai-instance.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (11 hunks)
- packages/mcp/mcp-client-chat/src/utils/transformer.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/mcp/mcp-client-chat/src/ai/ai-instance.ts
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/utils/transformer.ts
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-05-28T01:54:36.631Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
Applied to files:
packages/mcp/mcp-client-chat/src/utils/transformer.ts
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-06-05T08:40:41.876Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: demo-server/src/chat.ts:19-26
Timestamp: 2025-06-05T08:40:41.876Z
Learning: In demo-server/src/chat.ts, the user rhlin prefers to keep simple type assertions for error checking rather than implementing more complex type guard functions, favoring simplicity over enhanced type safety.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
🪛 ESLint
packages/mcp/mcp-client-chat/src/utils/transformer.ts
[error] 88-88: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
[error] 7-7: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 21-21: Member 'GenerateTextOptions' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 26-26: ../../utils/index.js
import should occur before import of ./defaultConfig.js
(import-x/order)
[error] 26-26: Member 'toOpenAIChunk' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 166-166: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
[error] 199-199: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (3)
packages/mcp/mcp-client-chat/src/utils/transformer.ts (2)
`4-22`: LGTM! Robust handling of streaming tool calls. The function properly handles tool calls without IDs by appending their argument fragments to the last tool call, and includes appropriate error handling.
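The merging rule praised above can be sketched in isolation. This is an illustrative reduction, not the actual transformer.ts code; `ToolCallAcc` and `mergeToolCallDelta` are assumed names:

```typescript
interface ToolCallDelta {
  id?: string;
  name?: string;
  arguments?: string;
}

interface ToolCallAcc {
  id: string;
  name?: string;
  arguments: string;
}

// A streamed chunk with a fresh id starts a new tool call; a chunk without
// an id continues the previous one by appending its argument fragment.
function mergeToolCallDelta(acc: ToolCallAcc[], delta: ToolCallDelta): void {
  if (delta.id) {
    acc.push({ id: delta.id, name: delta.name, arguments: delta.arguments ?? '' });
  } else if (acc.length > 0) {
    acc[acc.length - 1].arguments += delta.arguments ?? '';
  }
}

const calls: ToolCallAcc[] = [];
mergeToolCallDelta(calls, { id: 'call_1', name: 'getWeather', arguments: '{"ci' });
mergeToolCallDelta(calls, { arguments: 'ty":"Paris"}' });
console.log(calls[0].arguments); // {"city":"Paris"}
```

The key property is that argument fragments split across chunks reassemble into valid JSON only after the final fragment arrives.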
`87-156`: LGTM! Well-structured streaming response parser. The implementation correctly handles SSE-formatted streaming responses with proper buffering, line-by-line processing, and error recovery. The optional chaining for safe property access (line 123) is particularly good for preventing runtime errors.
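The buffering pattern described above can be sketched as follows. This is a minimal standalone illustration of line-buffered SSE parsing, with assumed names, not the actual transformer.ts implementation:

```typescript
// Accumulate network chunks, split complete lines off the buffer, and keep
// any trailing partial line for the next chunk.
function parseSseChunks(chunks: string[]): string[] {
  const payloads: string[] = [];
  let buffer = '';
  for (const chunk of chunks) {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // trailing partial line carries over
    for (const line of lines) {
      const trimmed = line.trim();
      if (!trimmed.startsWith('data:')) continue;
      const data = trimmed.slice(5).trim();
      if (data === '[DONE]') continue;
      payloads.push(data);
    }
  }
  return payloads;
}

// A frame split across two network chunks is reassembled correctly:
console.log(parseSseChunks(['data: {"a"', ':1}\n\ndata: [DONE]\n\n'])); // [ '{"a":1}' ]
```

The carry-over buffer is what makes the parser robust against frames that arrive split across chunk boundaries.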
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (1)
`438-450`: LGTM! Clean integration with the new AI backend. The method properly delegates to the AI instance and maintains consistent error handling by returning errors rather than throwing them.
Actionable comments posted: 3
♻️ Duplicate comments (3)
packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts (2)
`2-13`: Imports cleanup and ordering look good. Unused symbols are removed and the import list is alphabetically sorted, resolving the prior ESLint complaints.

`30-46`: Robust choice validation before casting — a good improvement. Validating the presence of `response.choices` and guarding for `delta`/`message` before casting eliminates unsafe assumptions from earlier versions.

packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (1)
`295-312`: writeMessageEnd race-proofing is on point. Guarding on `writable.locked`, using try/finally, and releasing the writer resolves the prior race-condition risks.
🧹 Nitpick comments (14)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (5)

`25-26`: Fix import order to satisfy import-x/order and sort members. Move the utils import above `defaultConfig.js` and sort the named imports to satisfy the ESLint rules.

```diff
-import { getDefaultModel } from './defaultConfig.js';
-import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js';
+import { jsonSchemaToZod, toOpenAIChunk, transformChatResult } from '../../utils/index.js';
+import { getDefaultModel } from './defaultConfig.js';
```
`2-15`: Alphabetize type specifiers in the import from 'ai'. ESLint flags unsorted members (e.g., AssistantModelMessage). Alphabetize the specifiers to pass `sort-imports`.

```diff
 import type {
-  AssistantContent,
-  ModelMessage,
-  SystemModelMessage,
-  UserModelMessage,
-  AssistantModelMessage,
-  ToolCallPart,
-  ToolModelMessage,
-  GenerateTextResult,
-  StreamTextResult,
-  ToolSet,
-  UserContent,
-  ToolResultPart,
+  AssistantContent,
+  AssistantModelMessage,
+  GenerateTextResult,
+  ModelMessage,
+  StreamTextResult,
+  SystemModelMessage,
+  ToolCallPart,
+  ToolModelMessage,
+  ToolResultPart,
+  ToolSet,
+  UserContent,
+  UserModelMessage,
 } from 'ai';
```
`18-24`: Alphabetize type specifiers in the local types import. `GenerateTextOptions` is out of order. Sort the specifiers alphabetically to satisfy `sort-imports`.

```diff
 import type {
-  ChatBody,
-  ChatCompleteResponse,
-  LlmConfig,
-  GenerateTextOptions,
-  StreamTextOptions,
-  LanguageModel,
+  ChatBody,
+  ChatCompleteResponse,
+  GenerateTextOptions,
+  LanguageModel,
+  LlmConfig,
+  StreamTextOptions,
 } from '../../types/index.js';
```
`110-116`: Remove unused `summarySystemPrompt` from the destructuring. `summarySystemPrompt` is destructured but unused; this may trigger `no-unused-vars`. Drop it (or alias it to `_summarySystemPrompt` if you're keeping it for future use).

```diff
-    const { model, systemPrompt, summarySystemPrompt, url, useSDK, apiKey, ...rest } = this.llmConfig;
+    const { model, systemPrompt, url, useSDK, apiKey, ...rest } = this.llmConfig;
```
`157-166`: Address the Node 'ReadableStream' experimental warning by importing from node:stream/web. ESLint flags `ReadableStream` as experimental under the configured Node range. Import the Web Streams implementation from `node:stream/web` and update the return type accordingly.

```diff
+import { ReadableStream } from 'node:stream/web';
@@
-  async chatStream(chatBody: ChatBody): Promise<globalThis.ReadableStream<Uint8Array>> {
+  async chatStream(chatBody: ChatBody): Promise<ReadableStream<Uint8Array>> {
```

packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts (1)
`78-83`: Set ChatBody.stream and avoid leaking unknown fields. Right now, `streamSwitch` (from the LLM config) is spread into `chatBody` as an unknown property, and `ChatBody.stream` isn't explicitly set. This can cause excess-property type errors and could confuse backends that rely on `stream`. Map it explicitly and exclude `streamSwitch` from the spread.

Apply this diff:

```diff
-    const { apiKey, url, systemPrompt, summarySystemPrompt, model, ...llmConfig } = this.options.llmConfig;
-    const chatBody: ChatBody = {
-      model,
-      messages: processedMessages,
-      ...llmConfig,
-    };
+    const {
+      apiKey,
+      url,
+      systemPrompt,
+      summarySystemPrompt,
+      model,
+      streamSwitch,
+      ...llmConfig
+    } = this.options.llmConfig;
+    const chatBody: ChatBody = {
+      model,
+      messages: processedMessages,
+      stream: streamSwitch ?? this.streamSwitch,
+      ...llmConfig,
+    };
```

packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (5)
`37-47`: aiInstance lifecycle: ensure it's initialized before use. `aiInstance` is nullable, and later usage assumes non-null (`this.aiInstance!`). If consumers forget to call `init()` before `chat()`, this can throw at runtime. Consider lazy-initializing in `chat()` or in the query methods, or document/guard usage.

Would you like me to add a minimal lazy-init guard in `chat()` (e.g., `if (!this.aiInstance) await this.init();`) to make the API safer for consumers who forget to call `init()`?
`197-207`: Streaming branch now wrapped in try/catch — good. Catching errors and converting them to an `Error` flow prevents half-open streams and aligns with previous feedback. One nit: switch the annotation to `globalThis.ReadableStream` to satisfy the Node rule.

Apply this diff:

```diff
-      const streamResponses: ReadableStream = await this.queryChatCompleteStreaming();
+      const streamResponses: globalThis.ReadableStream = await this.queryChatCompleteStreaming();
```
`286-289`: Summary path should respect streamSwitch. This always uses streaming for the summary, even when `streamSwitch` is false. Consider honoring the toggle for consistency. Apply this diff to respect the switch and still produce an SSE-like result in non-streaming mode:

```diff
-    const result = await this.queryChatCompleteStreaming();
-
-    result.pipeTo(this.transformStream.writable);
+    if (this.streamSwitch) {
+      const result = await this.queryChatCompleteStreaming();
+      result.pipeTo(this.transformStream.writable);
+    } else {
+      const nonStreaming = await this.queryChatComplete();
+      if (nonStreaming instanceof Error) {
+        const errorStream = this.generateErrorStream(nonStreaming.message);
+        errorStream.pipeTo(this.transformStream.writable);
+      } else {
+        const choice = nonStreaming.choices?.[0] as NonStreamingChoice | undefined;
+        const content = choice?.message?.content ?? '';
+        if (content) {
+          await this.writeMessageDelta(content, 'assistant');
+        }
+        await this.writeMessageEnd();
+      }
+    }
```
`116-119`: Tighten types for local accumulators. Minor typing improvements prevent accidental `any`. Apply this diff:

```diff
-  protected async fetchToolsList(): Promise<AvailableTool[]> {
-    const availableTools = [];
-    const toolClientMap = new Map();
+  protected async fetchToolsList(): Promise<AvailableTool[]> {
+    const availableTools: AvailableTool[] = [];
+    const toolClientMap: Map<string, Client> = new Map();
```
`455-466`: Guard against an uninitialized aiInstance in the query methods. `this.aiInstance!.chat(...)`/`chatStream(...)` rely on non-null. If `init()` wasn't called, this throws. Either lazy-init or throw a clearer error right here.

Example guard:

```diff
   protected async queryChatComplete(): Promise<ChatCompleteResponse | Error> {
     const chatBody = await this.getChatBody();

     try {
+      if (!this.aiInstance) {
+        throw new Error('AI instance not initialized. Call init() before chat().');
+      }
       const response = await this.aiInstance!.chat(chatBody);
       return response;
     } catch (error) {
       console.error('Error calling chat/complete:', error);
       return error as Error;
     }
   }
```

packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (3)
`16-38`: Consider improving error handling in the `chat` method. The current implementation returns different types (`ChatCompleteResponse | Error`), which can make error handling inconsistent for callers. Additionally, the generic error casting on line 36 may lose error details.

Consider either:
- Throwing errors consistently instead of returning them
- Using a Result type pattern for better type safety

Here's an improved version that throws errors consistently:

```diff
-  async chat(chatBody: ChatBody): Promise<ChatCompleteResponse | Error> {
+  async chat(chatBody: ChatBody): Promise<ChatCompleteResponse> {
     const { url, apiKey } = this.llmConfig;

     try {
       const response = await fetch(url, {
         method: 'POST',
         headers: {
           Authorization: `Bearer ${apiKey}`,
           'Content-Type': 'application/json',
         },
         body: JSON.stringify(chatBody),
       });

       if (!response.ok) {
-        return new Error(`HTTP error ${response.status}: ${await response.text()}`);
+        throw new Error(`HTTP error ${response.status}: ${await response.text()}`);
       }

       return (await response.json()) as ChatCompleteResponse;
     } catch (error) {
       console.error('Error calling chat/complete:', error);
-
-      return error as Error;
+      throw error instanceof Error ? error : new Error(String(error));
     }
   }
```
`58-59`: Remove or translate the Chinese comment. The comment on line 58 is in Chinese. For consistency and international collaboration, consider using English comments throughout the codebase.

```diff
-    // 获取详细的错误信息
+    // Get detailed error information
```
`50-50`: Ensure consistent stream parameter handling. The `chatStream` method explicitly adds `stream: true` to the request body, which could override any existing stream value in `chatBody`. This might cause confusion if `chatBody.stream` is already set to `false`.

Consider validating or documenting this behavior:

```diff
-      body: JSON.stringify({ stream: true, ...chatBody }),
+      body: JSON.stringify({ ...chatBody, stream: true }),
```

This ensures `stream: true` always takes precedence, making the behavior explicit.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (8)
- packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/defaultConfig.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/base-ai.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts (4 hunks)
- packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (11 hunks)
- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (5 hunks)
- packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts
- packages/mcp/mcp-client-chat/src/ai/base-ai.ts
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/defaultConfig.ts
- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-08-15T03:03:54.457Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.457Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-05-28T01:54:36.631Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
Applied to files:
packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-06-05T08:40:41.876Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: demo-server/src/chat.ts:19-26
Timestamp: 2025-06-05T08:40:41.876Z
Learning: In demo-server/src/chat.ts, the user rhlin prefers to keep simple type assertions for error checking rather than implementing more complex type guard functions, favoring simplicity over enhanced type safety.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
🧬 Code Graph Analysis (4)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
- LlmConfig (62-70)
- ChatBody (96-102)
- ChatCompleteResponse (168-177)
packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (5)
- ChoiceMessage (138-143)
- StreamingChoice (152-157)
- NonStreamingChoice (145-150)
- ChatBody (96-102)
- AvailableTool (79-90)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (3)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (4)
- MCPClientOptions (72-77)
- ChatCompleteResponse (168-177)
- ChatBody (96-102)
- ToolCall (126-130)

packages/mcp/mcp-client-chat/src/ai/ai-instance.ts (1)
- getAiInstance (4-12)

packages/mcp/mcp-client-chat/src/utils/transformer.ts (1)
- generateStreamingResponses (87-156)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (5)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
- LlmConfig (62-70)
- ChatBody (96-102)
- ChatCompleteResponse (168-177)

packages/mcp/mcp-client-chat/src/types/aiSDK.ts (4)
- LanguageModel (8-8)
- GenerateTextOptions (4-4)
- ModelMessage (10-10)
- StreamTextOptions (6-6)

packages/mcp/mcp-client-chat/src/ai/ai-sdk/defaultConfig.ts (1)
- getDefaultModel (18-20)

packages/mcp/mcp-client-chat/src/utils/zodSchema.ts (1)
- jsonSchemaToZod (4-43)

packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts (2)
- transformChatResult (6-44)
- toOpenAIChunk (46-84)
🪛 ESLint
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
[error] 92-92: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
[error] 201-201: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
[error] 7-7: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 21-21: Member 'GenerateTextOptions' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 26-26: ../../utils/index.js
import should occur before import of ./defaultConfig.js
(import-x/order)
[error] 26-26: Member 'toOpenAIChunk' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 166-166: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 181-181: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (12)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (3)
`60-67`: Nice defensive parsing for tool call arguments. Wrapping JSON.parse in try/catch with a safe fallback avoids hard crashes on malformed tool arguments. Good call.
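The pattern praised here can be sketched in a few lines. `safeParseArgs` is an illustrative name, not the actual ai-sdk.ts helper:

```typescript
// Parse possibly-malformed tool-call arguments without crashing: log the
// failure and fall back to an empty object instead of throwing.
function safeParseArgs(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch (error) {
    console.warn('Malformed tool arguments, falling back to {}:', error);
    return {};
  }
}

console.log(safeParseArgs('{"city":"Paris"}')); // { city: 'Paris' }
console.log(safeParseArgs('not json'));         // {}
```

The trade-off is silent degradation: the tool gets called with empty arguments rather than the turn failing outright, which matches the review's preference here.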
`118-131`: Tool registration and auto-selection look good. Building a ToolSet from JSON schema via zod and enabling `toolChoice = 'auto'` is clean and idiomatic for ai-sdk.

`136-155`: Non-streaming path looks correct and the transformations are coherent. Options generation → `generateText` → transform to `ChatCompleteResponse` is straightforward. Standardized error returning is consistent with prior feedback.

packages/mcp/mcp-client-chat/src/functionCalling/functionCallingChat.ts (3)
`55-58`: Thought extraction is consistent and non-breaking. Using `message.reasoning ?? message.content ?? ''` is a sane default and aligns with the new "thought" output requirement.

`70-76`: Assistant tool_calls normalization is pragmatic. Ensuring `content` exists for assistant messages with `tool_calls` avoids downstream API validation errors (e.g., DeepSeek). This is a sensible normalization.

`86-88`: Conditional tools inclusion is correct. Avoiding empty tools arrays and guarding with `iterationSteps > 0` match API expectations and reduce noise.

packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (5)
`46-47`: The streamSwitch source looks intentional; ensure consistency in ChatBody. Reading the toggle from `options.llmConfig.streamSwitch ?? true` aligns with the PR objective. Just ensure `getChatBody()` sets `stream` accordingly (and doesn't pass `streamSwitch` through), per my comment in functionCallingChat.ts.

`230-235`: The thought delta write in non-streaming mode is a nice UX touch. Writing the "thought" text before tool-calls in non-streaming mode preserves the mental model users expect from streaming without breaking non-stream flows.

`270-280`: End-of-stream finalization is safer now. The sentinel `-1` path and guarded `writeMessageEnd()` ensure clean stream termination without races.

`497-509`: The streaming API error path is clear and actionable. Throwing a descriptive error after logging ensures callers handle failures explicitly; the return type already reflects `ReadableStream`, so throwing is correct here.

`201-201`: ESLint: Node built-in ReadableStream warning. Static analysis flags `ReadableStream` as experimental for your configured Node range. Using `globalThis.ReadableStream` in type annotations (as suggested above) usually resolves the linter warning without changing runtime behavior.

packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
`92-98`: No change required — ReadableStream is supported by the package engine. packages/mcp/mcp-client-chat/package.json declares `"node": ">=22.12.0"`, which is well above the 16.5 floor where `ReadableStream` became available, so it is supported for this package and no polyfill is needed.
- Verified file: packages/mcp/mcp-client-chat/package.json
Actionable comments posted: 3
♻️ Duplicate comments (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
`91-97`: UTF-8 encoding of SSE chunks looks good (follow-up to a prior comment). You correctly switched to `TextEncoder` and now enqueue `Uint8Array` chunks, aligning the stream type with `ReadableStream<Uint8Array>`. Nice fix.
🧹 Nitpick comments (4)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (3)
`46-49`: Set a proper Accept header for SSE streaming. Many providers require `Accept: text/event-stream` to emit SSE; adding it improves compatibility and intent clarity.

```diff
       headers: {
         Authorization: `Bearer ${apiKey}`,
         'Content-Type': 'application/json',
+        Accept: 'text/event-stream',
       },
```
`33-37`: Normalize error payloads and logs; avoid "[object Object]" in messages. Return a real Error in `chat()` even when a non-Error is thrown, and stringify errors in `chatStream()` properly. Also avoid duplicating the "Failed to call chat API" prefix in logs.

```diff
     } catch (error) {
       console.error('Error calling chat/complete:', error);
-      return error as Error;
+      return error instanceof Error ? error : new Error(String(error));
     }
@@
-      console.error('Failed to call chat API:', errorMessage);
+      console.error(errorMessage);
       return this.generateErrorStream(errorMessage);
@@
-    } catch (error) {
-      console.error('Failed to call streaming chat/complete:', error);
-
-      return this.generateErrorStream(`Failed to call chat API! ${error}`);
-    }
+    } catch (error) {
+      const errMsg = error instanceof Error ? error.message : String(error);
+      console.error('Failed to call streaming chat/complete:', errMsg);
+      return this.generateErrorStream(`Failed to call chat API! ${errMsg}`);
+    }
```

Also applies to: 61-63, 66-70
`20-27`: Optional: specify Accept for non-streaming requests. Setting `Accept: application/json` makes the expected response explicit and sometimes avoids content-negotiation quirks.

```diff
     const response = await fetch(url, {
       method: 'POST',
       headers: {
         Authorization: `Bearer ${apiKey}`,
         'Content-Type': 'application/json',
+        Accept: 'application/json',
       },
       body: JSON.stringify(chatBody),
     });
```
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
`1-26`: Minor import ordering and ESLint issues. ESLint flags several import-related issues, including alphabetical sorting and import order. While these won't affect functionality, they reduce code consistency.

Apply these diffs to fix the import issues:

```diff
 import type {
   AssistantContent,
   ModelMessage,
   SystemModelMessage,
   UserModelMessage,
-  AssistantModelMessage,
+  AssistantModelMessage,
   ToolCallPart,
   ToolModelMessage,
   GenerateTextResult,
   StreamTextResult,
   ToolSet,
   UserContent,
   ToolResultPart,
 } from 'ai';
```

```diff
 import type {
   ChatBody,
   ChatCompleteResponse,
   LlmConfig,
-  GenerateTextOptions,
+  GenerateTextOptions,
   StreamTextOptions,
   LanguageModel,
 } from '../../types/index.js';
-import { getDefaultModel } from './defaultConfig.js';
-import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js';
+import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js';
+import { getDefaultModel } from './defaultConfig.js';
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-15T03:03:54.457Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.457Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
🧬 Code Graph Analysis (2)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
- LlmConfig (62-70)
- ChatBody (96-102)
- ChatCompleteResponse (168-177)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (5)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
- LlmConfig (62-70)
- ChatBody (96-102)
- ChatCompleteResponse (168-177)

packages/mcp/mcp-client-chat/src/types/aiSDK.ts (2)
- LanguageModel (8-8)
- GenerateTextOptions (4-4)

packages/mcp/mcp-client-chat/src/ai/ai-sdk/defaultConfig.ts (1)
- getDefaultModel (18-20)

packages/mcp/mcp-client-chat/src/utils/zodSchema.ts (1)
- jsonSchemaToZod (4-43)

packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts (2)
- transformChatResult (6-44)
- toOpenAIChunk (46-84)
🪛 ESLint
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
[error] 93-93: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
[error] 7-7: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 21-21: Member 'GenerateTextOptions' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 26-26: ../../utils/index.js
import should occur before import of ./defaultConfig.js
(import-x/order)
[error] 26-26: Member 'toOpenAIChunk' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 162-162: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 185-185: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (6)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
`7-14`: Solid adapter scaffolding and config typing. The AiRestApi class and AiRestApiConfig intersection correctly narrow to the REST branch of LlmConfig. Constructor/init look clean.
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (5)
`30-38`: LGTM! Clean class structure with proper inheritance. The constructor correctly initializes the model using either the provided configuration or a sensible default. The type constraint ensures only SDK-enabled configurations can be used.
40-134
: Excellent message transformation logic with robust error handling

The message transformation from ChatBody to AI SDK format is well-structured and handles all message types correctly. The JSON parsing error handling for tool arguments is particularly good - it logs the issue and gracefully falls back to an empty object instead of crashing.
The tool setup logic properly converts JSON schemas to Zod schemas and enables auto tool selection when tools are available.
136-151
: LGTM! Consistent error handling pattern

The chat method properly handles errors by logging them and returning a standardized Error object instead of throwing. This maintains consistency with the expected return type and provides better error handling for consumers.
153-194
: Excellent streaming implementation with comprehensive error handling

The streaming implementation is robust with multiple layers of error handling:
- Iterator errors are caught in the pull handler and emit proper SSE error frames
- Initial setup errors are caught and return a fallback error stream
- JSON serialization is used correctly to prevent malformed SSE frames
The dual error handling approach ensures the stream always returns a valid ReadableStream rather than throwing, which is crucial for streaming APIs.
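The "always return a valid stream" pattern described above can be sketched as follows. This is a minimal illustration, not the file's actual implementation; the function name `makeErrorStream` and the error payload shape are assumptions.

```typescript
// Minimal sketch of the fallback path: instead of throwing, wrap the error in
// a well-formed SSE frame and return a stream that emits it, then [DONE].
function makeErrorStream(message: string): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    start(controller) {
      // JSON.stringify keeps the SSE frame well-formed even if `message`
      // contains newlines or quotes.
      controller.enqueue(encoder.encode(`data: ${JSON.stringify({ error: message })}\n\n`));
      controller.enqueue(encoder.encode('data: [DONE]\n\n'));
      controller.close();
    },
  });
}
```

Because the consumer always receives a readable stream, downstream piping logic never has to special-case a thrown setup error.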
196-204
: Clean async generator implementation

The OpenAI chunk generator correctly transforms AI SDK chunks to OpenAI format and properly terminates the stream with the
[DONE]
marker. The implementation follows SSE conventions correctly.
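The generator contract described above can be sketched like this. The chunk shape and the function name `toSseFrames` are illustrative assumptions, not the actual toOpenAIChunk/generator code.

```typescript
// Each transformed chunk becomes one `data:` frame, and the stream is
// terminated with the `[DONE]` marker that OpenAI-style clients expect.
async function* toSseFrames(chunks: AsyncIterable<unknown>): AsyncGenerator<string> {
  for await (const chunk of chunks) {
    yield `data: ${JSON.stringify(chunk)}\n\n`; // one SSE frame per chunk
  }
  yield 'data: [DONE]\n\n'; // SSE termination marker
}
```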
…ndling and constraints
…dd testing scripts
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/mcp/mcp-client-chat/README.md (1)
100-107
: Streaming example: pipe the Web ReadableStream to the Express response correctly

mcpClientChat.chat returns a Web ReadableStream (WHATWG), which does not support .pipe(res) directly. Convert it to a Node Readable first, and set appropriate SSE headers if you are proxying SSE.

```diff
 app.post("/chat", async (req, res) => {
@@
-  // Streaming response
-  const streamResponse = await mcpClientChat.chat("your question...");
-
-  streamResponse.pipe(res);
+  // Streaming response: convert the Web ReadableStream to a Node Readable, then pipe
+  const stream = await mcpClientChat.chat("your question...");
+  // Optional: if the upstream returns SSE, set the matching response headers
+  res.setHeader('Content-Type', 'text/event-stream');
+  res.setHeader('Cache-Control', 'no-cache');
+  res.setHeader('Connection', 'keep-alive');
+  const { Readable } = await import('node:stream');
+  const nodeStream = (Readable as any).fromWeb
+    ? (Readable as any).fromWeb(stream)
+    : stream; // fallback for the few environments without Readable.fromWeb
+  nodeStream.pipe(res);
```

If chat returns a non-streaming response (based on streamSwitch), document that separately.
♻️ Duplicate comments (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
73-100
: Import and use the Web Streams constructor to satisfy ESLint and the type signatures

ESLint flags global ReadableStream usage under Node 20.x. Import the Web Streams constructor from node:stream/web and use it for construction and in the return types to avoid the "experimental ReadableStream" rule violations. Also keep SSE framing on its own lines.

```diff
 import type { ChatBody, ChatCompleteResponse, LlmConfig } from '../../types/index.js';
 import { Role } from '../../types/index.js';
 import { BaseAi } from '../base-ai.js';
+import { ReadableStream as WebReadableStream } from 'node:stream/web';
@@
-  protected generateErrorStream(errorMessage: string): ReadableStream<Uint8Array> {
+  protected generateErrorStream(errorMessage: string): WebReadableStream<Uint8Array> {
@@
-    return new ReadableStream<Uint8Array>({
+    return new WebReadableStream<Uint8Array>({
       start(controller) {
         controller.enqueue(encoder.encode(data));
         controller.enqueue(encoder.encode('data: [DONE]\n\n'));
         controller.close();
       },
     });
   }
```

Additionally, consider updating the chatStream signature to match:

```diff
-  async chatStream(chatBody: ChatBody): Promise<globalThis.ReadableStream<Uint8Array>> {
+  async chatStream(chatBody: ChatBody): Promise<WebReadableStream<Uint8Array>> {
```
🧹 Nitpick comments (8)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (3)
16-33
: Return/throw consistency for non-streaming errors

Returning an Error object as a union type pushes error-handling complexity to all call sites and hides HTTP status information from standard exception flows. Prefer throwing on non-OK responses and letting callers handle exceptions, or return a discriminated result.

```diff
 async chat(chatBody: ChatBody): Promise<ChatCompleteResponse | Error> {
@@
-    if (!response.ok) {
-      return new Error(`HTTP error ${response.status}: ${await response.text()}`);
-    }
+    if (!response.ok) {
+      const body = await response.text();
+      throw new Error(`HTTP error ${response.status}: ${body}`);
+    }
```

If BaseAi requires a union today, consider switching to Promise<ChatCompleteResponse> and surfacing errors via throws for both adapters to keep the interface clean.
44-51
: Be explicit about the streaming Accept header and preserve the stream override

Good job putting stream: true after the spread so it can't be overridden. Consider also sending Accept for SSE to be explicit with providers that content-negotiate streaming.

```diff
       headers: {
         Authorization: `Bearer ${apiKey}`,
         'Content-Type': 'application/json',
+        Accept: 'text/event-stream',
       },
       body: JSON.stringify({ ...chatBody, stream: true }),
```
67-71
: Sanitize streamed error details

The error interpolation may leak stack traces or internal details to end users via the SSE stream. Consider emitting a generic message to the client and logging the detailed error server-side.

```diff
-      return this.generateErrorStream(`Failed to call chat API! ${error}`);
+      console.error('Failed to call chat API!', error);
+      return this.generateErrorStream('Failed to call chat API! Please retry later.');
```

packages/mcp/mcp-client-chat/src/utils/zodSchema.ts (4)
98-117
: allOf handling is object-only and loses non-object constraints

The reducer merges only object sub-schemas and ignores number/string/array constraints, so many valid allOf cases degrade or silently drop rules. Consider building Zod schemas for each sub-schema and intersecting when both are objects; otherwise, refine with a composite predicate.

```diff
-  const mergedSchema = schema.allOf.reduce(
-    (acc: any, subSchema: any) => {
-      if (subSchema.type === 'object' && acc.type === 'object') {
-        return {
-          ...acc,
-          ...subSchema,
-          properties: { ...(acc.properties || {}), ...(subSchema.properties || {}) },
-          required: [...(acc.required || []), ...(subSchema.required || [])],
-        };
-      }
-      return acc;
-    },
-    { type: 'object' },
-  );
-  return jsonSchemaToZod(mergedSchema);
+  const subs = schema.allOf.map((s) => jsonSchemaToZod(s));
+  // Try to intersect object schemas; fallback to a refine that enforces all subs.
+  const [first, ...rest] = subs;
+  const intersected = rest.reduce((acc, s) => {
+    // Prefer object-object intersections; otherwise, keep acc and add a refine.
+    const bothObjects =
+      acc._def?.typeName === 'ZodObject' && s._def?.typeName === 'ZodObject';
+    return bothObjects ? z.intersection(acc as any, s as any) : acc.refine((v) => s.safeParse(v).success);
+  }, first);
+  return intersected;
```

This preserves more semantics across heterogeneous allOf compositions.
60-66
: Enum with object values: make equality stable and order-insensitive

Using JSON.stringify for deep equality depends on key order and can mismatch semantically equal objects. Normalize keys before comparing.

```diff
-  return z
-    .object({})
-    .passthrough()
-    .refine((data) => JSON.stringify(data) === JSON.stringify(value), { message: `Expected ${JSON.stringify(value)}` });
+  const stableStringify = (obj: any) =>
+    JSON.stringify(obj, Object.keys(obj).sort());
+  return z
+    .object({})
+    .passthrough()
+    .refine((data) => stableStringify(data) === stableStringify(value), {
+      message: `Expected exact object literal`,
+    });
```

If nested objects are expected, recursively sort keys for all nested levels.
Also applies to: 80-85
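The recursive key-sorting suggested above can be sketched as a standalone helper (an illustration, not the file's actual code):

```typescript
// Serialize with keys sorted at every nesting level, so two semantically equal
// objects stringify identically regardless of property insertion order.
function stableStringify(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(stableStringify).join(',')}]`; // arrays keep their order
  }
  if (value !== null && typeof value === 'object') {
    const obj = value as Record<string, unknown>;
    const body = Object.keys(obj)
      .sort() // normalize key order recursively
      .map((k) => `${JSON.stringify(k)}:${stableStringify(obj[k])}`)
      .join(',');
    return `{${body}}`;
  }
  return JSON.stringify(value);
}
```

With this helper, `{ a: { c: 3, d: 2 }, b: 1 }` and `{ b: 1, a: { d: 2, c: 3 } }` serialize to the same string, which the top-level key sort alone does not guarantee.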
41-57
: Nullable handling can duplicate null in unions

When type is an array and includes "null", applyConstraints will add another null via nullable, producing redundant unions. Consider skipping nullable when type already allows null.

```diff
-const applyConstraints = (zodSchema: ZodTypeAny, schema: JsonSchema): ZodTypeAny => {
+const applyConstraints = (zodSchema: ZodTypeAny, schema: JsonSchema): ZodTypeAny => {
   let result = zodSchema;
-  if (schema.nullable === true) {
+  const typeAlreadyNull =
+    Array.isArray(schema.type) && schema.type.includes('null');
+  if (schema.nullable === true && !typeAlreadyNull) {
     result = z.union([result, z.null()]);
   }
```

Also applies to: 120-129
227-242
: uniqueItems check may be expensive for large arrays

The JSON.stringify-based uniqueness check is O(n) memory and heavy for nested structures. If large arrays are expected, consider a lightweight stable hash (e.g., fast-json-stable-stringify) or a caller-provided comparator.
packages/mcp/mcp-client-chat/README.md (1)
28-29
: Doc/type mismatch: use LanguageModel (not LanguageModelV2)

The codebase types reference LanguageModel; the README uses LanguageModelV2 here. Update for consistency.

```diff
-- `model(string | LanguageModelV2)`: model name or AI SDK model instance
+- `model(string | LanguageModel)`: model name or AI SDK model instance
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
packages/mcp/mcp-client-chat/README.md
(5 hunks)packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
(1 hunks)packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts
(1 hunks)packages/mcp/mcp-client-chat/src/utils/zodSchema.ts
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/mcp/mcp-client-chat/src/utils/aiSDKTransformer.ts
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/README.md
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
📚 Learning: 2025-05-28T12:25:07.698Z
Learnt from: rhlin
PR: opentiny/tiny-agent#16
File: packages/mcp/mcp-proxy-server/test.ts:15-16
Timestamp: 2025-05-28T12:25:07.698Z
Learning: When using MCP SDK transports (like StreamableHTTPServerTransport, SSEServerTransport) with Express, do not add standard JSON parsing middleware like express.json() as it conflicts with how the official MCP transports handle request data internally. MCP transports are designed to process raw HTTP request bodies and streams directly.
Applied to files:
packages/mcp/mcp-client-chat/README.md
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
🧬 Code Graph Analysis (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
LlmConfig
(62-70)ChatBody
(96-102)ChatCompleteResponse
(168-177)
🪛 ast-grep (0.38.6)
packages/mcp/mcp-client-chat/src/utils/zodSchema.ts
[warning] 143-143: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(schema.pattern)
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🪛 ESLint
packages/mcp/mcp-client-chat/src/utils/zodSchema.ts
[error] 145-145: 'error' is defined but never used. Allowed unused caught errors must match /^_/u.
(@typescript-eslint/no-unused-vars)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
[error] 73-73: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 93-93: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🪛 markdownlint-cli2 (0.17.2)
packages/mcp/mcp-client-chat/README.md
116-116: Reversed link syntax
(OpenAI)[https://ai-sdk.dev/providers/ai-sdk-providers/openai]
(MD011, no-reversed-links)
157-157: Reversed link syntax
(DeepSeek)[https://ai-sdk.dev/providers/ai-sdk-providers/deepseek]
(MD011, no-reversed-links)
…ripts, and enhance AiRestApi type handling
Actionable comments posted: 2
♻️ Duplicate comments (2)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (2)
1-6
: Import the Web Streams constructor to satisfy ESLint and use the stable Node implementation

ESLint flags global ReadableStream as experimental in Node 20.x. Import from node:stream/web and use it for construction.

```diff
 import type { ChatBody, ChatCompleteResponse, LlmConfig } from '../../types/index.js';
 import { Role } from '../../types/index.js';
 import { BaseAi } from '../base-ai.js';
+import { ReadableStream as WebReadableStream } from 'node:stream/web';

 type AiRestApiConfig = Extract<LlmConfig, { useSDK?: false | undefined; url: string; apiKey: string; model: string }>;
```
77-105
: Use Node's WebReadableStream to construct the stream; align the return type and keep SSE framing

Construct with WebReadableStream to resolve the ESLint "experimental ReadableStream" error and make types consistent with the public signature.

```diff
-  protected generateErrorStream(errorMessage: string): ReadableStream<Uint8Array> {
+  protected generateErrorStream(errorMessage: string): globalThis.ReadableStream<Uint8Array> {
     const errorResponse: ChatCompleteResponse = {
       id: `chat-error-${Date.now()}`,
       object: 'chat.completion.chunk',
       created: Math.floor(Date.now() / 1000),
       model: this.llmConfig.model,
       choices: [
         {
           finish_reason: 'error',
           native_finish_reason: 'error',
           delta: {
             role: Role.ASSISTANT,
             content: errorMessage,
           },
         },
       ],
     };
     const data = `data: ${JSON.stringify(errorResponse)}\n\n`;
     const encoder = new TextEncoder();
-    return new ReadableStream<Uint8Array>({
+    return new WebReadableStream<Uint8Array>({
       start(controller) {
         controller.enqueue(encoder.encode(data));
         controller.enqueue(encoder.encode('data: [DONE]\n\n'));
         controller.close();
       },
     });
   }
```
🧹 Nitpick comments (3)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
16-39
: Return-union vs throw: align the error-handling contract across BaseAi

This method returns Error instead of throwing. Verify BaseAi consumers consistently branch on union types; otherwise, prefer throwing for non-OK and rethrowing in catch for a simpler API.

packages/mcp/mcp-client-chat/src/utils/zodSchema.ts (2)
60-65
: Consider performance implications of object literal validation.

The createObjectLiteralSchema function uses JSON.stringify for deep equality comparison, which could be expensive for large objects and may have inconsistent serialization-order issues. Consider using a deep equality library or implementing a more efficient comparison:

```diff
 const createObjectLiteralSchema = (value: any): ZodTypeAny => {
-  return z
-    .object({})
-    .passthrough()
-    .refine((data) => JSON.stringify(data) === JSON.stringify(value), { message: `Expected ${JSON.stringify(value)}` });
+  return z
+    .object({})
+    .passthrough()
+    .refine((data) => {
+      // Use a deep equality check instead of plain JSON.stringify
+      return JSON.stringify(data, Object.keys(data).sort()) === JSON.stringify(value, Object.keys(value).sort());
+    }, { message: `Expected ${JSON.stringify(value)}` });
 };
```
lodash.isEqual
for more reliable deep comparison.
236-239
: Array uniqueness check may have performance issues.

The current uniqueness validation uses JSON.stringify for each item comparison, which can be slow for large arrays with complex objects. Consider optimizing the uniqueness check:

```diff
 if (schema.uniqueItems === true) {
-  arraySchema = arraySchema.refine((arr) => new Set(arr.map((item) => JSON.stringify(item))).size === arr.length, {
-    message: 'Array items must be unique',
-  }) as any;
+  arraySchema = arraySchema.refine((arr) => {
+    const seen = new Set();
+    for (const item of arr) {
+      const key = typeof item === 'object' ? JSON.stringify(item) : item;
+      if (seen.has(key)) return false;
+      seen.add(key);
+    }
+    return true;
+  }, {
+    message: 'Array items must be unique',
+  }) as any;
 }
```

This approach short-circuits on the first duplicate and avoids creating the entire stringified array upfront.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
packages/mcp/mcp-client-chat/README.md
(5 hunks)packages/mcp/mcp-client-chat/package.json
(1 hunks)packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
(1 hunks)packages/mcp/mcp-client-chat/src/utils/zodSchema.ts
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/mcp/mcp-client-chat/package.json
- packages/mcp/mcp-client-chat/README.md
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
🧬 Code graph analysis (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
LlmConfig
(62-70)ChatBody
(96-102)ChatCompleteResponse
(168-177)
🪛 ESLint
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
[error] 77-77: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 97-97: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🪛 ast-grep (0.38.6)
packages/mcp/mcp-client-chat/src/utils/zodSchema.ts
[warning] 144-144: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(schema.pattern)
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🔇 Additional comments (4)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
19-29
: Good: header merge and AbortSignal propagation are handled correctly

Merging caller headers while ensuring Authorization and Content-Type take precedence, and wiring abortSignal into fetch, are both correct. This aligns with the documented configuration surface.
Also applies to: 46-55
packages/mcp/mcp-client-chat/src/utils/zodSchema.ts (3)
1-321
: Comprehensive JSON Schema to Zod converter implementation looks solid!

The implementation provides robust coverage of JSON Schema features with proper type safety and error handling. The modular structure, with separate handler functions for different schema types, makes it maintainable and extensible. The code properly handles edge cases like empty arrays, single-item unions, and complex object schemas.
Key strengths:
- ✅ Comprehensive feature coverage (const, enum, oneOf/anyOf, allOf, type arrays, constraints)
- ✅ Proper type safety with TypeScript interfaces
- ✅ Defensive programming with guards and fallbacks
- ✅ Modular architecture with focused handler functions
- ✅ Security-conscious regex handling with length limits
- ✅ Clear error messages and warnings
144-151
: ReDoS protection is well implemented!The regex handling properly mitigates ReDoS risks by:
- Limiting pattern length to 2048 characters
- Using try-catch for error handling
- Providing informative warnings
- Gracefully degrading when patterns are invalid
The static analysis warning about ReDoS is a false positive here since you've already implemented appropriate safeguards.
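The safeguards listed above (length cap, try/catch, warnings, graceful degradation) can be sketched as follows. The 2048-character limit mirrors the review's description; the function name `compilePatternSafely` is illustrative, not the file's actual API.

```typescript
// Guarded regex compilation: cap pattern length and swallow invalid patterns
// with a warning instead of letting them crash or hang validation.
const MAX_PATTERN_LENGTH = 2048;

function compilePatternSafely(pattern: string): RegExp | null {
  if (pattern.length > MAX_PATTERN_LENGTH) {
    console.warn(`pattern longer than ${MAX_PATTERN_LENGTH} chars; skipping regex constraint`);
    return null; // degrade gracefully rather than compile a huge pattern
  }
  try {
    return new RegExp(pattern);
  } catch {
    console.warn(`invalid regex pattern ignored: ${pattern}`);
    return null; // invalid patterns fall back to no constraint
  }
}
```

A null return simply means the regex constraint is skipped, which matches the graceful-degradation behavior described above.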
283-320
: Excellent use of handler chain pattern!

The main function uses an elegant chain of handler functions with early returns, making the control flow clear and maintainable. The fallback to
z.any()
for unknown types is appropriate and includes helpful warnings.
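The handler-chain shape praised above can be sketched like this. The `JsonSchema` shape, handler bodies, and names here are assumptions for illustration, not the file's actual API.

```typescript
// Each handler either recognizes the schema and returns a result, or returns
// undefined to pass control to the next handler; unknown input falls through
// to a warned default, mirroring the z.any() fallback described above.
type JsonSchema = { const?: unknown; enum?: unknown[]; type?: string };
type Handler = (schema: JsonSchema) => string | undefined;

const handlers: Handler[] = [
  (s) => (s.const !== undefined ? 'const' : undefined),
  (s) => (Array.isArray(s.enum) ? 'enum' : undefined),
  (s) => (s.type !== undefined ? `type:${s.type}` : undefined),
];

function classify(schema: JsonSchema): string {
  for (const handler of handlers) {
    const result = handler(schema);
    if (result !== undefined) return result; // early return on first match
  }
  console.warn('unknown schema shape; falling back to "any"');
  return 'any';
}
```

The early-return loop keeps precedence explicit (handlers earlier in the array win) and makes adding a new schema feature a one-line change.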
Actionable comments posted: 0
♻️ Duplicate comments (3)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (3)
16-39
: Normalize ChatBody.model to a string and add an Accept header

REST endpoints only accept string model names, but ChatBody.model is LanguageModel (string | object), so JSON.stringify(chatBody) will leak an object when SDK models are passed, causing provider 4xx errors. Add an explicit Accept while you're here.
Apply this diff:
```diff
 async chat(chatBody: ChatBody): Promise<ChatCompleteResponse | Error> {
   const { url, apiKey } = this.llmConfig;
   try {
-    const response = await fetch(url, {
+    // Normalize model -> string for REST
+    const modelId =
+      typeof chatBody.model === 'string'
+        ? chatBody.model
+        : (chatBody.model as any)?.modelId ?? String(chatBody.model);
+    const normalizedBody = { ...chatBody, model: modelId };
+
+    const response = await fetch(url, {
       method: 'POST',
       headers: {
         ...(this.llmConfig.headers ?? {}),
         Authorization: `Bearer ${apiKey}`,
         'Content-Type': 'application/json',
+        Accept: 'application/json',
       },
-      body: JSON.stringify(chatBody),
+      body: JSON.stringify(normalizedBody),
       signal: this.llmConfig.abortSignal,
     });
```
42-55
: Streaming path: normalize model, set SSE Accept, and align the stream type

The same model normalization requirement applies to the stream path. Also set Accept: text/event-stream. Finally, prefer using the Node Web Streams type consistently.
Apply this diff:
```diff
-  async chatStream(chatBody: ChatBody): Promise<globalThis.ReadableStream<Uint8Array>> {
+  async chatStream(chatBody: ChatBody): Promise<WebReadableStream<Uint8Array>> {
     const { url, apiKey } = this.llmConfig;
     try {
-      const response = await fetch(url, {
+      // Normalize model -> string for REST
+      const modelId =
+        typeof chatBody.model === 'string'
+          ? chatBody.model
+          : (chatBody.model as any)?.modelId ?? String(chatBody.model);
+
+      const response = await fetch(url, {
         method: 'POST',
         headers: {
           ...(this.llmConfig.headers ?? {}),
           Authorization: `Bearer ${apiKey}`,
           'Content-Type': 'application/json',
+          Accept: 'text/event-stream',
         },
-        body: JSON.stringify({ ...chatBody, stream: true }),
+        body: JSON.stringify({ ...chatBody, model: modelId, stream: true }),
         signal: this.llmConfig.abortSignal,
       });
@@
-      return response.body;
+      return response.body as WebReadableStream<Uint8Array>;
```

Also applies to: 57-69
1-4
: Fix ESLint "experimental ReadableStream" and use Node's Web Streams constructor

ESLint flags the global ReadableStream usage; instantiate from node:stream/web and type against it to satisfy the rule and keep types consistent with fetch(). This also keeps SSE framing as bytes.
Apply this diff:
```diff
 import type { ChatBody, ChatCompleteResponse, LlmConfig } from '../../types/index.js';
 import { Role } from '../../types/index.js';
 import { BaseAi } from '../base-ai.js';
+import { ReadableStream as WebReadableStream } from 'node:stream/web';
@@
-  protected generateErrorStream(errorMessage: string): ReadableStream<Uint8Array> {
+  protected generateErrorStream(errorMessage: string): WebReadableStream<Uint8Array> {
@@
-    return new ReadableStream<Uint8Array>({
+    return new WebReadableStream<Uint8Array>({
       start(controller) {
         controller.enqueue(encoder.encode(data));
         controller.enqueue(encoder.encode('data: [DONE]\n\n'));
         controller.close();
       },
     });
```

Also applies to: 76-103
🧹 Nitpick comments (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
16-74
: Optional: DRY up model normalization

Model normalization logic is duplicated across chat and chatStream. Consider a tiny helper to reduce drift.
Example (apply outside current ranges):
```typescript
// inside the class
private toRestModelId(model: ChatBody['model']): string {
  return typeof model === 'string' ? model : (model as any)?.modelId ?? String(model);
}
```

Then use:
const normalizedBody = { ...chatBody, model: this.toRestModelId(chatBody.model) };
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
(1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
📚 Learning: 2025-08-25T08:51:03.990Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts:16-39
Timestamp: 2025-08-25T08:51:03.990Z
Learning: REST API endpoints for AI models only accept string type model names, not complex model objects. When sending requests to REST APIs, ChatBody.model (typed as LanguageModel which can be string | LanguageModelV2) must be normalized to a string before JSON.stringify.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
🧬 Code graph analysis (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
LlmConfig
(62-70)ChatBody
(96-102)ChatCompleteResponse
(168-177)
🪛 ESLint
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
[error] 76-76: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 96-96: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (3)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (3)
5-14
: Type narrowing for REST config looks solid

Using Extract<> constrains the REST variant at the type level (url/apiKey/model as string), removing casts downstream. Good call.
22-29
: Header merge and abort propagation — LGTM

Caller headers are merged while preserving Authorization and Content-Type precedence; abortSignal is forwarded. Consistent across both methods.
Also applies to: 49-55
76-92
: SSE framing and payload — LGTM
- Double newline after each data: frame is correct for SSE.
- Error chunk structure (chat.completion.chunk, delta with Role.ASSISTANT) is coherent with your types and downstream consumers.
Also applies to: 93-103
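To illustrate why the double newline matters: SSE consumers delimit events on blank lines, so a frame without the trailing "\n\n" merges into the next event. A minimal parser sketch (real SSE parsing also handles multi-line data fields and comments, which this deliberately omits):

```typescript
// Split a buffered SSE stream into its data payloads using the "\n\n"
// event boundary that the frames above rely on.
function splitSseData(buffer: string): string[] {
  return buffer
    .split('\n\n') // one segment per SSE event
    .filter((frame) => frame.startsWith('data: '))
    .map((frame) => frame.slice('data: '.length));
}
```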
Actionable comments posted: 0
♻️ Duplicate comments (7)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (7)
19-25
: Correct: normalize LanguageModel → string for REST payloads

Good defensive normalization of ChatBody.model to a string before JSON.stringify, aligning with the "REST only accepts string model names" constraint.
25-34
: Add an explicit Accept header for clarity

Most REST chat endpoints return JSON; add Accept: application/json to be explicit and future-proof content negotiation.

Apply this diff:

```diff
 const response = await fetch(url, {
   method: 'POST',
   headers: {
     ...(this.llmConfig.headers ?? {}),
     Authorization: `Bearer ${apiKey}`,
     'Content-Type': 'application/json',
+    Accept: 'application/json',
   },
   body: JSON.stringify(normalizedBody),
   signal: this.llmConfig.abortSignal,
 });
```
47-56
: Streaming path: normalization and stream flag placement — LGTM

You normalize the model again and ensure stream: true overrides any caller-provided value by assigning it after spreading. Good.
56-65
: Prefer Accept: text/event-stream for SSE endpoints

Many providers rely on Accept: text/event-stream; adding it improves interoperability.

Apply this diff:

```diff
 const response = await fetch(url, {
   method: 'POST',
   headers: {
     ...(this.llmConfig.headers ?? {}),
     Authorization: `Bearer ${apiKey}`,
     'Content-Type': 'application/json',
+    Accept: 'text/event-stream',
   },
   body: JSON.stringify({ ...normalizedBody, stream: true }),
   signal: this.llmConfig.abortSignal,
 });
```
67-76
: Correct ordering: check response.ok before the body — LGTM

This surfaces HTTP errors with details before inspecting the stream body. Good change.
1-4: Import Web Streams constructor to fix ESLint "experimental ReadableStream"

ESLint flags global ReadableStream usage on Node 20.x. Import from node:stream/web and use that type/ctor instead.
Apply this diff:
```diff
 import type { ChatBody, ChatCompleteResponse, LlmConfig } from '../../types/index.js';
 import { Role } from '../../types/index.js';
 import { BaseAi } from '../base-ai.js';
+import { ReadableStream as WebReadableStream } from 'node:stream/web';
```
86-113: Use node:stream/web ReadableStream in implementation and types

Update generateErrorStream to use WebReadableStream to satisfy ESLint and ensure consistent typing.
Apply this diff:
```diff
-  protected generateErrorStream(errorMessage: string): ReadableStream<Uint8Array> {
+  protected generateErrorStream(errorMessage: string): WebReadableStream<Uint8Array> {
@@
-    return new ReadableStream<Uint8Array>({
+    return new WebReadableStream<Uint8Array>({
       start(controller) {
         controller.enqueue(encoder.encode(data));
         controller.enqueue(encoder.encode('data: [DONE]\n\n'));
         controller.close();
       },
     });
   }
```

Additionally, consider making chatStream's return type consistent:

```diff
-  async chatStream(chatBody: ChatBody): Promise<globalThis.ReadableStream<Uint8Array>> {
+  async chatStream(chatBody: ChatBody): Promise<WebReadableStream<Uint8Array>> {
@@
-    return response.body;
+    return response.body as WebReadableStream<Uint8Array>;
```
🧹 Nitpick comments (2)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (2)
19-25: DRY: factor model normalization into a private helper

The normalization logic is duplicated in chat and chatStream. Extract into a small helper to avoid drift.
Example:
```diff
 export class AiRestApi extends BaseAi {
   llmConfig: AiRestApiConfig;
+  private normalizeModel(model: ChatBody['model']): string {
+    return typeof model === 'string' ? model : ((model as any)?.modelId ?? String(model));
+  }
@@
-    const modelId =
-      typeof chatBody.model === 'string'
-        ? chatBody.model
-        : ((chatBody.model as any)?.modelId ?? String(chatBody.model));
+    const modelId = this.normalizeModel(chatBody.model);
@@
-    const modelId =
-      typeof chatBody.model === 'string'
-        ? chatBody.model
-        : ((chatBody.model as any)?.modelId ?? String(chatBody.model));
+    const modelId = this.normalizeModel(chatBody.model);
```

Also applies to: 51-56
103-106: Micro: reuse TextEncoder to avoid re-allocation per stream

Not critical, but you can reuse a single encoder instance.
Example:
```diff
 export class AiRestApi extends BaseAi {
   llmConfig: AiRestApiConfig;
+  private static encoder = new TextEncoder();
@@
-    const encoder = new TextEncoder();
+    const encoder = AiRestApi.encoder;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
(1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
📚 Learning: 2025-08-25T08:51:03.990Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts:16-39
Timestamp: 2025-08-25T08:51:03.990Z
Learning: REST API endpoints for AI models only accept string type model names, not complex model objects. When sending requests to REST APIs, ChatBody.model (typed as LanguageModel which can be string | LanguageModelV2) must be normalized to a string before JSON.stringify.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
📚 Learning: 2025-06-05T08:40:41.876Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: demo-server/src/chat.ts:19-26
Timestamp: 2025-06-05T08:40:41.876Z
Learning: In demo-server/src/chat.ts, the user rhlin prefers to keep simple type assertions for error checking rather than implementing more complex type guard functions, favoring simplicity over enhanced type safety.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
🧬 Code graph analysis (1)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
LlmConfig
(62-70)ChatBody
(96-102)ChatCompleteResponse
(168-177)
🪛 ESLint
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
[error] 86-86: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 106-106: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (2)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (2)
5-14: Strong REST config narrowing — LGTM

Type-narrowing AiRestApiConfig with Extract correctly enforces url/apiKey/model as string and removes the need for unsafe casts downstream.
35-44: Non-OK returns Error: confirm callers handle the union

chat() returns ChatCompleteResponse | Error. Verify all call sites check instanceof Error (or narrow) before using the response.
Would you like me to scan the repo for BaseAi.chat() call sites and flag any that don’t handle the Error branch?
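A call site can narrow the union like this (the `choices` field used here is a stand-in for whatever the caller actually reads off ChatCompleteResponse):

```typescript
// `instanceof Error` discriminates the `ChatCompleteResponse | Error` union
// before any response field is touched.
type ChatCompleteResponseLike = { choices: unknown[] };

function describeResult(result: ChatCompleteResponseLike | Error): string {
  if (result instanceof Error) {
    return `chat failed: ${result.message}`;
  }
  return `got ${result.choices.length} choice(s)`;
}
```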
… and maintainability in mcp-client-chat
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (1)
127-133: Strip non-payload fields and whitelist ChatBody properties in getChatBody

In packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (getChatBody), we currently spread all remaining llmConfig fields into the ChatBody. That:
- Omits the required
stream
flag (so streaming may silently be disabled), and
- Leaks any extra properties on
llmConfig
(e.g. internal flags likeuseSDK
), violating theChatBody
contract and risking runtime/API errors.Please refactor to:
- Destructure only the allowed LLM parameters you intend to pass (e.g.
temperature
,top_p
, etc.),- Map
options.streamSwitch
toChatBody.stream
, and- Drop any other fields.
Example replacement:
- protected async getChatBody(): Promise<ChatBody> { - const { apiKey, url, systemPrompt, summarySystemPrompt, model, ...llmConfig } = this.options.llmConfig; - const chatBody: ChatBody = { - model, - messages: this.messages, - ...llmConfig, - }; - return chatBody; - } + protected async getChatBody(): Promise<ChatBody> { + // Only include the exact ChatBody payload properties: + const { + model, + temperature, + top_p, + max_tokens, + // add or remove from this list as your backend/ChatBody evolves + } = this.options.llmConfig; + + const chatBody: ChatBody = { + model, + messages: this.messages, + stream: Boolean(this.options.streamSwitch), + ...(typeof temperature === 'number' && { temperature }), + ...(typeof top_p === 'number' && { top_p }), + ...(typeof max_tokens === 'number' && { max_tokens }), + }; + return chatBody; + }– Ensure
ChatBody.stream?: boolean
is the flag read by your REST/SDK layer for streaming.
– Review any other LLM parameters you need (e.g.presence_penalty
,frequency_penalty
) and add them explicitly to the whitelist.packages/mcp/mcp-client-chat/src/function-calling/function-calling-chat.ts (1)
60-83: Potential TS excess-property errors and API leakage by spreading llmConfig into ChatBody

ChatBody only allows a small set of fields (model, messages, tools, optional stream/temperature). Spreading
...llmConfig
likely includes unrelated fields (e.g., useSDK, baseURL, streamSwitch), which can:
- Fail TS “excess property” checks on this object literal.
- Leak unknown fields to providers over the wire.
Safer: pick only allowed ChatBody fields and assert shape with
satisfies
. Also explicitly strip new flags likeuseSDK
andstreamSwitch
.- const { apiKey, url, systemPrompt, summarySystemPrompt, model, ...llmConfig } = this.options.llmConfig; - const chatBody: ChatBody = { - model, - messages: processedMessages, - ...llmConfig, - }; + const { + apiKey, + url, + systemPrompt, + summarySystemPrompt, + model, + useSDK, // exclude non-ChatBody flags + streamSwitch, // exclude non-ChatBody flags + ...rest + } = this.options.llmConfig as Record<string, unknown>; + + // Whitelist known ChatBody fields only. + const maybeStream = 'stream' in rest ? { stream: rest.stream as boolean } : {}; + const maybeTemp = 'temperature' in rest ? { temperature: rest.temperature as number } : {}; + + const chatBody = { + model, + messages: processedMessages, + ...maybeStream, + ...maybeTemp, + } satisfies ChatBody;If you prefer to keep spreading, alternatively change the assignment to use
satisfies ChatBody
(instead of a type annotation) and ensure the transport layer filters unknown keys before sending.
🧹 Nitpick comments (21)
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (1)
2-10: Sort named type imports to satisfy ESLint (sort-imports)

Alphabetize the specifiers to clear the linter error and keep diffs tidy.
```diff
-import type {
-  ChatCompleteResponse,
-  ChatBody,
-  MCPClientOptions,
-  NonStreamingChoice,
-  StreamingChoice,
-  Tool,
-  ToolCall,
-} from '../types/index.js';
+import type {
+  ChatBody,
+  ChatCompleteResponse,
+  MCPClientOptions,
+  NonStreamingChoice,
+  StreamingChoice,
+  Tool,
+  ToolCall,
+} from '../types/index.js';
```

packages/mcp/mcp-client-chat/src/utils/ai-SDK-transformer.ts (5)
17-28: Only include tool_calls when present; guard undefined result.toolCalls

Mapping
result.toolCalls
directly can throw if the SDK returnsundefined
and also emits an emptytool_calls
property unnecessarily. Build the array first and spread it in conditionally.- message: { - role: Role.ASSISTANT, - content: result.text, - tool_calls: result.toolCalls.map((toolCall) => ({ - id: toolCall.toolCallId, - type: 'function', - function: { - name: toolCall.toolName, - arguments: JSON.stringify(toolCall.input), - }, - })), - }, + message: { + role: Role.ASSISTANT, + content: result.text, + ...(Array.isArray(result.toolCalls) && result.toolCalls.length + ? { + tool_calls: result.toolCalls.map((toolCall) => ({ + id: toolCall.toolCallId, + type: 'function', + function: { + name: toolCall.toolName, + arguments: JSON.stringify(toolCall.input), + }, + })), + } + : {}), + },
35-41: Prefer nullish coalescing for token counters

Use
??
to preserve legitimate 0 values and avoid accidentally converting falsy-but-valid values.- prompt_tokens: result.usage.inputTokens || 0, - completion_tokens: result.usage.outputTokens || 0, - total_tokens: result.usage.totalTokens || 0, + prompt_tokens: result.usage.inputTokens ?? 0, + completion_tokens: result.usage.outputTokens ?? 0, + total_tokens: result.usage.totalTokens ?? 0,
47-54: Initialize streaming choice finish reasons as null (not empty strings)

Types show
finish_reason: string | null
. Empty strings can confuse downstream consumers.- const choice: StreamingChoice = { - finish_reason: '', - native_finish_reason: '', + const choice: StreamingChoice = { + finish_reason: null, + native_finish_reason: null, delta: { content: '', role: Role.ASSISTANT, }, };
63-81: Trim redundant assignments and handle future chunk kinds defensively
- You reassign
result.choices = [choice]
multiple times though it's already set.
- Consider switching to a
switch
with a default to future-proof new chunk kinds.- if (chunk.type === 'text-delta') { - choice.delta.content = chunk.text; - result.choices = [choice]; - } else if (chunk.type === 'tool-call') { + if (chunk.type === 'text-delta') { + choice.delta.content = chunk.text; + } else if (chunk.type === 'tool-call') { choice.delta.tool_calls = [ { id: chunk.toolCallId, type: 'function', function: { name: chunk.toolName, arguments: JSON.stringify(chunk.input), }, }, ]; } else if (chunk.type === 'text-end') { choice.finish_reason = 'stop'; choice.native_finish_reason = 'stop'; - result.choices = [choice]; + } else { + // no-op for unknown chunk types; keep shape stable }
11-12: Use crypto.randomUUID when available for IDs
Math.random()
-based IDs can collide under high throughput. Prefercrypto.randomUUID()
with a fallback for broader runtimes.- id: `chat-${Date.now()}-${Math.random().toString(36).substring(2, 11)}`, + id: (() => { + const rnd = globalThis.crypto?.randomUUID?.(); + return rnd ? `chat-${rnd}` : `chat-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`; + })(),Apply in both places where the
id
is created.Also applies to: 56-57
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (5)
44-56: Use Zod's .nullable() instead of manual union for nullable schemas
.nullable()
preserves metadata and is more idiomatic.- if (schema.nullable === true) { - result = z.union([result, z.null()]); - } + if (schema.nullable === true) { + result = (result as any).nullable?.() ?? z.union([result, z.null()]); + }
120-129: Avoid double-null in multi-type schemas that also set nullable

If
schema.type
includes'null'
andnullable
is true, you’ll introduce two nullables. Stripnullable
in that case.- const typeSchemas = schema.type.map((type: string) => jsonSchemaToZod({ ...schema, type })); - return applyConstraints(createUnion(typeSchemas), schema); + const hasNull = schema.type.includes('null'); + const typeSchemas = schema.type.map((type: string) => jsonSchemaToZod({ ...schema, type })); + const base = createUnion(typeSchemas); + return applyConstraints(base, hasNull ? { ...schema, nullable: false } : schema);
98-117: allOf handling merges only object shapes; non-object intersections are ignored
allOf
for numbers (ranges), strings (formats + patterns), etc., will be dropped. Prefer building Zod intersections when possible.Sketch:
- const mergedSchema = schema.allOf.reduce( - (acc: any, subSchema: any) => { - if (subSchema.type === 'object' && acc.type === 'object') { - // merge properties... - } - return acc; - }, - { type: 'object' }, - ); - return jsonSchemaToZod(mergedSchema); + const zods = schema.allOf.map((s) => jsonSchemaToZod(s)); + if (!zods.length) return null; + return zods.reduce((acc, cur) => (acc ? z.intersection(acc as any, cur as any) : cur));Be mindful of Zod’s intersection caveats; for objects it's fine, for mixed types it’s best-effort.
60-66: Object literal enum equality via JSON.stringify can be brittle

Key order differences break equality. Use deep-equal semantics.
+const deepEqual = (a: any, b: any): boolean => { + if (Object.is(a, b)) return true; + if (typeof a !== 'object' || typeof b !== 'object' || !a || !b) return false; + const aKeys = Object.keys(a); + const bKeys = Object.keys(b); + if (aKeys.length !== bKeys.length) return false; + return aKeys.every((k) => deepEqual(a[k], b[k])); +}; + const createObjectLiteralSchema = (value: any): ZodTypeAny => { return z .object({}) .passthrough() - .refine((data) => JSON.stringify(data) === JSON.stringify(value), { message: `Expected ${JSON.stringify(value)}` }); + .refine((data) => deepEqual(data, value), { message: `Expected ${JSON.stringify(value)}` }); };
235-239: Unique array items via JSON.stringify can misclassify objects
JSON.stringify
can yield false positives/negatives due to key order. Consider deep structural checks (O(n^2) but usually small arrays) or document the limitation.Example:
- if (schema.uniqueItems === true) { - arraySchema = arraySchema.refine((arr) => new Set(arr.map((item) => JSON.stringify(item))).size === arr.length, { - message: 'Array items must be unique', - }) as any; - } + if (schema.uniqueItems === true) { + arraySchema = arraySchema.refine((arr) => { + for (let i = 0; i < arr.length; i++) { + for (let j = i + 1; j < arr.length; j++) { + if (deepEqual(arr[i], arr[j])) return false; + } + } + return true; + }, { message: 'Array items must be unique' }) as any; + }If performance becomes a concern, we can gate this by array length.
packages/mcp/mcp-client-chat/src/ai/ai-sdk/default-config.ts (2)
5-16: Fail fast or warn when no API key is available

If
llmConfig.apiKey
andOPENAI_API_KEY
are both absent, downstream calls will fail. Consider throwing with a clear message or logging a warning here to aid diagnostics.export const getDefaultProvider = (llmConfig: LlmConfig) => { const options: OpenAIProviderSettings = { apiKey: llmConfig.apiKey || process.env.OPENAI_API_KEY, baseURL: llmConfig.url, }; if (llmConfig.headers) { options.headers = llmConfig.headers as Record<string, string>; } + if (!options.apiKey) { + console.warn('[AiSDK] No API key provided via llmConfig.apiKey or OPENAI_API_KEY'); + } return createOpenAI(options); };
18-20
: Hard-coded default model may not suit all providersReturning
provider('deepseek-chat')
is opinionated. Consider making the default model configurable at the call site or viallmConfig.model
, falling back only when absent.-export const getDefaultModel = (provider: ReturnType<typeof createOpenAI> = openai) => { - return provider('deepseek-chat'); -}; +export const getDefaultModel = (provider: ReturnType<typeof createOpenAI> = openai, fallback = 'deepseek-chat') => { + return provider(fallback); +}Then pass a repo-level default from the caller if needed.
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (4)
118-131
: Type the reducer seed to avoid implicit{}
wideningThe reducer seed
{}
widens to{}
; although you cast at the variable, better to seed with the right shape.- const tools: ToolSet = chatBody.tools.reduce((pre, cur) => { + const tools: ToolSet = chatBody.tools.reduce<ToolSet>((pre, cur) => { return { ...pre, [cur.function.name]: tool({ description: cur.function.description, inputSchema: jsonSchemaToZod(cur.function.parameters), }), }; - }, {}); + }, {} as ToolSet);
26-26: Fix import order per ESLint (utils before local default-config)

ESLint flags the order; move the utils import before
./default-config.js
and keep members sorted.-import { getDefaultModel } from './default-config.js'; -import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js'; +import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js'; +import { getDefaultModel } from './default-config.js';If you enforce import sorting, also sort the named members in both import lines.
136-151: Standardized error object is good; consider attaching a code

You now return
new Error(message)
which is consistent withchat
’s signature. Attaching a.name
or custom.code
(e.g.,E_CHAT_FAIL
) would aid consumers without parsing messages.- return new Error(error instanceof Error ? error.message : 'An unexpected error occurred during chat'); + const err = new Error(error instanceof Error ? error.message : 'An unexpected error occurred during chat'); + (err as any).code = 'E_CHAT_FAIL'; + return err;
196-204: SSE frames: consider flushing on small batches for latency

Currently each chunk yields immediately, which is good for latency. If you later buffer, be careful not to coalesce across multiple OpenAI-like chunks as some clients expect one JSON per
data:
line.packages/mcp/mcp-client-chat/src/function-calling/index.ts (1)
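The caveat is easiest to see from the consumer side: clients typically split the buffer on blank lines and JSON.parse each `data:` payload independently, so two JSON objects coalesced onto one line would fail to parse. A minimal sketch of such a consumer:

```typescript
// Parse SSE frames: exactly one JSON object per `data:` line,
// with the `[DONE]` sentinel and empty trailing segments excluded.
function parseSseFrames(buffer: string): unknown[] {
  return buffer
    .split('\n\n')
    .filter((frame) => frame.startsWith('data: ') && frame !== 'data: [DONE]')
    .map((frame) => JSON.parse(frame.slice('data: '.length)));
}
```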
1-2: Public API widened via star-exports; consider explicit exports to prevent accidental surface growth

Star re-exports make all new symbols in the target modules public by default, which can cause name clashes and unintended breaking changes later. Prefer explicit exports for stability.
Apply this diff to switch to explicit exports (adjust identifiers as needed):
-export * from './function-calling-chat.js'; -export * from './system-prompt.js'; +export { FunctionCallChat } from './function-calling-chat.js'; +export { + DEFAULT_SYSTEM_PROMPT, + DEFAULT_SUMMARY_SYSTEM_PROMPT, +} from './system-prompt.js';packages/mcp/mcp-client-chat/src/function-calling/function-calling-chat.ts (3)
28-46: Harden choice extraction: handle per-choice errors and avoid assuming index 0 is valid

Current logic assumes the first choice is valid and doesn't check the optional error field. For robustness (esp. mixed or partial streaming chunks), find the first usable choice and surface provider error details when present.
Proposed patch:
- if (this.streamSwitch) { - const choice = response.choices[0]; - if (!('delta' in choice)) { - throw new Error('Invalid streaming response: delta not found'); - } - message = (choice as StreamingChoice).delta; - } else { - const choice = response.choices[0]; - if (!('message' in choice)) { - throw new Error('Invalid non-streaming response: message not found'); - } - message = (choice as NonStreamingChoice).message; - } + if (this.streamSwitch) { + // Prefer the first choice that actually has a delta + const choice = response.choices.find((c) => 'delta' in (c as StreamingChoice)); + if (!choice) throw new Error('Invalid streaming response: delta not found'); + const streaming = choice as StreamingChoice; + if (streaming.error) { + throw new Error(`Streaming choice error: ${JSON.stringify(streaming.error)}`); + } + message = streaming.delta; + } else { + // Prefer the first choice that actually has a message + const choice = response.choices.find((c) => 'message' in (c as NonStreamingChoice)); + if (!choice) throw new Error('Invalid non-streaming response: message not found'); + const nonStreaming = choice as NonStreamingChoice; + if (nonStreaming.error) { + throw new Error(`Non-streaming choice error: ${JSON.stringify(nonStreaming.error)}`); + } + message = nonStreaming.message; + }
55-57: Gate "thought" exposure; avoid unintentional chain-of-thought leakage

Returning
message.reasoning ?? message.content
as “thought” will surface internal reasoning for models that provide it. Many providers advise against exposing CoT verbatim. Make this configurable (e.g., llmConfig.exposeReasoning === true) and default to not returning it.Example change:
- const thought = message.reasoning ?? message.content ?? ''; + const exposeReasoning = (this.options.llmConfig as any)?.exposeReasoning === true; + const thought = exposeReasoning ? (message.reasoning ?? '') : '';If you want, I can follow up with a small PR to add
exposeReasoning?: boolean
to the LLM config types and thread it through.
63-68: Don't hardcode console logging in library code; prefer injected logger or debug flag
console.error
can be noisy for consumers. Consider optional logger injection or a debug flag.Minimal tweak:
- } catch (error) { - console.error('Failed to fetch tools list:', error); - tools = []; - } + } catch (error) { + (this as any)?.logger?.error?.('Failed to fetch tools list:', error); + tools = []; + }If there’s a shared logger interface in the package, wire that instead of
any
.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml
is excluded by!**/pnpm-lock.yaml
📒 Files selected for processing (16)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
(1 hunks)packages/mcp/mcp-client-chat/src/ai/ai-sdk/default-config.ts
(1 hunks)packages/mcp/mcp-client-chat/src/function-calling/function-calling-chat.ts
(4 hunks)packages/mcp/mcp-client-chat/src/function-calling/index.ts
(1 hunks)packages/mcp/mcp-client-chat/src/functionCalling/index.ts
(0 hunks)packages/mcp/mcp-client-chat/src/index.ts
(1 hunks)packages/mcp/mcp-client-chat/src/init.ts
(1 hunks)packages/mcp/mcp-client-chat/src/re-act/index.ts
(1 hunks)packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts
(3 hunks)packages/mcp/mcp-client-chat/src/react/index.ts
(0 hunks)packages/mcp/mcp-client-chat/src/types/ai-SDK.ts
(1 hunks)packages/mcp/mcp-client-chat/src/types/index.ts
(1 hunks)packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts
(5 hunks)packages/mcp/mcp-client-chat/src/utils/ai-SDK-transformer.ts
(1 hunks)packages/mcp/mcp-client-chat/src/utils/index.ts
(1 hunks)packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
(1 hunks)
💤 Files with no reviewable changes (2)
- packages/mcp/mcp-client-chat/src/functionCalling/index.ts
- packages/mcp/mcp-client-chat/src/react/index.ts
✅ Files skipped from review due to trivial changes (1)
- packages/mcp/mcp-client-chat/src/re-act/index.ts
🚧 Files skipped from review as they are similar to previous changes (5)
- packages/mcp/mcp-client-chat/src/init.ts
- packages/mcp/mcp-client-chat/src/utils/index.ts
- packages/mcp/mcp-client-chat/src/types/index.ts
- packages/mcp/mcp-client-chat/src/index.ts
- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/function-calling/index.ts
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts
packages/mcp/mcp-client-chat/src/function-calling/function-calling-chat.ts
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-05-28T01:54:36.631Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
Applied to files:
packages/mcp/mcp-client-chat/src/types/ai-SDK.ts
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts
packages/mcp/mcp-client-chat/src/function-calling/function-calling-chat.ts
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
🧬 Code graph analysis (5)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/default-config.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (1)
LlmConfig
(62-70)
packages/mcp/mcp-client-chat/src/utils/ai-SDK-transformer.ts (2)
packages/mcp/mcp-client-chat/src/types/ai-SDK.ts (1)
LanguageModel
(8-8)packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (2)
ChatCompleteResponse
(168-177)StreamingChoice
(152-157)
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
StreamingChoice
(152-157)NonStreamingChoice
(145-150)ChatBody
(96-102)
packages/mcp/mcp-client-chat/src/function-calling/function-calling-chat.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (5)
ChoiceMessage
(138-143)StreamingChoice
(152-157)NonStreamingChoice
(145-150)ChatBody
(96-102)AvailableTool
(79-90)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (5)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
LlmConfig
(62-70)ChatBody
(96-102)ChatCompleteResponse
(168-177)packages/mcp/mcp-client-chat/src/types/ai-SDK.ts (3)
LanguageModel
(8-8)GenerateTextOptions
(4-4)StreamTextOptions
(6-6)packages/mcp/mcp-client-chat/src/ai/ai-sdk/default-config.ts (1)
getDefaultModel
(18-20)packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (1)
jsonSchemaToZod
(283-320)packages/mcp/mcp-client-chat/src/utils/ai-SDK-transformer.ts (2)
transformChatResult
(6-44)toOpenAIChunk
(46-84)
🪛 ast-grep (0.38.6)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
[warning] 144-144: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(schema.pattern)
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🪛 ESLint
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts
[error] 4-4: Member 'ChatBody' of the import declaration should be sorted alphabetically.
(sort-imports)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
[error] 7-7: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 21-21: Member 'GenerateTextOptions' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 26-26: ../../utils/index.js
import should occur before import of ./default-config.js
(import-x/order)
[error] 26-26: Member 'toOpenAIChunk' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 162-162: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 185-185: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (6)
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (1)
11-11: Path rename to system-prompt.js looks correct

Import path aligns with the new filename convention.
packages/mcp/mcp-client-chat/src/types/ai-SDK.ts (1)
4-10: Type surfacing looks good

The option types omitting
model
align with howgenerateText/streamText
are invoked here, andLanguageModel
matches provider usage.packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
88-103: Verify ToolResultPart shape (output vs content/result)

The
ToolResultPart
constructed here uses anoutput
property with{ type: 'text', value: msg.content }
. Depending on the AI SDK version, the expected field may becontent
(orresult
). Please confirm against the SDK’sToolResultPart
type; otherwise, type mismatches may be silently accepted at compile time (viaany
) and break tool result rendering.Would you like me to adjust this once you confirm the SDK version you’re targeting?
packages/mcp/mcp-client-chat/src/function-calling/index.ts (1)
1-2: ESM pathing and stale import cleanup verified
- No legacy functionCalling imports remain anywhere in the repo.
- Both system-prompt.ts modules are present under src/function-calling (and src/re-act), matching the new folder names.
- tsconfig.json for mcp-client-chat is set to "module": "NodeNext" and "moduleResolution": "nodenext", so .js specifiers in .ts source resolve correctly.
- There's no explicit "exports" field in packages/mcp/mcp-client-chat/package.json, meaning Node's default ESM fallback applies; this aligns with the intended resolution strategy.
All pathing and import cleanup checks out; no further changes needed here.
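For reference, the resolution settings described above boil down to a tsconfig fragment like this (a sketch; the package's actual tsconfig likely sets additional options):

```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "nodenext"
  }
}
```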
packages/mcp/mcp-client-chat/src/function-calling/function-calling-chat.ts (2)
3-13: Imports consolidation and ESM specifiers look good. Centralizing types under ../types/index.js and using explicit system-prompt module paths improves cohesion and avoids scattered type paths. The ".js" specifiers are consistent with ESM/NodeNext.
86-88: No changes needed for first-turn function calling. The guard this.iterationSteps > 0 will always be true on the very first request, because iterationSteps is initialized to options.maxIterationSteps || 1. That means any non-empty tools array will still be passed on turn 1, and you only stop re-sending tools on subsequent turns once the loop exits. Function calling remains fully operational on the first assistant turn; no update to the gating logic is required.
…nd improve error handling in chat response processing
Actionable comments posted: 3
♻️ Duplicate comments (2)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
1-15: Import order/sort warnings (lint-only). ESLint hints suggest adjusting import order and member sorting. Low priority, but it keeps diffs clean and CI happy if your pipeline enforces it.
Also applies to: 16-27
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (1)
118-126: Mirror the same shape-based guard in the catch block; handle empty choices safely. The error path assumes NonStreamingChoice and will throw if the response is streaming or choices is empty. Use the same delta/message discrimination and guard for missing choices.
  } catch (error) {
    console.error('Failed to organize tool calls:', error);
-   const text = (response.choices[0] as NonStreamingChoice).message.content ?? '';
+   const c0 = response.choices?.[0] as StreamingChoice | NonStreamingChoice | undefined;
+   const text =
+     c0 && 'delta' in (c0 as any)
+       ? c0.delta?.content ?? ''
+       : c0
+         ? c0.message?.content ?? ''
+         : '';
    return {
      toolCalls: [],
      thought: '',
      finalAnswer: text,
    };
  }
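The shape-based discrimination suggested here can be sketched in isolation (the choice types below are minimal stand-ins, not the project's actual definitions):

```typescript
// Minimal stand-ins for the real StreamingChoice / NonStreamingChoice shapes (assumed).
type StreamingChoice = { delta: { content?: string | null } };
type NonStreamingChoice = { message: { content?: string | null } };
type Choice = StreamingChoice | NonStreamingChoice;

// Extract text from either shape; returns '' when choices is missing or empty.
function extractText(choices: Choice[] | undefined): string {
  const c0 = choices?.[0];
  if (!c0) return '';
  // The 'in' operator narrows the union, so no cast is needed.
  return 'delta' in c0 ? (c0.delta.content ?? '') : (c0.message.content ?? '');
}

console.log(extractText([{ delta: { content: 'streamed' } }])); // streamed
console.log(extractText([{ message: { content: 'final' } }])); // final
```

Because the guard inspects the payload rather than the request options, it stays correct even if an adapter returns a different shape than was requested.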
🧹 Nitpick comments (14)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/index.ts (1)
1-1: Confirm whether a default export needs re-exporting. Note that export * from './ai-rest-api.js' will not re-export a default export. If ./ai-rest-api.js (source: ai-rest-api.ts) has a default (e.g., export default class AiRestApi {}), it won't be available via this barrel.
If that's intended, ignore. Otherwise, add an explicit default re-export:
 export * from './ai-rest-api.js';
+export { default as AiRestApi } from './ai-rest-api.js';
packages/mcp/mcp-client-chat/src/utils/index.ts (1)
3-3: Double-check filename casing for cross-platform builds. The path uses ./ai-SDK-transformer.js (capital SDK). Ensure the actual file name matches exactly on disk to avoid resolution issues on case-sensitive file systems and CI.
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (5)
145-151: Fix lint and avoid unnecessary capturing groups in the unsafe-regex heuristic. Switch the heuristic's capturing groups to non-capturing to satisfy regexp/no-unused-capturing-group, and keep behavior identical.
- const unsafe = /(\.\*|\.\+|\[[^\]]+\]\+|\)[*+]){2,}/.test(schema.pattern);
+ const unsafe = /(?:\.\*|\.\+|\[[^\]]+\]\+|\)[*+]){2,}/.test(schema.pattern);
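For a quick sanity check of that heuristic, here is a runnable sketch (the pattern is the suggested non-capturing version; the sample inputs are illustrative):

```typescript
// Flags patterns where two or more "dangerous" fragments (greedy wildcards,
// quantified character classes, quantified groups) appear back to back.
const looksUnsafe = (pattern: string): boolean =>
  /(?:\.\*|\.\+|\[[^\]]+\]\+|\)[*+]){2,}/.test(pattern);

console.log(looksUnsafe('(.*)*')); // true: '.*' immediately followed by ')*'
console.log(looksUnsafe('(.+)+')); // true: '.+' immediately followed by ')+'
console.log(looksUnsafe('^[a-z]+$')); // false: only one quantified fragment
```

This is only a heuristic, not a proof of safety; thorough validation would need a dedicated ReDoS checker such as recheck.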
101-115: allOf merge of required props can duplicate entries. When merging required, duplicates may accumulate. Deduplicate to avoid redundant validations and noisy error messages.
- required: [...(acc.required || []), ...(subSchema.required || [])],
+ required: Array.from(new Set([...(acc.required || []), ...(subSchema.required || [])])),
Also applies to: 108-109
198-200: Guard against invalid multipleOf values. JSON Schema forbids multipleOf <= 0. Zod will error or behave unexpectedly if given 0/negative. Add a guard and warn when invalid.
- if (isNumber(schema.multipleOf)) {
-   numberSchema = (numberSchema as z.ZodNumber).multipleOf(schema.multipleOf);
- }
+ if (isNumber(schema.multipleOf)) {
+   if (schema.multipleOf > 0) {
+     numberSchema = (numberSchema as z.ZodNumber).multipleOf(schema.multipleOf);
+   } else {
+     console.warn('Invalid multipleOf (must be > 0), skipping:', schema.multipleOf);
+   }
+ }
60-65: Object literal equality via JSON.stringify is order-sensitive. JSON.stringify depends on key insertion order, so two semantically equal objects with different key orders will fail the equality check. Consider a stable stringifier or a deep-equal comparison to avoid false negatives.
Option A (local helper, no deps): add a stable stringifier and use it in the refine:
// Helper (place near other utils)
function stableStringify(value: any): string {
  const seen = new WeakSet();
  const normalize = (v: any): any => {
    if (v && typeof v === 'object') {
      if (seen.has(v)) return v; // cycle guard
      seen.add(v);
      if (Array.isArray(v)) return v.map(normalize);
      return Object.keys(v)
        .sort()
        .reduce((acc, k) => {
          acc[k] = normalize(v[k]);
          return acc;
        }, {} as any);
    }
    return v;
  };
  return JSON.stringify(normalize(value));
}
Then change the refine:
- .refine((data) => JSON.stringify(data) === JSON.stringify(value), { message: `Expected ${JSON.stringify(value)}` });
+ .refine((data) => stableStringify(data) === stableStringify(value), { message: `Expected ${JSON.stringify(value)}` });
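The key-order problem is easy to demonstrate. The sketch below uses a JSON.stringify replacer to sort keys, which is one way to get the same order-insensitive behavior (illustrative; not the suggested helper verbatim, and it does not handle cycles):

```typescript
const a = { x: 1, y: 2 };
const b = { y: 2, x: 1 }; // same data, different key insertion order

console.log(JSON.stringify(a) === JSON.stringify(b)); // false

// Sort object keys during serialization so equal objects serialize identically.
const stable = (v: unknown): string =>
  JSON.stringify(v, (_key, val) =>
    val && typeof val === 'object' && !Array.isArray(val)
      ? Object.keys(val)
          .sort()
          .reduce((acc, k) => {
            acc[k] = val[k];
            return acc;
          }, {} as Record<string, unknown>)
      : val,
  );

console.log(stable(a) === stable(b)); // true
```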
159-174: Consider supporting common JSON Schema formats beyond email/url/uuid/datetime. Many schemas use format: 'uri', 'hostname', or 'ipv4'/'ipv6'. Mapping at least 'uri' → z.string().url() would improve compatibility. Optional.
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (2)
61-67: Avoid logging raw tool arguments to prevent PII leakage; truncate/sanitize logs. On JSON parse failure, logging the full function.arguments can leak secrets or large payloads. Log structured metadata and a truncated preview instead.
- } catch (error) {
-   console.error(`Failed to parse tool arguments: ${toolCall.function.arguments}`, error);
-   return {};
- }
+ } catch (error) {
+   const raw = String(toolCall.function.arguments ?? '');
+   const preview = raw.length > 512 ? raw.slice(0, 512) + '…' : raw;
+   console.error('Failed to parse tool arguments', {
+     tool: toolCall.function.name,
+     toolCallId: toolCall.id,
+     preview,
+     error,
+   });
+   return {};
+ }
40-78: Assistant message: preserve text when tool_calls exist (optional). Current logic replaces assistant content with tool-call parts, dropping any accompanying assistant text. If upstream can send both, consider emitting both text and tool-calls to retain context.
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (5)
2-10: Fix import member order to satisfy ESLint sort-imports. ESLint flags ChatBody as needing alphabetical order. Reorder the named imports to pass CI.
 import type {
-  ChatCompleteResponse,
-  ChatBody,
+  ChatBody,
+  ChatCompleteResponse,
   MCPClientOptions,
   NonStreamingChoice,
   StreamingChoice,
   Tool,
   ToolCall,
 } from '../types/index.js';
130-136: Ensure the request body's stream flag aligns with the runtime streamSwitch. Given the PR adds a streamSwitch to toggle streaming, set chatBody.stream from options.streamSwitch so the backend/SDK emits the intended shape. This doesn't affect your shape-based reading but makes the switch effective upstream.
 protected async getChatBody(): Promise<ChatBody> {
   const { apiKey, url, systemPrompt, summarySystemPrompt, model, ...llmConfig } = this.options.llmConfig;
   const chatBody: ChatBody = {
     model,
     messages: this.messages,
     ...llmConfig,
   };
+  // Let the runtime switch drive the wire-level stream behavior when provided.
+  if (typeof this.options.streamSwitch === 'boolean') {
+    chatBody.stream = this.options.streamSwitch;
+  }
   return chatBody;
 }
Follow-up: If llmConfig can contain non-ChatBody keys, consider whitelisting (model, messages, tools, temperature, stream) to avoid excess-property assignment issues.
58-61: Trim both leading and trailing punctuation in Thought extraction. The current regex only removes one side (and only once). Use a global, two-sided pattern.
- thought = matches[0][1]?.replace(/^\W|$/, '')?.trim();
+ thought = matches[0][1]?.replace(/^\W+|\W+$/g, '')?.trim();
33-36: Avoid injecting 'undefined' into the system prompt when systemPrompt is missing. Guard the base system prompt and skip falsy segments to keep the prompt clean.
- const toolStrings = tools.length ? JSON.stringify(tools) : '';
- const prompt = [this.options.llmConfig.systemPrompt, PREFIX, toolStrings, FORMAT_INSTRUCTIONS, SUFFIX].join('\n\n');
+ const toolStrings = tools.length ? JSON.stringify(tools) : '';
+ const basePrompt = this.options.llmConfig.systemPrompt ?? '';
+ const prompt = [basePrompt, PREFIX, toolStrings, FORMAT_INSTRUCTIONS, SUFFIX]
+   .filter(Boolean)
+   .join('\n\n');
99-106: Prefer stable UUIDs for tool call IDs. Math.random() is fine but can collide under load. If the runtime allows, prefer crypto.randomUUID() with a fallback.
- toolCalls.push({
-   id: `call_${Math.random().toString(36).slice(2)}`,
+ toolCalls.push({
+   id:
+     typeof crypto !== 'undefined' && 'randomUUID' in crypto
+       ? `call_${crypto.randomUUID()}`
+       : `call_${Math.random().toString(36).slice(2)}`,
    type: 'function',
    function: {
      name: action,
      arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {}),
    },
  });
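A self-contained version of the suggested ID helper, assuming Node's node:crypto is available (the fallback mirrors the existing Math.random approach):

```typescript
import { randomUUID } from 'node:crypto';

// Prefer a v4 UUID for collision resistance; fall back to Math.random()
// only when randomUUID is unavailable (e.g. very old runtimes).
function makeToolCallId(): string {
  const suffix =
    typeof randomUUID === 'function' ? randomUUID() : Math.random().toString(36).slice(2);
  return `call_${suffix}`;
}

console.log(makeToolCallId()); // e.g. call_9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d
```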
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (11)
- packages/mcp/mcp-client-chat/package.json (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-rest-api/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/base-ai.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (3 hunks)
- packages/mcp/mcp-client-chat/src/types/ai-SDK.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/types/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/utils/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/index.ts
- packages/mcp/mcp-client-chat/src/ai/index.ts
- packages/mcp/mcp-client-chat/src/types/index.ts
- packages/mcp/mcp-client-chat/src/ai/base-ai.ts
- packages/mcp/mcp-client-chat/package.json
- packages/mcp/mcp-client-chat/src/types/ai-SDK.ts
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts
📚 Learning: 2025-05-28T01:54:36.631Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
Applied to files:
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts
🧬 Code graph analysis (2)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (5)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
- LlmConfig (62-70)
- ChatBody (96-102)
- ChatCompleteResponse (168-177)
packages/mcp/mcp-client-chat/src/types/ai-SDK.ts (4)
- LanguageModel (8-8)
- GenerateTextOptions (4-4)
- ModelMessage (10-10)
- StreamTextOptions (6-6)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/default-config.ts (1)
- getDefaultModel (18-20)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (1)
- jsonSchemaToZod (288-325)
packages/mcp/mcp-client-chat/src/utils/ai-SDK-transformer.ts (2)
- transformChatResult (6-44)
- toOpenAIChunk (46-84)
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
- StreamingChoice (152-157)
- NonStreamingChoice (145-150)
- ChatBody (96-102)
🪛 ESLint
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
[error] 7-7: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 21-21: Member 'GenerateTextOptions' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 26-26: ../../utils/index.js
import should occur before import of ./default-config.js
(import-x/order)
[error] 26-26: Member 'toOpenAIChunk' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 174-174: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 197-197: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts
[error] 4-4: Member 'ChatBody' of the import declaration should be sorted alphabetically.
(sort-imports)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
[error] 145-145: Capturing group number 1 is defined but never used.
(regexp/no-unused-capturing-group)
🪛 ast-grep (0.38.6)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
[warning] 146-146: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(schema.pattern)
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🔇 Additional comments (5)
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/index.ts (2)
1-1: LGTM: barrel export aligns with project pattern. The index barrel cleanly exposes the REST client API and keeps public entry points consistent with the SDK side. No functional risk here.
1-1: All TS/ESM settings for .js imports are properly configured. I've verified that in packages/mcp/mcp-client-chat:
- tsconfig.json uses "module": "NodeNext" and "moduleResolution": "nodenext", which supports .js extensions in imports.
- package.json declares "type": "module", ensuring ESM output aligns with the .js-extension imports in your source.
Additionally, the SDK barrel in src/ai/ai-sdk/index.ts follows the same pattern (export * from './ai-sdk.js';), so everything is consistent. No further changes are needed here.
packages/mcp/mcp-client-chat/src/utils/index.ts (1)
1-3: LGTM: simple barrel file is fine. Re-exports look correct and keep the public surface tidy. No runtime logic; safe to merge.
packages/mcp/mcp-client-chat/src/re-act/re-act-chat.ts (2)
11-11: LGTM: system prompt import path change. The move to ./system-prompt.js (ESM with explicit .js) looks correct.
43-52: Good: discriminate response shape by inspecting the choice payload. Nice switch from option-based branching to checking 'delta' vs 'message' on the first choice. This avoids silent mismatches when the request/adapter returns a different shape.
…o support dynamic model selection and improve data handling
Actionable comments posted: 2
♻️ Duplicate comments (2)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (2)
175-205: The ESLint "ReadableStream experimental" warning is a known false positive for Node >= 18. Given your workspace targets modern Node, prefer adjusting ESLint/plugin-node targets or engines to suppress this warning rather than changing code.
If it still flags, verify engines and ESLint config with:
#!/bin/bash
# Inspect engines and node versions referenced by packages and CI
fd package.json -a | xargs -I{} sh -c 'echo "== {} =="; jq ".engines // null" "{}"'
rg -n --hidden --glob ".github/**" "setup-node|node-version"
140-153: Per-request model override is ignored in chat(); use the request model for both the call and labeling. You already fixed chatStream; mirror that here to avoid routing to the wrong model and mislabeling responses.
Apply this diff:
  async chat(chatBody: ChatBody): Promise<ChatCompleteResponse | Error> {
    try {
      const chatOptions = this.generateChatOptions(chatBody);
-     const { messages, tools, toolChoice, prompt, ...rest } = chatOptions;
-     const result: GenerateTextResult<ToolSet, unknown> = await generateText({
-       model: this.model,
+     const { messages, tools, toolChoice, ...rest } = chatOptions;
+     const model = (chatBody as any).model ?? this.model;
+     const result: GenerateTextResult<ToolSet, unknown> = await generateText({
+       model,
        messages: messages || [],
        ...(tools && { tools }),
        ...(toolChoice && { toolChoice }),
        ...rest,
      });
-     const response: ChatCompleteResponse = transformChatResult(result, this.llmConfig.model);
+     const response: ChatCompleteResponse = transformChatResult(result, model);
🧹 Nitpick comments (5)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (3)
142-151: Regex guard: switch to a non-capturing group to satisfy ESLint and reduce overhead. Good call adding a safety heuristic for patterns. Minor nit: the regex uses an unused capturing group; make it non-capturing to silence regexp/no-unused-capturing-group.
Apply this diff:
- const unsafe = /(\.\*|\.\+|\[[^\]]+\]\+|\)[*+]){2,}/.test(schema.pattern);
+ const unsafe = /(?:\.\*|\.\+|\[[^\]]+\]\+|\)[*+]){2,}/.test(schema.pattern);
232-233: Avoid noisy warnings for arrays without items: don't pass an empty schema into jsonSchemaToZod. Passing {} triggers the "Unknown schema type" warning and still yields z.any(). Use z.any() directly when items is undefined.
Apply this diff:
- let arraySchema = z.array(jsonSchemaToZod(schema.items || {}));
+ const itemSchema = schema.items ? jsonSchemaToZod(schema.items) : z.any();
+ let arraySchema = z.array(itemSchema);
316-317: Preserve description/default on the null type by applying constraints. Currently the null branch skips applyConstraints, losing description/default metadata.
Apply this diff:
- case 'null':
-   return z.null();
+ case 'null':
+   return applyConstraints(z.null(), schema);
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (2)
25-26: Fix import order to satisfy ESLint and keep imports consistent. Swap these two imports so utils comes before default-config, as flagged by import-x/order.
Apply this diff:
-import { getDefaultModel } from './default-config.js';
-import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js';
+import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js';
+import { getDefaultModel } from './default-config.js';
143-166: Drop the unused 'prompt' destructuring in both chat and chatStream. prompt is unused in the SDK chat pathway (you send messages). Removing it quiets linters and clarifies intent.
Apply this diff (two places):
- const { messages, tools, toolChoice, prompt, ...rest } = chatOptions;
+ const { messages, tools, toolChoice, ...rest } = chatOptions;
And similarly in chatStream:
- const { messages, tools, toolChoice, prompt, ...rest } = chatOptions;
+ const { messages, tools, toolChoice, ...rest } = chatOptions;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-08-25T08:51:04.023Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts:16-39
Timestamp: 2025-08-25T08:51:04.023Z
Learning: REST API endpoints for AI models only accept string type model names, not complex model objects. When sending requests to REST APIs, ChatBody.model (typed as LanguageModel which can be string | LanguageModelV2) must be normalized to a string before JSON.stringify.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
🧬 Code graph analysis (1)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (5)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
- LlmConfig (62-70)
- ChatBody (96-102)
- ChatCompleteResponse (168-177)
packages/mcp/mcp-client-chat/src/types/ai-SDK.ts (3)
- LanguageModel (8-8)
- GenerateTextOptions (4-4)
- StreamTextOptions (6-6)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/default-config.ts (1)
- getDefaultModel (18-20)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (1)
- jsonSchemaToZod (288-328)
packages/mcp/mcp-client-chat/src/utils/ai-SDK-transformer.ts (2)
- transformChatResult (6-44)
- toOpenAIChunk (46-84)
🪛 ESLint
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
[error] 7-7: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 21-21: Member 'GenerateTextOptions' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 26-26: ../../utils/index.js
import should occur before import of ./default-config.js
(import-x/order)
[error] 26-26: Member 'toOpenAIChunk' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 175-175: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 198-198: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
[error] 145-145: Capturing group number 1 is defined but never used.
(regexp/no-unused-capturing-group)
🪛 ast-grep (0.38.6)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
[warning] 146-146: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(schema.pattern)
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🔇 Additional comments (1)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
161-206: Streaming path looks solid: request model honored and SSE error frames hardened. Good changes: honoring the per-request model, wrapping iterator.next() in try/catch, and JSON-stringifying error frames prevents malformed SSE. This aligns chatStream with chat once the chat() fix lands.
…able stringification and enhance allOf handling with intersection logic
Actionable comments posted: 0
🧹 Nitpick comments (7)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (7)
98-106: Model JSON Schema oneOf as "exactly one matches" (not just union). JSON Schema oneOf requires XOR semantics. Treating oneOf and anyOf identically under-validates and can accept ambiguous inputs. Keep anyOf as a plain union, but add an exact-one refinement for oneOf.
Apply these diffs:
-// 处理联合类型(oneOf 和 anyOf)
-const handleUnion = (schemas: JsonSchema[] | undefined, parentSchema: JsonSchema): ZodTypeAny | null => {
+// 处理联合类型(oneOf 和 anyOf)
+const handleUnion = (
+  schemas: JsonSchema[] | undefined,
+  parentSchema: JsonSchema,
+  kind: 'oneOf' | 'anyOf',
+): ZodTypeAny | null => {
   if (!Array.isArray(schemas)) return null;
   if (schemas.length === 0) return z.never();
   if (schemas.length === 1) return jsonSchemaToZod(schemas[0]);
-  const unionSchemas = schemas.map((subSchema) => jsonSchemaToZod(subSchema));
-  return applyConstraints(createUnion(unionSchemas), parentSchema);
+  const compiled = schemas.map((subSchema) => jsonSchemaToZod(subSchema));
+  let union = createUnion(compiled);
+
+  if (kind === 'oneOf') {
+    union = (union as ZodTypeAny).superRefine((val, ctx) => {
+      let matches = 0;
+      for (const s of compiled) {
+        if ((s as ZodTypeAny).safeParse(val).success) {
+          matches++;
+          if (matches > 1) break;
+        }
+      }
+      if (matches !== 1) {
+        ctx.addIssue({
+          code: z.ZodIssueCode.custom,
+          message: `Value must match exactly one subschema (matched ${matches}).`,
+        });
+      }
+    }) as ZodTypeAny;
+  }
+  return applyConstraints(union, parentSchema);
 };
 return (
   handleEnum(schema) ||
-  handleUnion(schema.oneOf, schema) ||
-  handleUnion(schema.anyOf, schema) ||
+  handleUnion(schema.oneOf, schema, 'oneOf') ||
+  handleUnion(schema.anyOf, schema, 'anyOf') ||
   handleAllOf(schema) ||
Also applies to: 301-305
142-151: Fix the ESLint regex complaint and avoid an unnecessary capture group. Switch the heuristic to a non-capturing group. This resolves eslint(regexp/no-unused-capturing-group) without changing behavior.
- const unsafe = /(\.\*|\.\+|\[[^\]]+\]\+|\)[*+]){2,}/.test(schema.pattern);
+ const unsafe = /(?:\.\*|\.\+|\[[^\]]+\]\+|\)[*+]){2,}/.test(schema.pattern);
Optional: consider compiling with the 'u' flag for Unicode patterns if aligned with your JSON Schema dialect.
170-172: Align the date-time format with RFC 3339 by allowing an offset. JSON Schema's date-time is RFC 3339; offsets ('Z' or ±hh:mm) should be allowed. Zod's string().datetime({ offset: true }) enforces that.
- case 'date-time':
-   stringSchema = stringSchema.datetime();
+ case 'date-time':
+   stringSchema = stringSchema.datetime({ offset: true });
    break;
159-174: Support the common JSON Schema formats 'uri' and 'uri-reference'. Many schemas use 'uri'/'uri-reference' rather than 'url'. Mapping them to Zod's url() improves interoperability.
 if (schema.format) {
   switch (schema.format) {
     case 'email':
       stringSchema = stringSchema.email();
       break;
     case 'url':
       stringSchema = stringSchema.url();
       break;
+    case 'uri':
+    case 'uri-reference':
+      // Map to closest Zod validator
+      stringSchema = stringSchema.url();
+      break;
     case 'uuid':
       stringSchema = stringSchema.uuid();
       break;
     case 'date-time':
       stringSchema = stringSchema.datetime({ offset: true });
       break;
   }
 }
37-38: Tighten the union helper typing. Type-assert the array to the variadic tuple Zod expects, rather than reconstructing it manually.
-const createUnion = (schemas: ZodTypeAny[]) =>
-  schemas.length === 1 ? schemas[0] : z.union([schemas[0], schemas[1], ...schemas.slice(2)]);
+const createUnion = (schemas: ZodTypeAny[]) =>
+  schemas.length === 1 ? schemas[0] : z.union(schemas as [ZodTypeAny, ZodTypeAny, ...ZodTypeAny[]]);
205-231: Implement tuple additionalItems semantics (draft-07/prefixItems). When items is an array (tuple), JSON Schema allows controlling extra items via additionalItems. The current code forbids extras. Add support for:
- additionalItems: false: current behavior (no change)
- additionalItems: true: allow any type for the rest
- additionalItems: object: rest items validated against that schema
Apply these diffs:
 if (Array.isArray(schema.items)) {
   if (schema.items.length === 0) {
-    return applyConstraints(z.tuple([]), schema);
+    let emptyTuple: ZodTypeAny = z.tuple([]);
+    if (schema.additionalItems === true) {
+      emptyTuple = (emptyTuple as any).rest(z.any());
+    } else if (schema.additionalItems && typeof schema.additionalItems === 'object') {
+      emptyTuple = (emptyTuple as any).rest(jsonSchemaToZod(schema.additionalItems));
+    }
+    return applyConstraints(emptyTuple, schema);
   }
   const tupleItems = schema.items.map((itemSchema) => jsonSchemaToZod(itemSchema));
   let tupleSchema: ZodTypeAny =
     tupleItems.length === 1
       ? z.tuple([tupleItems[0]])
       : z.tuple([tupleItems[0], ...tupleItems.slice(1)] as [ZodTypeAny, ...ZodTypeAny[]]);
+  if (schema.additionalItems === true) {
+    tupleSchema = (tupleSchema as any).rest(z.any());
+  } else if (schema.additionalItems && typeof schema.additionalItems === 'object') {
+    tupleSchema = (tupleSchema as any).rest(jsonSchemaToZod(schema.additionalItems));
+  }
+
   if (isNumber(schema.minItems)) {
     tupleSchema = tupleSchema.refine((arr) => arr.length >= schema.minItems!, {
       message: `Tuple must have at least ${schema.minItems} items`,
     });
   }
Companion interface tweak below adds the missing field.
24-33: Add the missing additionalItems field to the JsonSchema interface. This unlocks the tuple rest semantics wired above.
   multipleOf?: number;
   minItems?: number;
   maxItems?: number;
   uniqueItems?: boolean;
+  additionalItems?: boolean | JsonSchema;
   minProperties?: number;
   maxProperties?: number;
   additionalProperties?: boolean | JsonSchema;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
(1 hunks)
🧰 Additional context used
🪛 ast-grep (0.38.6)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
[warning] 146-146: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(schema.pattern)
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🪛 ESLint
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts
[error] 145-145: Capturing group number 1 is defined but never used.
(regexp/no-unused-capturing-group)
🔇 Additional comments (4)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (4)
287-328
: Solid dispatch order and graceful fallbacks. Nice precedence chain: const → enum → unions → allOf → multi-type → primitives, with sensible fallbacks and warnings. This keeps behavior predictable and easy to reason about.
241-244
: LGTM: uniqueItems via deep structural equality. Using stableStringify gives deterministic equality across object key orders. Good balance between correctness and simplicity.
59-66
: LGTM: stableStringify is deterministic and side-effect free. Sorting keys ensures order-insensitive comparison for objects while preserving array order, matching JSON Schema semantics for const/uniqueItems.
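A minimal sketch of such a deterministic stringify (assuming the reviewed helper sorts object keys and preserves array order; the real implementation may differ):

```typescript
// Deterministic stringify: object keys are sorted so that structurally equal
// values serialize identically regardless of insertion order; array order is
// preserved, matching JSON Schema semantics for const/uniqueItems.
function stableStringify(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(stableStringify).join(',')}]`;
  }
  if (value !== null && typeof value === 'object') {
    const record = value as Record<string, unknown>;
    const entries = Object.keys(record)
      .sort()
      .map((key) => `${JSON.stringify(key)}:${stableStringify(record[key])}`);
    return `{${entries.join(',')}}`;
  }
  return JSON.stringify(value);
}

// Objects differing only in key insertion order compare equal:
const a = stableStringify({ x: 1, y: [2, { b: 3, a: 4 }] });
const b = stableStringify({ y: [2, { a: 4, b: 3 }], x: 1 });
```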
1-2
: Zod API compatibility verified – The earliest Zod version pinned in this monorepo is ^3.24.2, which fully supports all of the APIs used:
- Monorepo Zod versions:
• Root & docs packages → ^3.25.13
• mcp-client-chat, proxy-server, service, task-mcp → ^3.24.2
- string().datetime was introduced in Zod 3.20 (github.com)
- number().multipleOf has been available since early Zod 3 (confirmed in current docs) (zodjs.netlify.app)
- object().passthrough exists in all Zod 3.x releases (shown in v3.23 docs) (npmjs.com)
- No .rest() usage found in the codebase
Because all required methods are supported by the lowest pinned version (3.24.2), there’s no build-time risk—this concern can be resolved.
…ve error handling in chat iteration process
Actionable comments posted: 1
♻️ Duplicate comments (1)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (1)
46-46
: streamSwitch source likely wrong object (duplicate of earlier feedback).Earlier review noted streamSwitch belongs to MCPClientOptions, not llmConfig. Reading from llmConfig will ignore a top-level toggle.
Apply:
```diff
- this.streamSwitch = options.llmConfig.streamSwitch ?? true;
+ this.streamSwitch = (options as any).streamSwitch ?? true;
```

Run to confirm where streamSwitch is declared:
```bash
#!/usr/bin/env bash
set -euo pipefail
rg -nP --type=ts '\binterface\s+MCPClientOptions\b(?s).*?\}' -C3
rg -nP --type=ts '\binterface\s+LlmConfig\b(?s).*?\}' -C3
rg -nP --type=ts '\bstreamSwitch\b' -n -C2
```
🧹 Nitpick comments (5)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (5)
37-38
: Tighten aiInstance typing; keep streamSwitch here.
- Prefer definite assignment over nullable to avoid non-null assertions later.
Apply:
```diff
-  protected aiInstance: BaseAi | null = null;
+  protected aiInstance!: BaseAi;
```

If there’s any chance chat() is called before init(), add a runtime guard (or document the init()-then-chat contract).
197-208
: Streaming branch: type and cleanup on error.
- Use globalThis.ReadableStream to appease Node/ESLint.
- Cancel the stream if assembly fails to avoid leaks.
Apply:
```diff
-    if (this.streamSwitch) {
-      try {
-        const streamResponses: ReadableStream = await this.queryChatCompleteStreaming();
-        response = await generateStreamingResponses(streamResponses, this.writeMessageDelta.bind(this));
-      } catch (error) {
-        response = error instanceof Error ? error : new Error(String(error));
-      }
-    } else {
+    if (this.streamSwitch) {
+      let streamResponses: globalThis.ReadableStream | undefined;
+      try {
+        streamResponses = await this.queryChatCompleteStreaming();
+        response = await generateStreamingResponses(streamResponses, this.writeMessageDelta.bind(this));
+      } catch (error) {
+        try { await streamResponses?.cancel?.(); } catch {}
+        response = error instanceof Error ? error : new Error(String(error));
+      }
+    } else {
       response = await this.queryChatComplete();
     }
```
215-215
: Avoid a magic-number sentinel for “manual end”. Using -1 as a control flag obscures intent and risks misuse; prefer a boolean (e.g., manualEnd) or an enum.
Example:
```diff
-    this.iterationSteps = -1; // manual end is -1
+    this.manualEnd = true;
```

(Define `protected manualEnd = false;` and adjust checks accordingly.)
Also applies to: 225-225
469-476
: Non-streaming error handling is fine, but unify the error surface if possible. Returning an Error here while the streaming path throws can complicate callers. Consider standardizing on an Error return or a structured ChatCompleteResponse with an error field across both.
507-517
: Streaming call path looks good; minor log clarity. The log label says chat/complete; consider “chat/complete (streaming)” for clarity. The signature already uses globalThis.ReadableStream.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/mcp/mcp-client-chat/README.md (5 hunks)
- packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (10 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/mcp/mcp-client-chat/README.md
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-05-28T01:54:36.631Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
Applied to files:
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
🧬 Code graph analysis (1)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (3)
- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (4): MCPClientOptions (72-77), ChatCompleteResponse (168-177), ChatBody (96-102), ToolCall (126-130)
- packages/mcp/mcp-client-chat/src/ai/ai-instance.ts (1): getAiInstance (4-12)
- packages/mcp/mcp-client-chat/src/utils/transformer.ts (1): generateStreamingResponses (87-156)
🪛 ESLint
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
[error] 201-201: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (6)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (6)
6-8
: Imports alignment looks good. Type and module paths are consistent with the new layering (types/utils/ai).
Also applies to: 12-12, 21-21
50-50
: AI backend init path is correct. getAiInstance(llmConfig) selection matches the SDK/REST switch semantics.
232-234
: Thought delta only in the non-streaming path is correct. Prevents duplicate content when generateStreamingResponses already emitted deltas.
270-275
: Iteration finalization is wrapped properly. Centralizing end-of-loop work in completeChatIteration is good.
405-405
: Including the tool name on 'tool' messages aligns with the types. Matches the Message variant for role 'tool' (name?: string).
557-561
: Abstract signatures look consistent with the new types. Clear contract for implementors.
```diff
 protected async completeChatIteration(): Promise<void> {
   try {
     if (
       this.messages[this.messages.length - 1].role === Role.ASSISTANT &&
       this.messages[this.messages.length - 1].content?.length > 0
     ) {
       if (this.iterationSteps === -1) {
         await this.writeMessageDelta(this.messages[this.messages.length - 1].content as string, 'assistant');
       }

       this.writeMessageEnd();
       return;
     }

     const summaryPrompt = this.options.llmConfig.summarySystemPrompt || 'Please provide a brief summary.';

     this.organizePromptMessages({ role: Role.USER, content: summaryPrompt });

     const result = await this.queryChatCompleteStreaming();

     result.pipeTo(this.transformStream.writable);
   } catch (error) {
-    console.error('Chat iteration failed:', error);
+    console.error('Complete chat iteration failed:', error);
     throw error;
   }
 }

 protected async writeMessageEnd() {
   if (this.transformStream.writable.locked) {
     console.warn('Stream is already locked, skipping end message');
     return;
   }

   const writer = this.transformStream.writable.getWriter();

   try {
     await writer.ready;
     await writer.write(new TextEncoder().encode('data: [DONE]\n\n'));
     await writer.close();
   } catch (error) {
     console.error('Failed to write end message:', error);
   } finally {
     writer.releaseLock();
   }
 }
```
🛠️ Refactor suggestion
Respect streamSwitch in summary path and await end sentinel.
- Summary currently always streams; it should honor streamSwitch.
- Await writeMessageEnd to ensure [DONE] is flushed before returning.
Apply:

```diff
-      if (this.iterationSteps === -1) {
+      if (this.iterationSteps === -1) {
         await this.writeMessageDelta(this.messages[this.messages.length - 1].content as string, 'assistant');
       }
-
-      this.writeMessageEnd();
+      await this.writeMessageEnd();
       return;
     }

     const summaryPrompt = this.options.llmConfig.summarySystemPrompt || 'Please provide a brief summary.';
     this.organizePromptMessages({ role: Role.USER, content: summaryPrompt });
-
-    const result = await this.queryChatCompleteStreaming();
-
-    result.pipeTo(this.transformStream.writable);
+    if (this.streamSwitch) {
+      const result = await this.queryChatCompleteStreaming();
+      await result.pipeTo(this.transformStream.writable);
+    } else {
+      const resp = await this.queryChatComplete();
+      if (resp instanceof Error) {
+        await this.generateErrorStream(resp.message).pipeTo(this.transformStream.writable);
+      } else {
+        const content =
+          (resp as any)?.choices?.[0]?.message?.content ??
+          (resp as any)?.choices?.[0]?.delta?.content ??
+          '';
+        if (content) {
+          await this.writeMessageDelta(content, 'assistant');
+        }
+        await this.writeMessageEnd();
+      }
+    }
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts around lines 277 to 323,
the summary path always streams and does not await the end sentinel; update the
summary branch to honor the instance's streamSwitch option (only pipe the
streaming result to transformStream when streamSwitch is true), and when not
streaming collect the streamed response fully (or use the appropriate
non-streaming API) and write it via the same message write method; also ensure
you await writeMessageEnd() before returning so the '[DONE]' sentinel is
flushed. Keep existing error handling and stream lock checks when calling
writeMessageEnd().
…th OpenAI provider initialization
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (1)
478-504
: SSE framing bug: missing blank line after [DONE] and experimental type. SSE events require a blank-line terminator. Also prefer `globalThis.ReadableStream`.

```diff
   protected generateErrorStream(errorMessage: string) {
@@
-    return new ReadableStream({
+    return new globalThis.ReadableStream({
       start(controller) {
         controller.enqueue(data);
-        controller.enqueue('data: [DONE]\n');
+        controller.enqueue('data: [DONE]\n\n');
         controller.close();
       },
     });
   }
```
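For reference, the SSE wire format requires each event to end with a blank line; a tiny framing helper (illustrative, not from the codebase) makes the `\n\n` terminator explicit:

```typescript
// Each SSE event is one or more "data:" lines terminated by a blank line.
// A sentinel written as "data: [DONE]\n" (single newline) is an incomplete
// event, and many parsers will never deliver it to the consumer.
function sseEvent(data: string): string {
  const lines = data
    .split('\n')
    .map((line) => `data: ${line}`)
    .join('\n');
  return `${lines}\n\n`;
}
```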
♻️ Duplicate comments (3)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (3)
37-39
: Stream toggle sourced from llmConfig — correct and preserves explicit false. Reading `streamSwitch` from `options.llmConfig.streamSwitch ?? true` matches the README/API surface. Keep as-is.

Also applies to: 46-46
49-51
: aiInstance initialization via getAiInstance is correct. Given that getAiInstance always returns a BaseAi or throws, no null-guard is needed here.
277-289
: Honor streamSwitch in the summary path and await end/pipe to avoid unhandled rejections. The summary currently always streams and doesn’t await `[DONE]`. Apply:

```diff
-      if (this.iterationSteps === -1) {
+      if (this.iterationSteps === -1) {
         await this.writeMessageDelta(this.messages[this.messages.length - 1].content as string, 'assistant');
       }
-
-      this.writeMessageEnd();
+      await this.writeMessageEnd();
       return;
     }

     const summaryPrompt = this.options.llmConfig.summarySystemPrompt || 'Please provide a brief summary.';
     this.organizePromptMessages({ role: Role.USER, content: summaryPrompt });
-
-    const result = await this.queryChatCompleteStreaming();
-
-    result.pipeTo(this.transformStream.writable);
+    if (this.streamSwitch) {
+      const result = await this.queryChatCompleteStreaming();
+      await result.pipeTo(this.transformStream.writable);
+    } else {
+      const resp = await this.queryChatComplete();
+      if (resp instanceof Error) {
+        await this.generateErrorStream(resp.message).pipeTo(this.transformStream.writable);
+      } else {
+        const content =
+          (resp as any)?.choices?.[0]?.message?.content ??
+          (resp as any)?.choices?.[0]?.delta?.content ??
+          '';
+        if (content) {
+          await this.writeMessageDelta(content, 'assistant');
+        }
+        await this.writeMessageEnd();
+      }
+    }
```

Also applies to: 295-301
🧹 Nitpick comments (1)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (1)
304-308
: Avoid silently dropping the [DONE] sentinel when locked. Early-return on `locked` can skip `[DONE]`. Try a brief retry before giving up.

```diff
-    if (this.transformStream.writable.locked) {
-      console.warn('Stream is already locked, skipping end message');
-      return;
-    }
+    if (this.transformStream.writable.locked) {
+      // Try briefly to acquire the lock to avoid dropping [DONE]
+      for (let i = 0; i < 3 && this.transformStream.writable.locked; i++) {
+        await Promise.resolve();
+      }
+      if (this.transformStream.writable.locked) {
+        console.warn('Stream is locked, skipping end message');
+        return;
+      }
+    }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/mcp/mcp-client-chat/README.md (5 hunks)
- packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (10 hunks)
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-05-28T01:54:36.631Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
Applied to files:
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
📚 Learning: 2025-05-28T12:25:07.698Z
Learnt from: rhlin
PR: opentiny/tiny-agent#16
File: packages/mcp/mcp-proxy-server/test.ts:15-16
Timestamp: 2025-05-28T12:25:07.698Z
Learning: When using MCP SDK transports (like StreamableHTTPServerTransport, SSEServerTransport) with Express, do not add standard JSON parsing middleware like express.json() as it conflicts with how the official MCP transports handle request data internally. MCP transports are designed to process raw HTTP request bodies and streams directly.
Applied to files:
packages/mcp/mcp-client-chat/README.md
🧬 Code graph analysis (1)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (3)
- packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (4): MCPClientOptions (72-77), ChatCompleteResponse (168-177), ChatBody (96-102), ToolCall (126-130)
- packages/mcp/mcp-client-chat/src/ai/ai-instance.ts (1): getAiInstance (4-12)
- packages/mcp/mcp-client-chat/src/utils/transformer.ts (1): generateStreamingResponses (87-156)
🪛 ESLint
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts
[error] 201-201: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (7)
packages/mcp/mcp-client-chat/src/mcp-client-chat.ts (3)
6-9
: Imports look consistent with the new abstraction. No issues spotted.
403-404
: Including the tool name on tool messages is correct per the updated Message union. LGTM.
468-475
: No changes required: createMCPClientChat always calls init() before returning, so aiInstance is safely initialized.
packages/mcp/mcp-client-chat/README.md (4)
5-12
: Feature list updates read well. Matches the implementation.
21-43
: Config surface docs align with the code (useSDK, streamSwitch, summarySystemPrompt, etc.). No issues.
59-61
: Section header changes are fine. Proceed.
55-56
: customTransport doc is helpful and matches the code. LGTM.
```typescript
    streamSwitch: true,
    temperature: 0.7,
    maxTokens: 1000,
  },
  maxIterationSteps: 3,
```
🛠️ Refactor suggestion
REST example: bridge the Web ReadableStream to Node and set SSE headers.

`streamResponse` is a Web stream; `res` is a Node writable. Also set the SSE headers.
```diff
-import express from "express";
+import express from "express";
+import { Readable } from "node:stream";
@@
-  try {
-    // stream the response back
-    const streamResponse = await mcpClientChat.chat("your question...");
-
-    streamResponse.pipe(res);
+  try {
+    // stream the response back
+    const streamResponse = await mcpClientChat.chat("your question...");
+    res.setHeader("Content-Type", "text/event-stream");
+    res.setHeader("Cache-Control", "no-cache");
+    res.setHeader("Connection", "keep-alive");
+    Readable.fromWeb(streamResponse as any).pipe(res);
```
Also applies to: 100-107
🤖 Prompt for AI Agents
packages/mcp/mcp-client-chat/README.md around lines 79-83 (also apply same
changes to 100-107): the REST example currently treats streamResponse as a Node
stream; instead set SSE headers on the Node response (Content-Type:
text/event-stream, Cache-Control: no-cache, Connection: keep-alive) and bridge
the Web ReadableStream to the Node writable by either using
Readable.fromWeb(streamResponse) and piping it to res, or by reading the Web
stream via getReader() and writing chunks to res.write() (formatting each chunk
as SSE events if needed), and ensure you end/close the response when the Web
stream is done and handle reader cancellation/errors.
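A runnable sketch of the bridging approach described above, using Node's `Readable.fromWeb`. The `collectViaNodeStream` helper and the sample stream are illustrative; in the Express handler you would pipe to `res` instead of collecting:

```typescript
import { Readable } from 'node:stream';

// Bridge a Web ReadableStream (as chat() returns per this review) to a
// Node-style consumer. In an Express handler you would set the SSE headers
// and pipe to `res`; here we just collect the chunks to show the mechanics.
async function collectViaNodeStream(webStream: ReadableStream<Uint8Array>): Promise<string> {
  const nodeStream = Readable.fromWeb(webStream as any);
  let out = '';
  for await (const chunk of nodeStream) {
    out += Buffer.from(chunk).toString('utf8');
  }
  return out;
}

// Sample Web stream standing in for the chat() response.
const webStream = new ReadableStream<Uint8Array>({
  start(controller) {
    const encoder = new TextEncoder();
    controller.enqueue(encoder.encode('data: hello\n\n'));
    controller.enqueue(encoder.encode('data: [DONE]\n\n'));
    controller.close();
  },
});
```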
```typescript
import { createMCPClientChat } from "@opentiny/tiny-agent-mcp-client-chat";
import { createOpenAI } from '@ai-sdk/openai';

// Create the OpenAI provider
const openai = createOpenAI({
  apiKey: "<your-openai-api-key>", // API key sent via the Authorization header. Defaults to the OPENAI_API_KEY environment variable.
  baseURL: "https://api.openai.com/v1", // Alternate URL prefix for API calls, e.g. via a proxy server. Defaults to https://api.openai.com/v1.
  name: "", // Provider name. When using an OpenAI-compatible provider, set this to change the model provider property. Defaults to openai.
  organization: "", // OpenAI organization.
  project: "", // OpenAI project.
  fetch: (input: RequestInfo, init?: RequestInit) => Promise<Response>, // Custom fetch implementation. Defaults to the global fetch. Useful as middleware to intercept requests, or to supply a custom fetch for testing.
  headers: { // Custom headers to include in requests.
    'header-name': 'header-value',
  },
});
```
OpenAI example: remove type signature placeholder for fetch.
Using a TS type annotation as a value will break copy/paste.
```diff
- fetch: (input: RequestInfo, init?: RequestInit) => Promise<Response>, // Custom fetch implementation. Defaults to the global fetch.
+ // fetch: customFetch, // custom fetch implementation (optional)
```
Optionally show a concrete example above:
// const customFetch = (input: RequestInfo | URL, init?: RequestInit) => fetch(input, init);
🤖 Prompt for AI Agents
In packages/mcp/mcp-client-chat/README.md around lines 118 to 132, the example
uses a TypeScript type annotation as the value for the fetch field which breaks
copy/paste; replace the typed placeholder with either a real function reference
(e.g., a custom fetch implementation or the global fetch) or a commented example
invocation, ensuring the README shows a runnable snippet without TS-only value
syntax, and update the comments to explain the expected signature (input, init)
rather than embedding a type annotation as the value.
```typescript
if (this.streamSwitch) {
  try {
    const streamResponses: ReadableStream = await this.queryChatCompleteStreaming();
    response = await generateStreamingResponses(streamResponses, this.writeMessageDelta.bind(this));
  } catch (error) {
    response = error instanceof Error ? error : new Error(String(error));
  }
} else {
  response = await this.queryChatComplete();
}
```
🛠️ Refactor suggestion
Fix ESLint Node built-ins warning: use globalThis.ReadableStream.
Type-annotate with `globalThis.ReadableStream` to satisfy `n/no-unsupported-features/node-builtins`.
```diff
-      try {
-        const streamResponses: ReadableStream = await this.queryChatCompleteStreaming();
+      try {
+        const streamResponses: globalThis.ReadableStream = await this.queryChatCompleteStreaming();
         response = await generateStreamingResponses(streamResponses, this.writeMessageDelta.bind(this));
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
   if (this.streamSwitch) {
     try {
-      const streamResponses: ReadableStream = await this.queryChatCompleteStreaming();
+      const streamResponses: globalThis.ReadableStream = await this.queryChatCompleteStreaming();
       response = await generateStreamingResponses(streamResponses, this.writeMessageDelta.bind(this));
     } catch (error) {
       response = error instanceof Error ? error : new Error(String(error));
     }
   } else {
     response = await this.queryChatComplete();
   }
```
🧰 Tools
🪛 ESLint
[error] 201-201: The 'ReadableStream' is still an experimental feature The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🤖 Prompt for AI Agents
In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts around lines 199 to 208,
the local type for streamResponses uses the Node builtin ReadableStream which
triggers the ESLint rule n/no-unsupported-features/node-builtins; change the
type annotation to globalThis.ReadableStream (e.g. const streamResponses:
globalThis.ReadableStream = await ...) or cast the result to
globalThis.ReadableStream so the runtime-agnostic global is used; ensure no
import of ReadableStream is added and update any related function signatures to
accept globalThis.ReadableStream if necessary.
… in ai-sdk module
Actionable comments posted: 0
♻️ Duplicate comments (1)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
74-92
: Honor the per-request model in chat() and in the transform result. Use the request’s model (falling back to this.model) and pass the same value into transformChatResult. Also drop the unused prompt.

```diff
   async chat(chatBody: ChatBody): Promise<ChatCompleteResponse | Error> {
     try {
       const chatOptions = this.generateChatOptions(chatBody);
-      const { messages, tools, toolChoice, prompt, ...rest } = chatOptions;
+      const model = (chatBody as any).model ?? this.model;
+      const { messages, tools, toolChoice, ...rest } = chatOptions;
       const result: GenerateTextResult<ToolSet, unknown> = await generateText({
-        model: this.model,
+        model,
         messages: messages || [],
         ...(tools && { tools }),
         ...(toolChoice && { toolChoice }),
         ...rest,
       });
-      const response: ChatCompleteResponse = transformChatResult(result, this.llmConfig.model);
+      const response: ChatCompleteResponse = transformChatResult(result, model);
       return response;
```
🧹 Nitpick comments (7)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/utils.ts (3)
1-11
: Sort imported type members to satisfy lint. Alphabetize the named imports to fix the sort-imports warning.

```diff
-import type {
-  ModelMessage,
-  UserModelMessage,
-  AssistantModelMessage,
-  AssistantContent,
-  ToolCallPart,
-  ToolResultPart,
-  ToolModelMessage,
-  SystemModelMessage,
-  UserContent,
-} from 'ai';
+import type {
+  AssistantContent,
+  AssistantModelMessage,
+  ModelMessage,
+  SystemModelMessage,
+  ToolCallPart,
+  ToolModelMessage,
+  ToolResultPart,
+  UserContent,
+  UserModelMessage,
+} from 'ai';
```

14-14
: Fix typo in comment: “opneai” → “openai”.

```diff
-// Convert opneai-format messages to ai-sdk-format messages
+// Convert openai-format messages to ai-sdk-format messages
```

64-72
: Guard toolName to avoid undefined at runtime. msg.name is optional; casting to string can yield undefined. Add a safe fallback.

```diff
-          toolName: msg.name as string,
+          toolName: msg.name ?? '',
```

packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (4)
2-15
: Drop unused types from the 'ai' import. Only these are used here: GenerateTextResult, StreamTextResult, ToolSet, ModelMessage. Removing the rest quiets no-unused-vars.

```diff
-import type {
-  AssistantContent,
-  ModelMessage,
-  SystemModelMessage,
-  UserModelMessage,
-  AssistantModelMessage,
-  ToolCallPart,
-  ToolModelMessage,
-  GenerateTextResult,
-  StreamTextResult,
-  ToolSet,
-  UserContent,
-  ToolResultPart,
-} from 'ai';
+import type { GenerateTextResult, ModelMessage, StreamTextResult, ToolSet } from 'ai';
```

25-27
: Fix import order and member sorting per lint. Place the utils import before the local file import and sort members.

```diff
-import { getDefaultModel } from './default-config.js';
-import { jsonSchemaToZod, transformChatResult, toOpenAIChunk } from '../../utils/index.js';
+import { jsonSchemaToZod, toOpenAIChunk, transformChatResult } from '../../utils/index.js';
+import { getDefaultModel } from './default-config.js';
```

99-106
: Remove unused ‘prompt’ from destructuring. Not used; trimming it avoids lint noise.

```diff
-      const { messages, tools, toolChoice, prompt, ...rest } = chatOptions;
+      const { messages, tools, toolChoice, ...rest } = chatOptions;
```

109-139
: ReadableStream lint warning is a config false positive. ReadableStream is stable in modern Node; adjust the ESLint node-plugin target or engines range to suppress n/no-unsupported-features warnings.
If your workspace targets Node >=18/22, set engines.node accordingly at the root and configure eslint-plugin-n to that version range.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/index.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/utils.ts (1 hunks)
- packages/mcp/mcp-client-chat/src/index.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/mcp/mcp-client-chat/src/index.ts
- packages/mcp/mcp-client-chat/src/ai/ai-sdk/index.ts
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
📚 Learning: 2025-05-28T01:54:36.631Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/utils.ts
📚 Learning: 2025-08-15T03:03:54.467Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:48-50
Timestamp: 2025-08-15T03:03:54.467Z
Learning: The getAiInstance function in packages/mcp/mcp-client-chat/src/ai/ai-instance.ts always returns a valid BaseAi instance (either AiSDK or AiRestApi) and cannot return null. It would throw an error if initialization fails rather than returning null.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-06-05T08:34:19.168Z
Learnt from: rhlin
PR: opentiny/tiny-agent#32
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:145-210
Timestamp: 2025-06-05T08:34:19.168Z
Learning: In packages/mcp/mcp-client-chat/src/mcp-client-chat.ts, the fire-and-forget pattern for chatIteration() and immediate return of the readable stream are intentional design choices to enable streaming responses before chat processing completes.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
📚 Learning: 2025-08-25T08:51:04.023Z
Learnt from: yy-wow
PR: opentiny/tiny-agent#62
File: packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts:16-39
Timestamp: 2025-08-25T08:51:04.023Z
Learning: REST API endpoints for AI models only accept string type model names, not complex model objects. When sending requests to REST APIs, ChatBody.model (typed as LanguageModel which can be string | LanguageModelV2) must be normalized to a string before JSON.stringify.
Applied to files:
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
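The normalization this learning calls for can be sketched as below. The `modelId` field is assumed from the AI SDK's model interface; the real `ChatBody` lives in the PR.

```typescript
// Normalize ChatBody.model (string | LanguageModelV2) to a plain string before
// JSON.stringify, since REST endpoints only accept string model names.
interface LanguageModelV2Like {
  modelId: string;
}

type LanguageModel = string | LanguageModelV2Like;

function normalizeModel(model: LanguageModel): string {
  return typeof model === 'string' ? model : model.modelId;
}

// Build the REST request body with the model name already flattened.
function buildRestBody(model: LanguageModel, messages: unknown[]): string {
  return JSON.stringify({ model: normalizeModel(model), messages });
}
```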
🧬 Code graph analysis (2)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/utils.ts (1)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (1)
Message
(194-214)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (6)
packages/mcp/mcp-client-chat/src/types/mcp-client-chat.ts (3)
LlmConfig (62-70)
ChatBody (96-102)
ChatCompleteResponse (168-177)
packages/mcp/mcp-client-chat/src/types/ai-SDK.ts (4)
LanguageModel (8-8)
GenerateTextOptions (4-4)
ModelMessage (10-10)
StreamTextOptions (6-6)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/default-config.ts (1)
getDefaultModel (18-20)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/utils.ts (1)
transformMessagesToAiSdk (15-84)
packages/mcp/mcp-client-chat/src/utils/zod-schema.ts (1)
jsonSchemaToZod (288-328)
packages/mcp/mcp-client-chat/src/utils/ai-SDK-transformer.ts (2)
transformChatResult (6-44)
toOpenAIChunk (46-84)
🪛 ESLint
packages/mcp/mcp-client-chat/src/ai/ai-sdk/utils.ts
[error] 4-4: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically.
(sort-imports)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts
[error] 3-3: 'AssistantContent' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
[error] 5-5: 'SystemModelMessage' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
[error] 6-6: 'UserModelMessage' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
[error] 7-7: Member 'AssistantModelMessage' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 7-7: 'AssistantModelMessage' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
[error] 8-8: 'ToolCallPart' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
[error] 9-9: 'ToolModelMessage' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
[error] 13-13: 'UserContent' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
[error] 14-14: 'ToolResultPart' is defined but never used. Allowed unused vars must match /^_/u.
(@typescript-eslint/no-unused-vars)
[error] 21-21: Member 'GenerateTextOptions' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 26-26: '../../utils/index.js' import should occur before import of './default-config.js'
(import-x/order)
[error] 26-26: Member 'toOpenAIChunk' of the import declaration should be sorted alphabetically.
(sort-imports)
[error] 109-109: The 'ReadableStream' is still an experimental feature. The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
[error] 132-132: The 'ReadableStream' is still an experimental feature. The configured version range is '^20.19.0 || >=22.12.0'.
(n/no-unsupported-features/node-builtins)
🔇 Additional comments (2)
packages/mcp/mcp-client-chat/src/ai/ai-sdk/utils.ts (1)
26-46: Preserve assistant text and conform to the mixed-content SDK shape

In utils.ts:26-46, when msg.tool_calls exists, don't replace content with only ToolCallPart[]. Instead build an array mixing a TextPart with your tool-call parts, e.g.:

```ts
const parts: Array<TextPart | ToolCallPart> = [
  { type: 'text', text: msg.content as string },
  ...msg.tool_calls.map((tc) => ({
    type: 'tool-call',
    toolCallId: tc.id,
    toolName: tc.function.name,
    args: (() => {
      try {
        return JSON.parse(tc.function.arguments);
      } catch {
        return {};
      }
    })(),
  })),
];
assistantMessage.content = parts;
```

This preserves the original assistant text and matches AssistantContent = string | (TextPart | ToolCallPart)[]. Rename your input field to args to align with the SDK.

⛔ Skipped due to learnings
Learnt from: yy-wow
PR: opentiny/tiny-agent#18
File: packages/mcp/mcp-client-chat/src/mcp-client-chat.ts:165-169
Timestamp: 2025-05-28T01:54:36.631Z
Learning: The Message type in packages/mcp/mcp-client-chat/src/type.ts is a union type that doesn't include tool_calls field for assistant messages, only content and optional name for 'user'|'assistant'|'system' roles, and separate variant for 'tool' role with tool_call_id.
packages/mcp/mcp-client-chat/src/ai/ai-sdk/ai-sdk.ts (1)
143-154: Streaming generator looks solid

Chunk conversion, SSE framing, and the final [DONE] are correct.
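The SSE framing this comment approves of can be sketched as follows. The chunk shape is simplified here; the PR's toOpenAIChunk produces the real OpenAI-compatible payload.

```typescript
// Sketch of OpenAI-style SSE framing for streamed chat deltas: each chunk is
// serialized as a `data: <json>\n\n` event, and the stream is terminated with
// a final `data: [DONE]\n\n` sentinel.
interface DeltaChunk {
  choices: { delta: { content?: string } }[];
}

function* frameSse(chunks: DeltaChunk[]): Generator<string> {
  for (const chunk of chunks) {
    yield `data: ${JSON.stringify(chunk)}\n\n`;
  }
  yield 'data: [DONE]\n\n';
}
```

Consumers split the stream on blank lines and stop reading when the [DONE] sentinel arrives.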
…h statement for improved readability and maintainability
Summary by CodeRabbit
New Features
Breaking Changes
Documentation
Chores