Changes from all commits (20 commits)
b9d040d  feat: import ai-sdk (yy-wow, Aug 13, 2025)
af39daa  docs: add new feature (yy-wow, Aug 13, 2025)
10ed8fc  fix: remove useless comments (yy-wow, Aug 13, 2025)
f93bb1a  fix: review (yy-wow, Aug 13, 2025)
e9cb4c7  fix: review (yy-wow, Aug 15, 2025)
93fcc75  fix: review (yy-wow, Aug 15, 2025)
92ffb0b  refactor: enhance jsonSchemaToZod function with comprehensive type ha… (yy-wow, Aug 21, 2025)
2281cdb  chore: update package.json to lower Node.js version requirement and a… (yy-wow, Aug 21, 2025)
9947485  chore: update package.json to require Node.js 22.12.0, add testing sc… (yy-wow, Aug 21, 2025)
e8786c7  feat: add abort signal to AiRestApi requests for better request manag… (yy-wow, Aug 21, 2025)
3f3edab  fix: reorder response body check for improved error handling in AiRes… (yy-wow, Aug 25, 2025)
5b4e8e1  refactor: normalize model ID in request body for AiRestApi to ensure … (yy-wow, Aug 25, 2025)
3f01318  refactor: reorganize and rename function modules for improved clarity… (yy-wow, Aug 25, 2025)
d2f1b4e  chore: lower Node.js version requirement to 18.18.0 in package.json a… (yy-wow, Aug 25, 2025)
6e0ab9a  refactor: enhance AiSDK chatStream and openaiChunkGenerator methods t… (yy-wow, Aug 25, 2025)
577264c  refactor: improve object literal schema validation by implementing st… (yy-wow, Aug 25, 2025)
266b0cb  refactor: update README for OpenAI and DeepSeek configurations, impro… (yy-wow, Sep 9, 2025)
efe1154  refactor: clean up whitespace in mcp-client-chat and update README wi… (yy-wow, Sep 10, 2025)
5101d47  feat: add message transformation utility for AiSDK and update exports… (yy-wow, Sep 10, 2025)
91bec05  refactor: streamline chunk handling in ai-SDK transformer using switc… (yy-wow, Sep 10, 2025)
108 changes: 106 additions & 2 deletions packages/mcp/mcp-client-chat/README.md
@@ -2,6 +2,14 @@

A TypeScript-based Model Context Protocol (MCP) client implementation with LLM integration support.

## Features

- 🚀 **Dual-mode support**: works with both a traditional REST API and the AI SDK
- 🔧 **Tool calling**: supports both the Function Calling and ReAct agent strategies
- 📡 **Streaming responses**: supports real-time streamed output
- 🔌 **Flexible configuration**: supports multiple LLM providers and custom transport layers
- 🛠️ **MCP integration**: full MCP protocol support; can connect to multiple MCP servers

- **Installation**

```bash
@@ -10,12 +18,28 @@
npm install @opentiny/tiny-agent-mcp-client-chat --save
```

- **Parameters**

- `agentStrategy(string, optional)`: agent strategy; one of `'Function Calling'` or `'ReAct'`. Defaults to `'Function Calling'`.

- `llmConfig(object)`: LLM configuration options

  - `useSDK(boolean, optional)`: whether to use the AI SDK. Defaults to `false`.
  - `url(string)`: AI platform API endpoint; may be omitted when using the AI SDK
  - `apiKey(string)`: API key for the AI platform
  - `model(string | LanguageModelV2)`: model name, or an AI SDK model instance
  - `systemPrompt(string)`: system prompt
  - `summarySystemPrompt(string, optional)`: summarization system prompt
  - `streamSwitch(boolean, optional)`: whether to use streaming responses. Defaults to `true`.
  - `maxTokens(number, optional)`: maximum number of tokens to generate
  - `temperature(number, optional)`: temperature; controls randomness
  - `topP(number, optional)`: top-p sampling parameter
  - `topK(number, optional)`: top-k sampling parameter
  - `presencePenalty(number, optional)`: presence penalty
  - `frequencyPenalty(number, optional)`: frequency penalty
  - `stopSequences(string[], optional)`: stop sequences
  - `seed(number, optional)`: random seed
  - `maxRetries(number, optional)`: maximum number of retries
  - `abortSignal(AbortSignal, optional)`: abort signal
  - `headers(Record<string, string>, optional)`: custom request headers

- `maxIterationSteps(number)`: maximum number of agent iteration steps

@@ -28,9 +52,12 @@
  - `url(string)`: mcp-server connection address
  - `headers(object)`: request headers
  - `timeout(number)`: timeout duration
  - `customTransport(Transport | function, optional)`: custom transport layer (see the sketch below)
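The exact contract for `customTransport` isn't spelled out in this README; as a rough sketch, assuming the factory receives the per-server config and returns a `Transport` from `@modelcontextprotocol/sdk`, it could look like this:

```typescript
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical factory: the argument shape is an assumption, not a documented API.
// It builds a Streamable HTTP transport that adds an extra auth header.
const customTransport = (serverConfig: { url: string }) =>
  new StreamableHTTPClientTransport(new URL(serverConfig.url), {
    requestInit: { headers: { Authorization: "Bearer <token>" } },
  });
```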

- **Examples**

## REST API usage example

```typescript
import express from "express";
import cors from "cors";
@@ -48,6 +75,10 @@ async function main() {
apiKey: "<your-api-key>",
model: "mistralai/mistral-7b-instruct:free",
systemPrompt: "You are a helpful assistant with access to tools.",
summarySystemPrompt: "Please provide a brief summary!",
streamSwitch: true,
temperature: 0.7,
maxTokens: 1000,
},
maxIterationSteps: 3,
Comment on lines +79 to 83

🛠️ Refactor suggestion

REST example: bridge Web ReadableStream to Node and set SSE headers.

streamResponse is a Web stream; res is a Node writable. Also set SSE headers.

-import express from "express";
+import express from "express";
+import { Readable } from "node:stream";
@@
-    try {
-      // Return streamed data
-      const streamResponse = await mcpClientChat.chat("your question...");
-
-      streamResponse.pipe(res);
+    try {
+      // Return streamed data
+      const streamResponse = await mcpClientChat.chat("your question...");
+      res.setHeader("Content-Type", "text/event-stream");
+      res.setHeader("Cache-Control", "no-cache");
+      res.setHeader("Connection", "keep-alive");
+      Readable.fromWeb(streamResponse as any).pipe(res);

Also applies to: 100-107

🤖 Prompt for AI Agents
packages/mcp/mcp-client-chat/README.md around lines 79-83 (also apply same
changes to 100-107): the REST example currently treats streamResponse as a Node
stream; instead set SSE headers on the Node response (Content-Type:
text/event-stream, Cache-Control: no-cache, Connection: keep-alive) and bridge
the Web ReadableStream to the Node writable by either using
Readable.fromWeb(streamResponse) and piping it to res, or by reading the Web
stream via getReader() and writing chunks to res.write() (formatting each chunk
as SSE events if needed), and ensure you end/close the response when the Web
stream is done and handle reader cancellation/errors.
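Putting the suggestion together, a fixed handler might look like the following sketch (assuming an Express route and that `chat()` resolves to a Web `ReadableStream`; the route path and request shape are illustrative):

```typescript
import { Readable } from "node:stream";

app.post("/chat", async (req, res) => {
  try {
    const streamResponse = await mcpClientChat.chat(req.body.question);

    // SSE headers so the client treats the response as an event stream.
    res.setHeader("Content-Type", "text/event-stream");
    res.setHeader("Cache-Control", "no-cache");
    res.setHeader("Connection", "keep-alive");

    // Bridge the Web ReadableStream onto the Node writable response.
    const nodeStream = Readable.fromWeb(streamResponse as any);
    nodeStream.pipe(res);
    nodeStream.on("error", () => res.end());
  } catch (error) {
    res.status(500).end(String(error));
  }
});
```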

mcpServersConfig: {
@@ -79,3 +110,76 @@ async function main() {

main();
```

## AI SDK usage example

### Using [OpenAI](https://ai-sdk.dev/providers/ai-sdk-providers/openai)

```typescript
import { createMCPClientChat } from "@opentiny/tiny-agent-mcp-client-chat";
import { createOpenAI } from '@ai-sdk/openai';
// Create the openai provider
const openai = createOpenAI({
  apiKey: "<your-openai-api-key>", // API key sent via the Authorization header. Defaults to the OPENAI_API_KEY environment variable.
  baseURL: "https://api.openai.com/v1", // Alternative URL prefix for API calls, e.g. to use a proxy server. Defaults to https://api.openai.com/v1.
  name: "", // Provider name. When using an OpenAI-compatible provider, set this to change the model provider property. Defaults to openai.
  organization: "", // OpenAI organization.
  project: "", // OpenAI project.
  fetch: (input: RequestInfo, init?: RequestInit) => Promise<Response>, // Custom fetch implementation. Defaults to the global fetch function. Can be used as middleware to intercept requests, or to provide a custom fetch for testing.
  headers: { // Custom headers to include in requests.
    'header-name': 'header-value',
  },
});
Comment on lines +118 to +132

⚠️ Potential issue

OpenAI example: remove type signature placeholder for fetch.

Using a TS type annotation as a value will break copy/paste.

-  fetch: (input: RequestInfo, init?: RequestInit) => Promise<Response>, // Custom fetch implementation. Defaults to the global fetch function. Can be used as middleware to intercept requests, or to provide a custom fetch for testing.
+  // fetch: customFetch, // custom fetch implementation (optional)

Optionally show a concrete example above:

// const customFetch = (input: RequestInfo | URL, init?: RequestInit) => fetch(input, init);
🤖 Prompt for AI Agents
In packages/mcp/mcp-client-chat/README.md around lines 118 to 132, the example
uses a TypeScript type annotation as the value for the fetch field which breaks
copy/paste; replace the typed placeholder with either a real function reference
(e.g., a custom fetch implementation or the global fetch) or a commented example
invocation, ensuring the README shows a runnable snippet without TS-only value
syntax, and update the comments to explain the expected signature (input, init)
rather than embedding a type annotation as the value.
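A concrete version of what the review suggests might look like this (the logging wrapper and the `customFetch` name are illustrative, not part of the package):

```typescript
// Illustrative pass-through wrapper: logs each request, then delegates to
// the global fetch. Matches the (input, init) signature the provider expects.
const customFetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
  console.log("OpenAI request:", input);
  return fetch(input, init);
};

const openai = createOpenAI({
  apiKey: "<your-openai-api-key>",
  fetch: customFetch, // a real function value, safe to copy/paste
});
```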


const mcpClientChat = await createMCPClientChat({
llmConfig: {
useSDK: true, // Enable the AI SDK
model: openai("gpt-4o"), // Use an AI SDK model instance
systemPrompt: "You are a helpful assistant with access to tools.",
temperature: 0.7,
maxTokens: 1000,
},
maxIterationSteps: 3,
mcpServersConfig: {
mcpServers: {
"localhost-mcp": {
url: "http://localhost:3000",
headers: {},
timeout: 60
},
},
},
});
```

### Using [DeepSeek](https://ai-sdk.dev/providers/ai-sdk-providers/deepseek)

```.env
DEEPSEEK_API_KEY={your-deepseek-api-key}
```
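With the key in the environment, the default `deepseek` provider picks it up automatically. If you would rather pass the key explicitly, a provider instance can be created instead (a sketch using `createDeepSeek`):

```typescript
import { createDeepSeek } from '@ai-sdk/deepseek';

// Explicit provider instance; the apiKey here overrides DEEPSEEK_API_KEY.
const deepseek = createDeepSeek({
  apiKey: "<your-deepseek-api-key>",
});
```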


```typescript
import { createMCPClientChat } from "@opentiny/tiny-agent-mcp-client-chat";
import { deepseek } from '@ai-sdk/deepseek';

const mcpClientChat = await createMCPClientChat({
llmConfig: {
useSDK: true, // Enable the AI SDK
model: deepseek("deepseek-chat"),
systemPrompt: "You are a helpful assistant with access to tools.",
temperature: 0.6,
maxTokens: 1500,
},
maxIterationSteps: 3,
mcpServersConfig: {
mcpServers: {
"localhost-mcp": {
url: "http://localhost:3000",
headers: {},
timeout: 60
},
},
},
});
```
7 changes: 6 additions & 1 deletion packages/mcp/mcp-client-chat/package.json
@@ -4,16 +4,21 @@
"type": "module",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"engines": { "node": ">=22.12.0" },
"engines": {
"node": ">=18.18.0"
},
"scripts": {
"build": "vite build"
},
"files": [
"dist"
],
"dependencies": {
"@ai-sdk/openai": "^2.0.10",
"@ai-sdk/provider": "^2.0.0",
"@anthropic-ai/sdk": "^0.41.0",
"@modelcontextprotocol/sdk": "^1.11.1",
"ai": "^5.0.10",
"openai": "^4.98.0",
"zod": "^3.24.2"
},
12 changes: 12 additions & 0 deletions packages/mcp/mcp-client-chat/src/ai/ai-instance.ts
@@ -0,0 +1,12 @@
import type { LlmConfig } from '../types/index.js';
import type { BaseAi } from './base-ai.js';

export async function getAiInstance(llmConfig: LlmConfig): Promise<BaseAi> {
if (llmConfig.useSDK) {
const { AiSDK } = await import('./ai-sdk/ai-sdk.js');
return new AiSDK(llmConfig);
}

const { AiRestApi } = await import('./ai-rest-api/ai-rest-api.js');
return new AiRestApi(llmConfig);
}
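For orientation, a hypothetical call site (the URL and model values are illustrative, and the cast stands in for a fully populated config):

```typescript
// useSDK is falsy, so this resolves to an AiRestApi instance;
// with useSDK: true it would lazily load and return AiSDK instead.
const ai = await getAiInstance({
  url: 'https://openrouter.ai/api/v1/chat/completions',
  apiKey: '<your-api-key>',
  model: 'mistralai/mistral-7b-instruct:free',
  systemPrompt: 'You are a helpful assistant.',
} as LlmConfig);
```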
114 changes: 114 additions & 0 deletions packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts
@@ -0,0 +1,114 @@
import type { ChatBody, ChatCompleteResponse, LlmConfig } from '../../types/index.js';
import { Role } from '../../types/index.js';
import { BaseAi } from '../base-ai.js';

type AiRestApiConfig = Extract<LlmConfig, { useSDK?: false | undefined; url: string; apiKey: string; model: string }>;

export class AiRestApi extends BaseAi {
llmConfig: AiRestApiConfig;

constructor(llmConfig: AiRestApiConfig) {
super();

this.llmConfig = llmConfig;
}

async chat(chatBody: ChatBody): Promise<ChatCompleteResponse | Error> {
const { url, apiKey } = this.llmConfig;

try {
const modelId =
typeof chatBody.model === 'string'
? chatBody.model
: ((chatBody.model as any)?.modelId ?? String(chatBody.model));
const normalizedBody = { ...chatBody, model: modelId };
const response = await fetch(url, {
method: 'POST',
headers: {
...(this.llmConfig.headers ?? {}),
Authorization: `Bearer ${apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify(normalizedBody),
signal: this.llmConfig.abortSignal,
});
if (!response.ok) {
return new Error(`HTTP error ${response.status}: ${await response.text()}`);
}

return (await response.json()) as ChatCompleteResponse;
} catch (error) {
console.error('Error calling chat/complete:', error);

return error as Error;
}
}

async chatStream(chatBody: ChatBody): Promise<globalThis.ReadableStream<Uint8Array>> {
const { url, apiKey } = this.llmConfig;

try {
const modelId =
typeof chatBody.model === 'string'
? chatBody.model
: ((chatBody.model as any)?.modelId ?? String(chatBody.model));
const normalizedBody = { ...chatBody, model: modelId };
const response = await fetch(url, {
method: 'POST',
headers: {
...(this.llmConfig.headers ?? {}),
Authorization: `Bearer ${apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({ ...normalizedBody, stream: true }),
signal: this.llmConfig.abortSignal,
});

if (!response.ok) {
const errorText = await response.text();
const errorMessage = `Failed to call chat API! ${errorText}`;
console.error('Failed to call chat API:', errorMessage);
return this.generateErrorStream(errorMessage);
}

if (!response.body) {
return this.generateErrorStream('Response body is empty!');
}

return response.body;
} catch (error) {
console.error('Failed to call streaming chat/complete:', error);

return this.generateErrorStream(`Failed to call chat API! ${error}`);
}
}

protected generateErrorStream(errorMessage: string): ReadableStream<Uint8Array> {
const errorResponse: ChatCompleteResponse = {
id: `chat-error-${Date.now()}`,
object: 'chat.completion.chunk',
created: Math.floor(Date.now() / 1000),
model: this.llmConfig.model,
choices: [
{
finish_reason: 'error',
native_finish_reason: 'error',
delta: {
role: Role.ASSISTANT,
content: errorMessage,
},
},
],
};
const data = `data: ${JSON.stringify(errorResponse)}\n\n`;
const encoder = new TextEncoder();

return new ReadableStream<Uint8Array>({
start(controller) {
controller.enqueue(encoder.encode(data));
controller.enqueue(encoder.encode('data: [DONE]\n\n'));
controller.close();
},
});
}
}
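The stream returned by `chatStream` (including the error stream above) carries OpenAI-style SSE frames. A rough consumer sketch, assuming single-line `data:` events terminated by `data: [DONE]`:

```typescript
// Minimal SSE consumer for the ReadableStream<Uint8Array> from chatStream.
async function printStream(stream: ReadableStream<Uint8Array>): Promise<void> {
  const decoder = new TextDecoder();
  let buffer = '';
  // Web ReadableStreams are async-iterable in Node 18+; cast for older TS lib targets.
  for await (const chunk of stream as unknown as AsyncIterable<Uint8Array>) {
    buffer += decoder.decode(chunk, { stream: true });
    const events = buffer.split('\n\n');
    buffer = events.pop() ?? ''; // keep any partial event for the next chunk
    for (const event of events) {
      const payload = event.replace(/^data: /, '').trim();
      if (!payload || payload === '[DONE]') continue;
      const parsed = JSON.parse(payload);
      process.stdout.write(parsed.choices?.[0]?.delta?.content ?? '');
    }
  }
}
```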
1 change: 1 addition & 0 deletions packages/mcp/mcp-client-chat/src/ai/ai-rest-api/index.ts
@@ -0,0 +1 @@
export * from './ai-rest-api.js';