feat: support ai-sdk, add stream-switch #62
Open

yy-wow wants to merge 20 commits into `main` from `feat/merge-poc-changes`
Changes from all commits (20)
Commits (20, all by yy-wow):

- b9d040d feat: import ai-sdk
- af39daa docs: add new feature
- 10ed8fc fix: remove useless comments
- f93bb1a fix: review
- e9cb4c7 fix: review
- 93fcc75 fix: review
- 92ffb0b refactor: enhance jsonSchemaToZod function with comprehensive type ha…
- 2281cdb chore: update package.json to lower Node.js version requirement and a…
- 9947485 chore: update package.json to require Node.js 22.12.0, add testing sc…
- e8786c7 feat: add abort signal to AiRestApi requests for better request manag…
- 3f3edab fix: reorder response body check for improved error handling in AiRes…
- 5b4e8e1 refactor: normalize model ID in request body for AiRestApi to ensure …
- 3f01318 refactor: reorganize and rename function modules for improved clarity…
- d2f1b4e chore: lower Node.js version requirement to 18.18.0 in package.json a…
- 6e0ab9a refactor: enhance AiSDK chatStream and openaiChunkGenerator methods t…
- 577264c refactor: improve object literal schema validation by implementing st…
- 266b0cb refactor: update README for OpenAI and DeepSeek configurations, impro…
- efe1154 refactor: clean up whitespace in mcp-client-chat and update README wi…
- 5101d47 feat: add message transformation utility for AiSDK and update exports…
- 91bec05 refactor: streamline chunk handling in ai-SDK transformer using switc…
@@ -2,6 +2,14 @@

A TypeScript-based Model Context Protocol (MCP) client implementation with LLM integration.

## Features

- 🚀 **Dual-mode support**: both the traditional REST API mode and the AI SDK mode
- 🔧 **Tool calling**: both the Function Calling and ReAct agent strategies
- 📡 **Streaming responses**: real-time streamed output
- 🔌 **Flexible configuration**: multiple LLM providers and custom transport layers
- 🛠️ **MCP integration**: full MCP protocol support; can connect to multiple MCP servers

- **Installation**

```bash
npm install @opentiny/tiny-agent-mcp-client-chat --save
```

@@ -10,12 +18,28 @@

- **Parameters**
- `agentStrategy(string, optional)`: agent strategy, either `'Function Calling'` or `'ReAct'`; defaults to `'Function Calling'`

- `llmConfig(object)`: LLM configuration

  - `useSDK(boolean, optional)`: whether to use the AI SDK; defaults to `false`
  - `url(string)`: AI platform API endpoint; may be omitted when using the AI SDK
  - `apiKey(string)`: API key for the AI platform
  - `model(string | LanguageModelV2)`: model name, or an AI SDK model instance
  - `systemPrompt(string)`: system prompt
  - `summarySystemPrompt(string, optional)`: summarization prompt
  - `streamSwitch(boolean, optional)`: whether to use streaming responses; defaults to `true`
  - `maxTokens(number, optional)`: maximum number of generated tokens
  - `temperature(number, optional)`: temperature, controls randomness
  - `topP(number, optional)`: top-p sampling parameter
  - `topK(number, optional)`: top-k sampling parameter
  - `presencePenalty(number, optional)`: presence penalty
  - `frequencyPenalty(number, optional)`: frequency penalty
  - `stopSequences(string[], optional)`: stop sequences
  - `seed(number, optional)`: random seed
  - `maxRetries(number, optional)`: maximum number of retries
  - `abortSignal(AbortSignal, optional)`: abort signal
  - `headers(Record<string, string>, optional)`: custom request headers

- `maxIterationSteps(number)`: maximum number of agent iteration steps
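The `useSDK` flag effectively splits `llmConfig` into two shapes: REST mode needs `url`/`apiKey` and a string `model`, while SDK mode takes a model instance. A minimal sketch of that split as a TypeScript discriminated union (the type and function names here are hypothetical; the package's real `LlmConfig` type may differ):

```typescript
// Hypothetical narrowing of llmConfig on the useSDK flag.
// REST mode requires url/apiKey; SDK mode takes a model instance instead.
type RestLlmConfig = {
  useSDK?: false;
  url: string;            // AI platform API endpoint
  apiKey: string;         // API key for the platform
  model: string;          // model name
  systemPrompt: string;
  streamSwitch?: boolean; // defaults to true
};

type SdkLlmConfig = {
  useSDK: true;
  model: object;          // an AI SDK LanguageModelV2 instance
  systemPrompt: string;
  streamSwitch?: boolean;
};

type LlmConfigSketch = RestLlmConfig | SdkLlmConfig;

// Narrow on the flag to pick the backend.
function pickBackend(config: LlmConfigSketch): "ai-sdk" | "rest-api" {
  return config.useSDK ? "ai-sdk" : "rest-api";
}

console.log(
  pickBackend({
    url: "https://api.example.com/v1/chat/completions",
    apiKey: "sk-test",
    model: "test-model",
    systemPrompt: "You are helpful.",
  }),
); // → rest-api
```

Modeling the config as a union like this lets the compiler reject a REST-mode config that is missing `url` or `apiKey`.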
@@ -28,9 +52,12 @@

  - `url(string)`: mcp-server connection address
  - `headers(object)`: request headers
  - `timeout(number)`: timeout duration
  - `customTransport(Transport | function, optional)`: custom transport layer

- **Examples**

## REST API example
```typescript
import express from "express";
import cors from "cors";

// @@ -48,6 +75,10 @@ async function main() { (elided context)
      apiKey: "<your-api-key>",
      model: "mistralai/mistral-7b-instruct:free",
      systemPrompt: "You are a helpful assistant with access to tools.",
      summarySystemPrompt: "Please provide a brief summary!",
      streamSwitch: true,
      temperature: 0.7,
      maxTokens: 1000,
    },
    maxIterationSteps: 3,
    mcpServersConfig: {

// @@ -79,3 +110,76 @@ async function main() { (elided context)
main();
```
## AI SDK examples

### Using [OpenAI](https://ai-sdk.dev/providers/ai-sdk-providers/openai)
```typescript
import { createMCPClientChat } from "@opentiny/tiny-agent-mcp-client-chat";
import { createOpenAI } from '@ai-sdk/openai';

// Optional: a custom fetch for interception or testing, e.g.
// const customFetch = (input: RequestInfo | URL, init?: RequestInit) => fetch(input, init);

// Create the OpenAI provider
const openai = createOpenAI({
  apiKey: "<your-openai-api-key>", // API key sent via the Authorization header; defaults to the OPENAI_API_KEY environment variable
  baseURL: "https://api.openai.com/v1", // Alternate URL prefix for API calls, e.g. when using a proxy server; defaults to https://api.openai.com/v1
  name: "", // Provider name; set this when using an OpenAI-compatible provider to change the model provider property; defaults to openai
  organization: "", // OpenAI organization
  project: "", // OpenAI project
  // fetch: customFetch, // Custom fetch implementation; defaults to the global fetch. Useful as middleware to intercept requests, or for testing
  headers: { // Custom headers to include in requests
    'header-name': 'header-value',
  },
});
const mcpClientChat = await createMCPClientChat({
  llmConfig: {
    useSDK: true, // Enable the AI SDK
    model: openai("gpt-4o"), // Use an AI SDK model
    systemPrompt: "You are a helpful assistant with access to tools.",
    temperature: 0.7,
    maxTokens: 1000,
  },
  maxIterationSteps: 3,
  mcpServersConfig: {
    mcpServers: {
      "localhost-mcp": {
        url: "http://localhost:3000",
        headers: {},
        timeout: 60
      },
    },
  },
});
```
### Using [DeepSeek](https://ai-sdk.dev/providers/ai-sdk-providers/deepseek)

```env
DEEPSEEK_API_KEY={your-deepseek-api-key}
```
```typescript
import { createMCPClientChat } from "@opentiny/tiny-agent-mcp-client-chat";
import { deepseek } from '@ai-sdk/deepseek';

const mcpClientChat = await createMCPClientChat({
  llmConfig: {
    useSDK: true, // Enable the AI SDK
    model: deepseek("deepseek-chat"),
    systemPrompt: "You are a helpful assistant with access to tools.",
    temperature: 0.6,
    maxTokens: 1500,
  },
  maxIterationSteps: 3,
  mcpServersConfig: {
    mcpServers: {
      "localhost-mcp": {
        url: "http://localhost:3000",
        headers: {},
        timeout: 60
      },
    },
  },
});
```
@@ -0,0 +1,12 @@
```typescript
import type { LlmConfig } from '../types/index.js';
import type { BaseAi } from './base-ai.js';

export async function getAiInstance(llmConfig: LlmConfig): Promise<BaseAi> {
  if (llmConfig.useSDK) {
    const { AiSDK } = await import('./ai-sdk/ai-sdk.js');
    return new AiSDK(llmConfig);
  }

  const { AiRestApi } = await import('./ai-rest-api/ai-rest-api.js');
  return new AiRestApi(llmConfig);
}
```
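The factory defers loading each backend via dynamic `import()`, so only the selected implementation is ever loaded. A standalone sketch of the same lazy-selection pattern, using stand-in classes since the real `AiSDK`/`AiRestApi` modules are not reproduced here:

```typescript
// Stand-ins for the real backends, to illustrate the factory shape only.
abstract class BaseAiSketch {
  abstract name(): string;
}

class AiSdkSketch extends BaseAiSketch {
  name() { return "ai-sdk"; }
}

class AiRestApiSketch extends BaseAiSketch {
  name() { return "rest-api"; }
}

// Mirrors getAiInstance: branch on useSDK, construct the matching backend.
async function getAiInstanceSketch(config: { useSDK?: boolean }): Promise<BaseAiSketch> {
  if (config.useSDK) {
    // In the real code: const { AiSDK } = await import('./ai-sdk/ai-sdk.js');
    return new AiSdkSketch();
  }
  // In the real code: const { AiRestApi } = await import('./ai-rest-api/ai-rest-api.js');
  return new AiRestApiSketch();
}

async function main() {
  const ai = await getAiInstanceSketch({ useSDK: true });
  console.log(ai.name()); // → ai-sdk
}
main();
```

Because the factory is `async`, callers must `await` it, which is what allows the `import()` calls to stay lazy.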
packages/mcp/mcp-client-chat/src/ai/ai-rest-api/ai-rest-api.ts (114 additions, 0 deletions)
@@ -0,0 +1,114 @@
```typescript
import type { ChatBody, ChatCompleteResponse, LlmConfig } from '../../types/index.js';
import { Role } from '../../types/index.js';
import { BaseAi } from '../base-ai.js';

type AiRestApiConfig = Extract<LlmConfig, { useSDK?: false | undefined; url: string; apiKey: string; model: string }>;

export class AiRestApi extends BaseAi {
  llmConfig: AiRestApiConfig;

  constructor(llmConfig: AiRestApiConfig) {
    super();

    this.llmConfig = llmConfig;
  }

  async chat(chatBody: ChatBody): Promise<ChatCompleteResponse | Error> {
    const { url, apiKey } = this.llmConfig;

    try {
      const modelId =
        typeof chatBody.model === 'string'
          ? chatBody.model
          : ((chatBody.model as any)?.modelId ?? String(chatBody.model));
      const normalizedBody = { ...chatBody, model: modelId };
      const response = await fetch(url, {
        method: 'POST',
        headers: {
          ...(this.llmConfig.headers ?? {}),
          Authorization: `Bearer ${apiKey}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify(normalizedBody),
        signal: this.llmConfig.abortSignal,
      });
      if (!response.ok) {
        return new Error(`HTTP error ${response.status}: ${await response.text()}`);
      }

      return (await response.json()) as ChatCompleteResponse;
    } catch (error) {
      console.error('Error calling chat/complete:', error);

      return error as Error;
    }
  }

  async chatStream(chatBody: ChatBody): Promise<globalThis.ReadableStream<Uint8Array>> {
    const { url, apiKey } = this.llmConfig;

    try {
      const modelId =
        typeof chatBody.model === 'string'
          ? chatBody.model
          : ((chatBody.model as any)?.modelId ?? String(chatBody.model));
      const normalizedBody = { ...chatBody, model: modelId };
      const response = await fetch(url, {
        method: 'POST',
        headers: {
          ...(this.llmConfig.headers ?? {}),
          Authorization: `Bearer ${apiKey}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ ...normalizedBody, stream: true }),
        signal: this.llmConfig.abortSignal,
      });

      if (!response.ok) {
        const errorText = await response.text();
        const errorMessage = `Failed to call chat API! ${errorText}`;
        console.error('Failed to call chat API:', errorMessage);
        return this.generateErrorStream(errorMessage);
      }

      if (!response.body) {
        return this.generateErrorStream('Response body is empty!');
      }

      return response.body;
    } catch (error) {
      console.error('Failed to call streaming chat/complete:', error);

      return this.generateErrorStream(`Failed to call chat API! ${error}`);
    }
  }

  protected generateErrorStream(errorMessage: string): ReadableStream<Uint8Array> {
    const errorResponse: ChatCompleteResponse = {
      id: `chat-error-${Date.now()}`,
      object: 'chat.completion.chunk',
      created: Math.floor(Date.now() / 1000),
      model: this.llmConfig.model,
      choices: [
        {
          finish_reason: 'error',
          native_finish_reason: 'error',
          delta: {
            role: Role.ASSISTANT,
            content: errorMessage,
          },
        },
      ],
    };
    const data = `data: ${JSON.stringify(errorResponse)}\n\n`;
    const encoder = new TextEncoder();

    return new ReadableStream<Uint8Array>({
      start(controller) {
        controller.enqueue(encoder.encode(data));
        controller.enqueue(encoder.encode('data: [DONE]\n\n'));
        controller.close();
      },
    });
  }
}
```
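Both `chat` and `chatStream` repeat the model-ID normalization inline. One way it could be factored into a helper (a sketch; `modelId` is the property the class already probes for on AI SDK model instances):

```typescript
// Extract a plain string model ID from either a string or an
// AI SDK model object exposing a `modelId` property.
function normalizeModelId(model: unknown): string {
  if (typeof model === "string") {
    return model;
  }
  return (model as { modelId?: string })?.modelId ?? String(model);
}

console.log(normalizeModelId("gpt-4o"));                     // → gpt-4o
console.log(normalizeModelId({ modelId: "deepseek-chat" })); // → deepseek-chat
```

Centralizing the conversion keeps the request body a plain string `model` field regardless of how the caller configured `llmConfig.model`.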
@@ -0,0 +1 @@

```typescript
export * from './ai-rest-api.js';
```
> 🛠️ Refactor suggestion: REST example: bridge the Web `ReadableStream` to Node and set SSE headers. `streamResponse` is a Web stream; `res` is a Node writable. Also set SSE headers. Also applies to lines 100-107.
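One way to act on this suggestion, assuming an Express-style handler where `streamResponse` is the Web `ReadableStream` returned by `chatStream` (a sketch; the `pipeSse` name and the stand-in sink are hypothetical, and Node 18+ is assumed for `Readable.fromWeb`):

```typescript
import { Readable, Writable } from "node:stream";

// Send a Web ReadableStream (as returned by chatStream) to a Node
// writable such as an Express `res`, setting SSE headers first.
function pipeSse(
  streamResponse: ReadableStream<Uint8Array>,
  res: Writable & { setHeader?: (name: string, value: string) => void },
): void {
  res.setHeader?.("Content-Type", "text/event-stream");
  res.setHeader?.("Cache-Control", "no-cache");
  res.setHeader?.("Connection", "keep-alive");

  // Node 18+: convert the Web stream into a Node Readable and pipe it.
  Readable.fromWeb(streamResponse as any).pipe(res);
}

// Demo with an in-memory writable standing in for `res`.
const chunks: string[] = [];
const sink = new Writable({
  write(chunk, _enc, cb) {
    chunks.push(chunk.toString());
    cb();
  },
});

const demoStream = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("data: hello\n\n"));
    controller.close();
  },
});

pipeSse(demoStream, sink);
sink.on("finish", () => console.log(chunks.join(""))); // prints the SSE frame
```

`pipe` also ends the destination when the source closes, which matches the `data: [DONE]` termination that `generateErrorStream` emits.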