Commit 03eae9d

Add Chinese docs for AI plugin docs

1 parent 2c041a3 commit 03eae9d

12 files changed: +3417 −4 lines

docs/en/latest/plugins/ai-aliyun-content-moderation.md (3 additions, 3 deletions)

```diff
@@ -1,11 +1,11 @@
 ---
-title: ai-aws-content-moderation
+title: ai-aliyun-content-moderation
 keywords:
 - Apache APISIX
 - API Gateway
 - Plugin
 - ai-aliyun-content-moderation
-description: This document contains information about the Apache APISIX ai-aws-content-moderation Plugin.
+description: This document contains information about the Apache APISIX ai-aliyun-content-moderation Plugin.
 ---

@@ -29,7 +29,7 @@ description: This document contains information about the Apache APISIX ai-aws-c

 ## Description

-The ai-aliyun-content-moderation plugin integrates with Aliyun's content moderation service to check both request and response content for inappropriate material when working with LLMs. It supports both real-time streaming checks and final packet moderation.
+The `ai-aliyun-content-moderation` plugin integrates with Aliyun's content moderation service to check both request and response content for inappropriate material when working with LLMs. It supports both real-time streaming checks and final packet moderation.

 This plugin must be used in routes that utilize the ai-proxy or ai-proxy-multi plugins.
```

docs/en/latest/plugins/ai-proxy-multi.md (1 addition, 1 deletion)

```diff
@@ -35,7 +35,7 @@ description: The ai-proxy-multi Plugin extends the capabilities of ai-proxy with

 ## Description

-The `ai-proxy-multi` Plugin simplifies access to LLM and embedding models by transforming Plugin configurations into the designated request format for OpenAI, DeepSeek, Azure, AIMLAPI, and other OpenAI-compatible APIs. It extends the capabilities of [`ai-proxy-multi`](./ai-proxy.md) with load balancing, retries, fallbacks, and health checks.
+The `ai-proxy-multi` Plugin simplifies access to LLM and embedding models by transforming Plugin configurations into the designated request format for OpenAI, DeepSeek, Azure, AIMLAPI, and other OpenAI-compatible APIs. It extends the capabilities of [`ai-proxy`](./ai-proxy.md) with load balancing, retries, fallbacks, and health checks.

 In addition, the Plugin also supports logging LLM request information in the access log, such as token usage, model, time to the first response, and more.
```

New file (131 additions, 0 deletions):
---
title: ai-aliyun-content-moderation
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-aliyun-content-moderation
- Aliyun
- Content Moderation
description: This document contains information about the Apache APISIX ai-aliyun-content-moderation Plugin.
---

<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
## Description

The `ai-aliyun-content-moderation` plugin integrates with Aliyun's content moderation service to check both request and response content for inappropriate material when interacting with large language models (LLMs). It supports two modes: real-time streaming checks and final packet moderation.

This plugin must be used on routes that also use the `ai-proxy` or `ai-proxy-multi` plugin.
## Plugin Attributes

| **Field** | **Required** | **Type** | **Description** |
| ---------------------------- | ------------ | -------- | --------------- |
| endpoint | Yes | String | Aliyun service endpoint URL |
| region_id | Yes | String | Aliyun region identifier |
| access_key_id | Yes | String | Aliyun access key ID |
| access_key_secret | Yes | String | Aliyun access key secret |
| check_request | No | Boolean | Enable request content moderation. Default: `true` |
| check_response | No | Boolean | Enable response content moderation. Default: `false` |
| stream_check_mode | No | String | Streaming moderation mode. Default: `"final_packet"`. Valid values: `["realtime", "final_packet"]` |
| stream_check_cache_size | No | Integer | Maximum number of characters per moderation batch in realtime mode. Default: `128`. Must be `>= 1` |
| stream_check_interval | No | Number | Interval in seconds between batch checks in realtime mode. Default: `3`. Must be `>= 0.1` |
| request_check_service | No | String | Aliyun service used for request moderation. Default: `"llm_query_moderation"` |
| request_check_length_limit | No | Number | Maximum number of characters per request moderation chunk. Default: `2000` |
| response_check_service | No | String | Aliyun service used for response moderation. Default: `"llm_response_moderation"` |
| response_check_length_limit | No | Number | Maximum number of characters per response moderation chunk. Default: `5000` |
| risk_level_bar | No | String | Threshold at which content is rejected. Default: `"high"`. Valid values: `["none", "low", "medium", "high", "max"]` |
| deny_code | No | Number | HTTP status code returned for rejected content. Default: `200` |
| deny_message | No | String | Custom message returned for rejected content. Default: `-` |
| timeout | No | Integer | Request timeout in milliseconds. Default: `10000`. Must be `>= 1` |
| ssl_verify | No | Boolean | Enable SSL certificate verification. Default: `true` |
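As a sketch of how these attributes fit together, a minimal plugin configuration that moderates only request content might look like the fragment below. All values are placeholders, and every omitted field falls back to its documented default:

```json
{
  "ai-aliyun-content-moderation": {
    "endpoint": "https://green.cn-hangzhou.aliyuncs.com",
    "region_id": "cn-hangzhou",
    "access_key_id": "your-access-key-id",
    "access_key_secret": "your-access-key-secret",
    "check_request": true,
    "risk_level_bar": "medium",
    "deny_code": 400
  }
}
```

Note that `deny_code` defaults to `200`, so an explicit non-2xx value such as `400` is needed if clients should be able to distinguish rejected requests by status code.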
## Usage Example

First, initialize these shell variables:

```shell
ADMIN_API_KEY=edd1c9f034335f136f87ad84b625c8f1
ALIYUN_ACCESS_KEY_ID=your-aliyun-access-key-id
ALIYUN_ACCESS_KEY_SECRET=your-aliyun-access-key-secret
ALIYUN_REGION=cn-hangzhou
ALIYUN_ENDPOINT=https://green.cn-hangzhou.aliyuncs.com
OPENAI_KEY=your-openai-api-key
```
73+
74+
创建一个带有 `ai-aliyun-content-moderation``ai-proxy` 插件的路由:
75+
76+
```shell
77+
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
78+
-H "X-API-KEY: ${ADMIN_API_KEY}" \
79+
-d '{
80+
"uri": "/v1/chat/completions",
81+
"plugins": {
82+
"ai-proxy": {
83+
"provider": "openai",
84+
"auth": {
85+
"header": {
86+
"Authorization": "Bearer '"$OPENAI_KEY"'"
87+
}
88+
},
89+
"override": {
90+
"endpoint": "http://localhost:6724/v1/chat/completions"
91+
}
92+
},
93+
"ai-aliyun-content-moderation": {
94+
"endpoint": "'"$ALIYUN_ENDPOINT"'",
95+
"region_id": "'"$ALIYUN_REGION"'",
96+
"access_key_id": "'"$ALIYUN_ACCESS_KEY_ID"'",
97+
"access_key_secret": "'"$ALIYUN_ACCESS_KEY_SECRET"'",
98+
"risk_level_bar": "high",
99+
"check_request": true,
100+
"check_response": true,
101+
"deny_code": 400,
102+
"deny_message": "您的请求违反了内容政策"
103+
}
104+
}
105+
}'
106+
```
107+
108+
这里使用 `ai-proxy` 插件是因为它简化了对 LLM 的访问。不过,您也可以在上游配置中配置 LLM。
109+
110+
现在发送一个请求:
111+
```shell
curl http://127.0.0.1:9080/v1/chat/completions -i \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "I want to kill you"}
    ],
    "stream": false
  }'
```
123+
124+
然后请求将被阻止,并返回如下错误:
125+
126+
```text
127+
HTTP/1.1 400 Bad Request
128+
Content-Type: application/json
129+
130+
{"id":"chatcmpl-123","object":"chat.completion","model":"gpt-3.5-turbo","choices":[{"index":0,"message":{"role":"assistant","content":"您的请求违反了内容政策"},"finish_reason":"stop"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
131+
```
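For streaming responses, the default `final_packet` mode moderates the assembled response only once at the end of the stream. Based solely on the attribute table above, a hedged sketch of a configuration that instead checks the stream in batches as chunks arrive might look like this (the batch size and interval values are illustrative, not recommendations):

```json
{
  "ai-aliyun-content-moderation": {
    "endpoint": "https://green.cn-hangzhou.aliyuncs.com",
    "region_id": "cn-hangzhou",
    "access_key_id": "your-aliyun-access-key-id",
    "access_key_secret": "your-aliyun-access-key-secret",
    "check_response": true,
    "stream_check_mode": "realtime",
    "stream_check_cache_size": 256,
    "stream_check_interval": 1,
    "deny_code": 400
  }
}
```

Per the table, `stream_check_cache_size` caps the characters per moderation batch and `stream_check_interval` spaces the batch checks, so larger values reduce calls to the Aliyun service at the cost of later detection within a stream.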
