
Commit 5226aa6

docs: Add Chinese docs for AI plugin docs (#12724)
1 parent 3c79831 commit 5226aa6

13 files changed: +3431 −4 lines

docs/en/latest/plugins/ai-aliyun-content-moderation.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -1,11 +1,11 @@
 ---
-title: ai-aws-content-moderation
+title: ai-aliyun-content-moderation
 keywords:
 - Apache APISIX
 - API Gateway
 - Plugin
 - ai-aliyun-content-moderation
-description: This document contains information about the Apache APISIX ai-aws-content-moderation Plugin.
+description: This document contains information about the Apache APISIX ai-aliyun-content-moderation Plugin.
 ---
 
 <!--
@@ -29,7 +29,7 @@ description: This document contains information about the Apache APISIX ai-aws-c
 
 ## Description
 
-The ai-aliyun-content-moderation plugin integrates with Aliyun's content moderation service to check both request and response content for inappropriate material when working with LLMs. It supports both real-time streaming checks and final packet moderation.
+The `ai-aliyun-content-moderation` plugin integrates with Aliyun's content moderation service to check both request and response content for inappropriate material when working with LLMs. It supports both real-time streaming checks and final packet moderation.
 
 This plugin must be used in routes that utilize the ai-proxy or ai-proxy-multi plugins.
```

docs/en/latest/plugins/ai-proxy-multi.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -35,7 +35,7 @@ description: The ai-proxy-multi Plugin extends the capabilities of ai-proxy with
 
 ## Description
 
-The `ai-proxy-multi` Plugin simplifies access to LLM and embedding models by transforming Plugin configurations into the designated request format for OpenAI, DeepSeek, Azure, AIMLAPI, and other OpenAI-compatible APIs. It extends the capabilities of [`ai-proxy-multi`](./ai-proxy.md) with load balancing, retries, fallbacks, and health checks.
+The `ai-proxy-multi` Plugin simplifies access to LLM and embedding models by transforming Plugin configurations into the designated request format for OpenAI, DeepSeek, Azure, AIMLAPI, and other OpenAI-compatible APIs. It extends the capabilities of [`ai-proxy`](./ai-proxy.md) with load balancing, retries, fallbacks, and health checks.
 
 In addition, the Plugin also supports logging LLM request information in the access log, such as token usage, model, time to the first response, and more.
```
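
As context for the load balancing, retries, fallbacks, and health checks mentioned in the changed description, a route using `ai-proxy-multi` with two weighted instances might look roughly like the sketch below. This example is not part of this commit; the `instances`, `weight`, `auth`, and `options` fields are assumptions based on the ai-proxy-multi plugin reference and should be verified there, and `$ADMIN_API_KEY`, `$OPENAI_KEY`, and `$DEEPSEEK_KEY` are placeholder variables.

```shell
# Hypothetical sketch (verify field names against the ai-proxy-multi reference):
# send roughly 80% of traffic to an OpenAI instance and 20% to a DeepSeek instance.
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "uri": "/v1/chat/completions",
    "plugins": {
      "ai-proxy-multi": {
        "instances": [
          {
            "name": "openai-instance",
            "provider": "openai",
            "weight": 8,
            "auth": {
              "header": {
                "Authorization": "Bearer '"$OPENAI_KEY"'"
              }
            },
            "options": {
              "model": "gpt-4"
            }
          },
          {
            "name": "deepseek-instance",
            "provider": "deepseek",
            "weight": 2,
            "auth": {
              "header": {
                "Authorization": "Bearer '"$DEEPSEEK_KEY"'"
              }
            },
            "options": {
              "model": "deepseek-chat"
            }
          }
        ]
      }
    }
  }'
```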

docs/zh/latest/config.json

Lines changed: 16 additions & 0 deletions

```diff
@@ -56,6 +56,22 @@
       "type": "category",
       "label": "插件",
       "items": [
+        {
+          "type": "category",
+          "label": "人工智能",
+          "items": [
+            "plugins/ai-proxy",
+            "plugins/ai-proxy-multi",
+            "plugins/ai-rate-limiting",
+            "plugins/ai-prompt-guard",
+            "plugins/ai-aws-content-moderation",
+            "plugins/ai-aliyun-content-moderation",
+            "plugins/ai-prompt-decorator",
+            "plugins/ai-prompt-template",
+            "plugins/ai-rag",
+            "plugins/ai-request-rewrite"
+          ]
+        },
         {
           "type": "category",
           "label": "普通插件",
```

Lines changed: 129 additions & 0 deletions
@@ -0,0 +1,129 @@
---
title: ai-aliyun-content-moderation
keywords:
- Apache APISIX
- API Gateway
- Plugin
- ai-aliyun-content-moderation
description: This document contains information about the Apache APISIX ai-aliyun-content-moderation Plugin.
---

<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->

## Description

The `ai-aliyun-content-moderation` plugin integrates with Aliyun's content moderation service to check both request and response content for inappropriate material when working with LLMs. It supports both real-time streaming checks and final packet moderation.

This plugin must be used in routes that utilize the `ai-proxy` or `ai-proxy-multi` plugin.

## Plugin Attributes

| **Field**                    | **Required** | **Type** | **Description**                                                                                      |
| ---------------------------- | ------------ | -------- | ---------------------------------------------------------------------------------------------------- |
| endpoint                     | Yes          | String   | Aliyun service endpoint URL.                                                                           |
| region_id                    | Yes          | String   | Aliyun region identifier.                                                                              |
| access_key_id                | Yes          | String   | Aliyun access key ID.                                                                                  |
| access_key_secret            | Yes          | String   | Aliyun access key secret.                                                                              |
| check_request                | No           | Boolean  | Enable request content moderation. Default: `true`.                                                   |
| check_response               | No           | Boolean  | Enable response content moderation. Default: `false`.                                                 |
| stream_check_mode            | No           | String   | Streaming moderation mode. Default: `"final_packet"`. Valid values: `["realtime", "final_packet"]`.   |
| stream_check_cache_size      | No           | Integer  | Maximum number of characters per moderation batch in realtime mode. Default: `128`. Must be `>= 1`.   |
| stream_check_interval        | No           | Number   | Interval in seconds between batch checks in realtime mode. Default: `3`. Must be `>= 0.1`.            |
| request_check_service        | No           | String   | Aliyun service used for request moderation. Default: `"llm_query_moderation"`.                        |
| request_check_length_limit   | No           | Number   | Maximum number of characters per request moderation chunk. Default: `2000`.                           |
| response_check_service       | No           | String   | Aliyun service used for response moderation. Default: `"llm_response_moderation"`.                    |
| response_check_length_limit  | No           | Number   | Maximum number of characters per response moderation chunk. Default: `5000`.                          |
| risk_level_bar               | No           | String   | Threshold for rejecting content. Default: `"high"`. Valid values: `["none", "low", "medium", "high", "max"]`. |
| deny_code                    | No           | Number   | HTTP status code returned for rejected content. Default: `200`.                                       |
| deny_message                 | No           | String   | Custom message returned for rejected content. Default: `-`.                                           |
| timeout                      | No           | Integer  | Request timeout in milliseconds. Default: `10000`. Must be `>= 1`.                                    |
| ssl_verify                   | No           | Boolean  | Enable SSL certificate verification. Default: `true`.                                                 |
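
To illustrate the streaming-related fields above, here is a hedged sketch (not part of this commit) of the plugin's entry under `plugins`, switching response moderation from the default `final_packet` mode to `realtime` batch checking; the endpoint and credential values are placeholders consistent with the usage example below:

```json
{
  "ai-aliyun-content-moderation": {
    "endpoint": "https://green.cn-hangzhou.aliyuncs.com",
    "region_id": "cn-hangzhou",
    "access_key_id": "your-aliyun-access-key-id",
    "access_key_secret": "your-aliyun-access-key-secret",
    "check_request": true,
    "check_response": true,
    "stream_check_mode": "realtime",
    "stream_check_cache_size": 128,
    "stream_check_interval": 3,
    "risk_level_bar": "high",
    "deny_code": 400
  }
}
```

Per the table above, `realtime` mode moderates streamed content in batches of up to `stream_check_cache_size` characters, checked every `stream_check_interval` seconds.
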
## Example Usage

First, initialize these shell variables:

```shell
ADMIN_API_KEY=edd1c9f034335f136f87ad84b625c8f1
ALIYUN_ACCESS_KEY_ID=your-aliyun-access-key-id
ALIYUN_ACCESS_KEY_SECRET=your-aliyun-access-key-secret
ALIYUN_REGION=cn-hangzhou
ALIYUN_ENDPOINT=https://green.cn-hangzhou.aliyuncs.com
OPENAI_KEY=your-openai-api-key
```

Create a route with the `ai-aliyun-content-moderation` and `ai-proxy` plugins:

```shell
curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "uri": "/v1/chat/completions",
    "plugins": {
      "ai-proxy": {
        "provider": "openai",
        "auth": {
          "header": {
            "Authorization": "Bearer '"$OPENAI_KEY"'"
          }
        },
        "override": {
          "endpoint": "http://localhost:6724/v1/chat/completions"
        }
      },
      "ai-aliyun-content-moderation": {
        "endpoint": "'"$ALIYUN_ENDPOINT"'",
        "region_id": "'"$ALIYUN_REGION"'",
        "access_key_id": "'"$ALIYUN_ACCESS_KEY_ID"'",
        "access_key_secret": "'"$ALIYUN_ACCESS_KEY_SECRET"'",
        "risk_level_bar": "high",
        "check_request": true,
        "check_response": true,
        "deny_code": 400,
        "deny_message": "您的请求违反了内容政策"
      }
    }
  }'
```

The `ai-proxy` plugin is used here because it simplifies access to the LLM. However, you can also configure the LLM in the upstream configuration instead.

Now send a request:

```shell
curl http://127.0.0.1:9080/v1/chat/completions -i \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "I want to kill you"}
    ],
    "stream": false
  }'
```

The request will then be blocked, with an error like the following:

```text
HTTP/1.1 400 Bad Request
Content-Type: application/json

{"id":"chatcmpl-123","object":"chat.completion","model":"gpt-3.5-turbo","choices":[{"index":0,"message":{"role":"assistant","content":"您的请求违反了内容政策"},"finish_reason":"stop"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
```
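
For streamed responses, the same route can be exercised with `"stream": true`. Per the description above, the plugin supports both final packet moderation (the default) and real-time streaming checks; a route configured with `"stream_check_mode": "realtime"` (see the sketch after the attributes table) checks batches as they are produced. A hedged follow-up request, not part of this commit:

```shell
# Hypothetical follow-up request: exercise response moderation on a streamed completion.
curl http://127.0.0.1:9080/v1/chat/completions -i \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "Tell me a short story"}
    ],
    "stream": true
  }'
```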
