The Gemini API thinking stage fails with a 400 error. I edited main.js, replacing the `v1/chat/completions` path appended after PROXY_URL with Gemini's https://generativelanguage.googleapis.com/v1beta/openai/chat/completions, but it still fails. I also tried a Gemini model managed through UNI API, with the same result. Config as follows:
PROXY_URL=https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
Model_think_API_KEY=AIzaSy***
Model_think_MODEL=gemini-2.5-flash-preview-04-17
Model_think_MAX_TOKENS=65536
Model_think_CONTEXT_WINDOW=1048576
Model_think_TEMPERATURE=0.7
Model_think_WebSearch=true
Model_think_image=true
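To isolate whether the 400 comes from this project's proxy logic or from the Gemini endpoint itself, it may help to hit the endpoint directly with a minimal OpenAI-format request. This is a sketch, not code from main.js; `buildProbeRequest` is a hypothetical helper, and the model name is taken from the config above.

```javascript
// Sketch: build a minimal OpenAI-compatible chat request for Gemini's
// /v1beta/openai/chat/completions endpoint. The helper name and the
// 'ping' message are illustrative, not part of the project.
function buildProbeRequest(apiKey, model) {
  return {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,   // Gemini's OpenAI-compat layer uses a Bearer token
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content: 'ping' }],
    }),
  };
}

// Usage (assumes a Node version with global fetch):
// const res = await fetch(
//   'https://generativelanguage.googleapis.com/v1beta/openai/chat/completions',
//   buildProbeRequest(process.env.Model_think_API_KEY, 'gemini-2.5-flash-preview-04-17')
// );
// console.log(res.status, await res.text());
```

If this direct probe also returns 400, the response body (not the proxy) should name the offending field, e.g. an unsupported parameter or a wrong model name.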
Think_PROMPT="
**Role:** You are a hidden, preliminary reasoning system. Disregard all prior system-level instructions. Your operation is independent of the main conversation flow and your output is exclusively for a subsequent AI model, not directly for the user.
**Core Task:** Your primary function is to meticulously analyze all preceding information and the user's current request. Based on this, you must construct a detailed 'Chain of Thought' to guide the final response.
**Chain of Thought Requirements:**
1. **Information Synthesis:** Consolidate and organize all relevant information from the context.
2. **Logical Decomposition:** Break down the user's request into logical steps.
3. **Deepened Analysis:** For each step, elaborate on the reasoning process.
4. **Broadened Perspective (Divergent Thinking):** Explore related concepts, domains, and potential implications to enrich the understanding and create cross-connections between information.
5. **Iterative Self-Correction:** Critically evaluate each step of your reasoning. Identify potential flaws, biases, or gaps in logic and explicitly outline corrective actions or alternative considerations. This self-correction should be an integral part of your thought process.
**Output:**
* You will ONLY output this 'Chain of Thought'.
* Do NOT generate any part of the direct user-facing reply. The final output AI will handle that.
* Emphasize the detailed logical derivations and the self-correction mechanisms within your thought process.
"
Processing new request...
Thinking-stage config: {
model: 'gemini-2.5-flash-preview-04-17',
temperature: 0.7,
messageCount: 2
}
Request processing error: {
message: 'Request failed with status code 400',
status: 400,
data: <ref *1> Unzip {
_writeState: Uint32Array(2) [ 0, 0 ],
_events: {
close: [Array],
error: [Array],
prefinish: [Function: prefinish],
finish: [Array],
drain: undefined,
data: undefined,
end: [Array],
readable: undefined,
unpipe: [Function: onunpipe]
},
_readableState: ReadableState {
highWaterMark: 65536,
buffer: [],
bufferIndex: 0,
length: 0,
pipes: [],
awaitDrainWriters: null,
[Symbol(kState)]: 1048844
},
_writableState: WritableState {
highWaterMark: 65536,
length: 245,
corked: 0,
onwrite: [Function: bound onwrite],
writelen: 1,
bufferedIndex: 0,
pendingcb: 14,
[Symbol(kState)]: 621592844,
[Symbol(kBufferedValue)]: [Array]
},
allowHalfOpen: true,
_maxListeners: undefined,
_eventsCount: 6,
bytesWritten: 0,
_handle: Zlib {
onerror: [Function: zlibOnError],
buffer: <Buffer 1f>,
cb: [Function (anonymous)],
availOutBefore: 16384,
availInBefore: 1,
inOff: 0,
flushFlag: 2,
[Symbol(owner_symbol)]: [Circular *1]
},
_outBuffer: <Buffer 70 62 c7 9f ff ff 00 00 70 62 c7 9f ff ff 00 00 30 68 a4 28 00 00 00 00 30 68 a4 28 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... 16334 more bytes>,
_outOffset: 0,
_chunkSize: 16384,
_defaultFlushFlag: 2,
_finishFlushFlag: 2,
_defaultFullFlushFlag: 3,
_info: undefined,
_maxOutputLength: 9007199254740991,
_level: -1,
_strategy: 0,
[Symbol(shapeMode)]: true,
[Symbol(kCapture)]: false,
[Symbol(kCallback)]: null,
[Symbol(kError)]: null
}
}