261 changes: 135 additions & 126 deletions packages/core/src/core/__snapshots__/prompts.test.ts.snap

Large diffs are not rendered by default.

29 changes: 15 additions & 14 deletions packages/core/src/core/prompts.ts
@@ -100,6 +100,7 @@ When requested to perform tasks like fixing bugs, adding features, refactoring,
- **File Paths:** Always use absolute paths when referring to files with tools like '${ReadFileTool.Name}' or '${WriteFileTool.Name}'. Relative paths are not supported. You must provide an absolute path.
- **Parallelism:** Execute multiple independent tool calls in parallel when feasible (e.g., searching the codebase).
- **Command Execution:** Use the '${ShellTool.Name}' tool for running shell commands, remembering the safety rule to explain modifying commands first.
- **Tool Calling (Critical):** Never emit pseudo tool-call markers (e.g. "tool_call: ...") in normal text. When you need to use a tool, call it using the tool-calling mechanism.
- **Background Processes:** Use background processes (via \`&\`) for commands that are unlikely to stop on their own, e.g. \`node server.js &\`. If unsure, ask the user.
- **Interactive Commands:** Try to avoid shell commands that are likely to require user interaction (e.g. \`git rebase -i\`). Use non-interactive versions of commands (e.g. \`npm init -y\` instead of \`npm init\`) when available, and otherwise remind the user that interactive shell commands are not supported and may cause hangs until canceled by the user.
- **Remembering Facts:** Use the '${MemoryTool.Name}' tool to remember specific, *user-related* facts or preferences when the user explicitly asks, or when they state a clear, concise piece of information that would help personalize or streamline *your future interactions with them* (e.g., preferred coding style, common project paths they use, personal tool aliases). This tool is for user-specific information that should persist across sessions. Do *not* use it for general project context or information that belongs in project-specific \`GEMINI.md\` files. If unsure whether to save something, you can ask the user, "Should I remember that for you?"
@@ -167,24 +168,24 @@ model: true

<example>
user: list files here.
model: [tool_call: ${LSTool.Name} for path '.']
model: I'll use the ${LSTool.Name} tool to list files in '.'.
</example>

<example>
user: start the server implemented in server.js
model: [tool_call: ${ShellTool.Name} for 'node server.js &' because it must run in the background]
model: I'll run \`node server.js &\` using the ${ShellTool.Name} tool (it must run in the background).
</example>

<example>
user: Refactor the auth logic in src/auth.py to use the requests library instead of urllib.
model: Okay, I can refactor 'src/auth.py'.
First, I'll analyze the code and check for a test safety net before planning any changes.
[tool_call: ${GlobTool.Name} for path 'tests/test_auth.py']
[tool_call: ${ReadFileTool.Name} for absolute_path '/path/to/tests/test_auth.py']
(I'll search for tests: ${GlobTool.Name} on 'tests/test_auth.py')
(I'll read the test file: ${ReadFileTool.Name} on '/path/to/tests/test_auth.py')
(After analysis)
Great, 'tests/test_auth.py' exists and covers the core authentication logic. With this safety net in place, I can safely plan the refactoring.
I'll also confirm 'requests' is a dependency.
[tool_call: ${ReadFileTool.Name} for absolute_path '/path/to/requirements.txt']
(I'll check dependencies: ${ReadFileTool.Name} on '/path/to/requirements.txt')
(After analysis)
Looks good, 'requests' is available.

@@ -197,9 +198,9 @@ Here's the plan:
Should I proceed?
user: Yes
model:
[tool_call: ${WriteFileTool.Name} or ${EditTool.Name} to apply the refactoring to 'src/auth.py']
(I'll apply the refactor using ${WriteFileTool.Name} or ${EditTool.Name} on 'src/auth.py')
Refactoring complete. Running verification...
[tool_call: ${ShellTool.Name} for 'ruff check src/auth.py && pytest']
(I'll run verification using ${ShellTool.Name}: \`ruff check src/auth.py && pytest\`)
(After verification passes)
All checks passed. This is a stable checkpoint.
${(function () {
@@ -219,31 +220,31 @@ model: I can run \`rm -rf ./temp\`. This will permanently delete the directory a
user: Write tests for someFile.ts
model:
Okay, I can write those tests. First, I'll read \`someFile.ts\` to understand its functionality.
[tool_call: ${ReadFileTool.Name} for absolute_path '/path/to/someFile.ts' or use ${GlobTool.Name} to find \`someFile.ts\` if its location is unknown]
(I'll read the file: ${ReadFileTool.Name} on '/path/to/someFile.ts', or use ${GlobTool.Name} if its location is unknown)
Now I'll look for existing or related test files to understand current testing conventions and dependencies.
[tool_call: ${ReadManyFilesTool.Name} for paths ['**/*.test.ts', 'src/**/*.spec.ts'] assuming someFile.ts is in the src directory]
(I'll read related tests: ${ReadManyFilesTool.Name} on ['**/*.test.ts', 'src/**/*.spec.ts'])
(After reviewing existing tests and the file content)
[tool_call: ${WriteFileTool.Name} to create /path/to/someFile.test.ts with the test code]
(I'll write the new test file: ${WriteFileTool.Name} on '/path/to/someFile.test.ts')
I've written the tests. Now I'll run the project's test command to verify them.
[tool_call: ${ShellTool.Name} for 'npm run test']
(I'll run tests using ${ShellTool.Name}: \`npm run test\`)
</example>

<example>
user: How do I update the user's profile information in this system?
model:
I'm not immediately sure how user profile information is updated. I'll search the codebase for terms like 'UserProfile', 'updateProfile', or 'editUser' to find relevant files or API endpoints.
[tool_call: ${GrepTool.Name} for pattern 'UserProfile|updateProfile|editUser']
(I'll search the codebase: ${GrepTool.Name} on 'UserProfile|updateProfile|editUser')
(After reviewing search results, assuming a relevant file like '/path/to/UserProfileService.java' was found)
Okay, \`/path/to/UserProfileService.java\` seems like the most relevant file. I'll read its content to understand how updates are handled.
[tool_call: ${ReadFileTool.Name} for absolute_path '/path/to/UserProfileService.java']
(I'll read the file: ${ReadFileTool.Name} on '/path/to/UserProfileService.java')
(After reading the file)
It appears the \`updateUserProfile\` method in \`UserProfileService.java\` is responsible for this. It expects a user ID and a \`UserProfileDTO\` object...
</example>

<example>
user: Where are all the 'app.config' files in this project? I need to check their settings.
model:
[tool_call: ${GlobTool.Name} for pattern '**/app.config']
(I'll find all matching files: ${GlobTool.Name} on '**/app.config')
(Assuming GlobTool returns a list of paths like ['/path/to/moduleA/app.config', '/path/to/moduleB/app.config'])
I found the following 'app.config' files:
- /path/to/moduleA/app.config
168 changes: 134 additions & 34 deletions packages/core/src/custom_llm/converter.ts
@@ -162,33 +162,104 @@ export class ModelConverter {
const choice = response.choices[0];
const res = new GenerateContentResponse();

if (choice.message.content) {
const message = (choice.message || {}) as any;

if (Array.isArray(message.tool_calls) && message.tool_calls.length > 0) {
res.candidates = [
{
content: {
parts: [{ text: choice.message.content }],
parts: message.tool_calls.map((toolCall: any) => {
let args: Record<string, unknown> = {};
const rawArgs = toolCall?.function?.arguments;
if (typeof rawArgs === 'string' && rawArgs.trim().length > 0) {
try {
args = JSON.parse(rawArgs);
} catch {
args = {};
}
}
const id =
typeof toolCall?.id === 'string' && toolCall.id.trim().length > 0
? toolCall.id
: `call_${Math.random().toString(36).slice(2)}`;
return {
functionCall: {
id,
name: toolCall.function.name,
args,
},
};
}),
role: 'model',
},
index: 0,
safetyRatings: [],
},
];
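The argument-parsing logic above can be factored into a standalone helper. This is a hedged sketch — the helper name `toFunctionCall` and the `ToolCallLike` shape are illustrative, not part of the PR — showing the two defensive moves: malformed or empty JSON argument strings fall back to `{}`, and a missing tool-call id gets a generated placeholder.

```typescript
// Illustrative shape for an OpenAI-style tool call (not the PR's exact types).
type ToolCallLike = {
  id?: unknown;
  function?: { name?: string; arguments?: unknown };
};

// Convert one tool call into a Gemini-style functionCall payload,
// tolerating malformed JSON arguments and a missing id.
function toFunctionCall(toolCall: ToolCallLike): {
  id: string;
  name: string;
  args: Record<string, unknown>;
} {
  let args: Record<string, unknown> = {};
  const rawArgs = toolCall?.function?.arguments;
  if (typeof rawArgs === 'string' && rawArgs.trim().length > 0) {
    try {
      args = JSON.parse(rawArgs);
    } catch {
      args = {}; // Malformed JSON from the model is tolerated, not fatal.
    }
  }
  const id =
    typeof toolCall?.id === 'string' && toolCall.id.trim().length > 0
      ? toolCall.id
      : `call_${Math.random().toString(36).slice(2)}`;
  return { id, name: toolCall?.function?.name ?? '', args };
}
```

The fallback id matters because some OpenAI-compatible servers omit `id`, and downstream tool-result matching needs a stable identifier.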
} else if (choice.message.tool_calls) {
res.candidates = [
{
content: {
parts: choice.message.tool_calls.map((toolCall) => ({
functionCall: {
name: toolCall.function.name,
args: JSON.parse(toolCall.function.arguments),
},
})),
role: 'model',
} else {
const content = message.content;
const refusal = message.refusal;
const reasoningContent = message.reasoning_content;

let text: string | undefined;
if (typeof content === 'string' && content.trim().length > 0) {
text = content;
} else if (Array.isArray(content)) {
const segments = content
.map((part: any) => {
if (typeof part === 'string') {
return part;
}
if (typeof part === 'object' && part !== null && 'text' in part) {
return typeof part.text === 'string' ? part.text : '';
}
return '';
})
.filter(Boolean);
if (segments.length > 0) {
text = segments.join('');
}
} else if (typeof content === 'object' && content !== null) {
if (
'text' in content &&
typeof content.text === 'string' &&
content.text.trim().length > 0
) {
text = content.text;
} else {
try {
text = JSON.stringify(content);
} catch {
text = String(content);
}
}
}

if (typeof text !== 'string' || text.trim().length === 0) {
if (typeof refusal === 'string' && refusal.trim().length > 0) {
text = refusal;
} else if (
typeof reasoningContent === 'string' &&
reasoningContent.trim().length > 0
) {
text = reasoningContent;
} else if (typeof content === 'string') {
text = content;
}
}

if (typeof text === 'string') {
res.candidates = [
{
content: {
parts: [{ text }],
role: 'model',
},
index: 0,
safetyRatings: [],
},
index: 0,
safetyRatings: [],
},
];
];
}
}
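The content normalization above handles the shapes OpenAI-compatible servers return for `message.content`: a plain string, an array of text parts, or an object. A minimal standalone sketch of that logic (the function name `extractText` is hypothetical, and the whitespace-only fallback in the real converter is simplified away here):

```typescript
// Collapse the string / part-array / object shapes of message.content
// into a single plain-text string, or undefined if nothing usable exists.
function extractText(content: unknown): string | undefined {
  if (typeof content === 'string' && content.trim().length > 0) {
    return content;
  }
  if (Array.isArray(content)) {
    const segments = content
      .map((part) => {
        if (typeof part === 'string') return part;
        if (part && typeof part === 'object' && 'text' in part) {
          const text = (part as { text?: unknown }).text;
          return typeof text === 'string' ? text : '';
        }
        return '';
      })
      .filter(Boolean);
    return segments.length > 0 ? segments.join('') : undefined;
  }
  if (content && typeof content === 'object') {
    const text = (content as { text?: unknown }).text;
    if (typeof text === 'string' && text.trim().length > 0) return text;
    try {
      // Last resort: serialize unknown object shapes rather than drop them.
      return JSON.stringify(content);
    } catch {
      return String(content);
    }
  }
  return undefined;
}
```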
res.usageMetadata = {
promptTokenCount: response.usage?.prompt_tokens || 0,
@@ -228,13 +299,24 @@ export class ModelConverter {
{
content: {
parts: Array.from(toolCallMap.entries()).map(
([_index, toolCall]) => ({
functionCall: {
id: `call_${Math.random().toString(36).slice(2)}`,
name: toolCall.name,
args: toolCall.arguments ? JSON.parse(toolCall.arguments) : {},
},
}),
([_index, toolCall]) => {
let args: Record<string, unknown> = {};
if (toolCall.arguments && toolCall.arguments.trim().length > 0) {
try {
args = JSON.parse(toolCall.arguments);
} catch {
args = {};
}
}
return {
functionCall: {
id:
toolCall.id || `call_${Math.random().toString(36).slice(2)}`,
name: toolCall.name,
args,
},
};
},
),
role: 'model',
},
@@ -297,10 +379,15 @@ export class ModelConverter {
): void {
const idx = toolCall.index;
const current = toolCallMap.get(idx) || {
id: '',
name: '',
arguments: '',
};

if (toolCall.id) {
current.id = toolCall.id;
}

if (toolCall.function?.name) {
current.name = toolCall.function.name;
}
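The accumulation step above can be sketched in isolation. Streamed tool calls arrive as fragments keyed by `index` — the id and name usually in the first chunk, the JSON `arguments` string split across several — so each delta is merged into the running entry. Types are simplified from the source; `mergeToolCallDelta` is an illustrative name.

```typescript
// Accumulated state for one in-flight tool call (mirrors ToolCallData).
interface ToolCallData {
  id: string;
  name: string;
  arguments: string;
}

// Simplified shape of a streamed tool-call fragment.
type ToolCallDelta = {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
};

// Merge one fragment into the map: id and name overwrite when present,
// while argument text is concatenated across chunks.
function mergeToolCallDelta(
  map: Map<number, ToolCallData>,
  delta: ToolCallDelta,
): void {
  const current = map.get(delta.index) ?? { id: '', name: '', arguments: '' };
  if (delta.id) current.id = delta.id;
  if (delta.function?.name) current.name = delta.function.name;
  if (delta.function?.arguments) current.arguments += delta.function.arguments;
  map.set(delta.index, current);
}
```

Only once `finish_reason === 'tool_calls'` arrives is the accumulated `arguments` string complete enough to `JSON.parse`.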
@@ -319,14 +406,8 @@
chunk: OpenAI.Chat.Completions.ChatCompletionChunk,
toolCallMap: ToolCallMap,
): { response: GenerateContentResponse | null; shouldReturn: boolean } {
if (chunk.usage && chunk.usage.total_tokens) {
return {
response: this.toGeminiStreamUsageResponse(chunk.usage),
shouldReturn: true,
};
}

const choice = chunk.choices[0];
const usage = chunk.usage;
const choice = chunk.choices?.[0];

if (choice?.delta?.content) {
return {
@@ -341,18 +422,37 @@
}
}

if (choice.finish_reason === 'tool_calls' && toolCallMap.size > 0) {
if (choice?.finish_reason === 'tool_calls' && toolCallMap.size > 0) {
const response = this.toGeminiStreamToolCallsResponse(toolCallMap);
toolCallMap.clear();
if (usage?.total_tokens) {
response.usageMetadata = {
promptTokenCount: usage.prompt_tokens || 0,
candidatesTokenCount: usage.completion_tokens || 0,
totalTokenCount: usage.total_tokens || 0,
};
}
return {
response,
shouldReturn: false,
};
}

if (choice?.finish_reason) {
const response = this.toGeminiStreamEndResponse();
if (usage?.total_tokens) {
response.usageMetadata = {
promptTokenCount: usage.prompt_tokens || 0,
candidatesTokenCount: usage.completion_tokens || 0,
totalTokenCount: usage.total_tokens || 0,
};
}
return { response, shouldReturn: true };
}

if (usage?.total_tokens) {
return {
response: this.toGeminiStreamEndResponse(),
response: this.toGeminiStreamUsageResponse(usage),
shouldReturn: true,
};
}
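The dispatch order above matters: a `tool_calls` finish with pending accumulated calls is handled first, any other `finish_reason` ends the stream, and a trailing usage-only chunk (no choices at all, as some servers send last) still produces a final usage response. A minimal sketch of that classification — `classifyChunk` is illustrative, not a function in the PR:

```typescript
type ChunkKind = 'tool_calls' | 'end' | 'usage' | 'none';

// Decide how a streamed chunk should be handled, in the same priority
// order as processStreamChunk: tool-call completion, then any other
// finish, then a bare usage chunk, else nothing actionable.
function classifyChunk(
  chunk: {
    choices?: Array<{ finish_reason?: string | null }>;
    usage?: { total_tokens?: number } | null;
  },
  pendingToolCalls: number,
): ChunkKind {
  const choice = chunk.choices?.[0];
  if (choice?.finish_reason === 'tool_calls' && pendingToolCalls > 0) {
    return 'tool_calls';
  }
  if (choice?.finish_reason) return 'end';
  if (chunk.usage?.total_tokens) return 'usage';
  return 'none';
}
```

Checking usage only after `finish_reason` is what lets servers that attach token counts to the final content chunk (rather than a separate chunk) still have their usage recorded.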
33 changes: 33 additions & 0 deletions packages/core/src/custom_llm/index.ts
@@ -56,6 +56,12 @@ export class CustomLLMContentGenerator implements ContentGenerator {
): Promise<AsyncGenerator<GenerateContentResponse>> {
const messages = ModelConverter.toOpenAIMessages(request);
const tools = extractToolFunctions(request.config) || [];
if (process.env.EASY_LLM_CLI_DEBUG_TOOL_CALLS) {
console.error(
'[debug] openai tools:',
JSON.stringify(tools.map((t) => t.function?.name)),
);
}
const stream = await this.model.chat.completions.create({
messages,
stream: true,
@@ -66,6 +72,31 @@ export class CustomLLMContentGenerator implements ContentGenerator {
const map: ToolCallMap = new Map();
return (async function* (): AsyncGenerator<GenerateContentResponse> {
for await (const chunk of stream) {
if (process.env.EASY_LLM_CLI_DEBUG_TOOL_CALLS) {
const choice = chunk?.choices?.[0];
const delta = choice?.delta as any;
if (delta?.tool_calls) {
console.error(
'[debug] stream tool_calls:',
JSON.stringify(delta.tool_calls),
);
}
if (delta?.content) {
console.error('[debug] stream content:', JSON.stringify(delta.content));
}
if (delta?.reasoning_content) {
console.error(
'[debug] stream reasoning_content:',
JSON.stringify(delta.reasoning_content),
);
}
if (choice?.finish_reason) {
console.error(
'[debug] stream finish_reason:',
JSON.stringify(choice.finish_reason),
);
}
}
const { response } = ModelConverter.processStreamChunk(chunk, map);
if (response) {
yield response;
@@ -87,9 +118,11 @@
request: GenerateContentParameters,
): Promise<GenerateContentResponse> {
const messages = ModelConverter.toOpenAIMessages(request);
const wantsJson = request.config?.responseMimeType === 'application/json';
const completion = await this.model.chat.completions.create({
messages,
stream: false,
...(wantsJson ? { response_format: { type: 'json_object' } } : {}),
...this.config,
});

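The JSON-mode wiring in `generateContent` above can be sketched as a small option builder — `buildCompletionOptions` is a hypothetical name; the PR inlines this spread. When the Gemini-style request declares `responseMimeType: 'application/json'`, the OpenAI-compatible request gains `response_format: { type: 'json_object' }`; otherwise no extra field is sent.

```typescript
// Build OpenAI chat-completion options, opting into JSON mode only when
// the caller asked for an application/json response.
function buildCompletionOptions(
  responseMimeType: string | undefined,
  base: Record<string, unknown>,
): Record<string, unknown> {
  const wantsJson = responseMimeType === 'application/json';
  return {
    ...base,
    stream: false,
    // Conditional spread keeps the request unchanged for non-JSON calls.
    ...(wantsJson ? { response_format: { type: 'json_object' } } : {}),
  };
}
```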
1 change: 1 addition & 0 deletions packages/core/src/custom_llm/types.ts
@@ -21,6 +21,7 @@ export interface CustomLLMContentGeneratorConfig {
* Tool call data structure for streaming
*/
export interface ToolCallData {
id: string;
name: string;
arguments: string;
}