
WIP: Add proxyHandler custom httpendpoint and a config option to use it per model #77

Draft · wants to merge 2 commits into main

Conversation

justyns (Owner) commented Jan 23, 2025

Summary by CodeRabbit

  • New Features

    • Added experimental support for proxying all LLM calls through the silverbullet server.
    • Enhanced model configuration with more flexible naming and routing options.
  • Improvements

    • Standardized URL handling across different AI providers.
    • Improved configuration flexibility for AI and embedding models.
  • Technical Updates

    • Updated constructor signatures for AI providers.
    • Introduced new proxyHandler for managing model requests.
    • Added optional proxyOnServer property to model configurations.

coderabbitai bot (Contributor) commented Jan 23, 2025

📝 Walkthrough

This pull request introduces comprehensive changes to the AI model provider infrastructure, focusing on enhancing configuration flexibility and standardizing URL handling. The modifications span multiple files, including init.ts, provider interfaces, and specific provider implementations. Key changes include renaming the name property to fullName, introducing a new proxyOnServer option, and updating constructor signatures across different provider classes. A new proxyHandler function is also added to manage requests for AI models through a proxy mechanism.

Changes

| File | Change Summary |
| --- | --- |
| src/init.ts | Updated setupAIProvider, setupEmbeddingProvider, and configureSelectedModel functions to include model.name and model.proxyOnServer parameters. |
| src/interfaces/EmbeddingProvider.ts | Replaced name with fullName, added proxyOnServer property, modified constructor. |
| src/interfaces/ImageProvider.ts | Renamed name property to fullName in AbstractImageProvider. |
| src/interfaces/Provider.ts | Renamed name to fullName, added proxyOnServer property, updated constructor. |
| src/providers/gemini.ts | Updated constructors and methods to use new getUrl and include modelConfigName. |
| src/providers/ollama.ts | Modified constructors to include modelConfigName and proxyOnServer. |
| src/providers/openai.ts | Updated constructors with modelConfigName and proxyOnServer, standardized URL construction. |
| src/proxyHandler.ts | Added new function to handle proxy requests for AI models. |
| src/types.ts | Added optional proxyOnServer property to ModelConfig, ImageModelConfig, and EmbeddingModelConfig. |
| docs/Changelog.md | Added entry for "experimental support for proxying all LLM calls through the silverbullet server". |
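Taken together, these changes imply a per-model configuration shape along the following lines. This is only a sketch based on the fields referenced in the call sites above (name, modelName, baseUrl, requireAuth, proxyOnServer); the real types in src/types.ts may differ.

```typescript
// Sketch of a model config with the new proxyOnServer flag.
// Field names other than proxyOnServer are inferred from init.ts usage.
interface ModelConfig {
  name: string;            // config name; surfaced as the provider's fullName
  modelName: string;       // underlying model identifier, e.g. "gpt-4o"
  baseUrl?: string;        // provider API base URL
  requireAuth?: boolean;   // whether an API key is required
  proxyOnServer?: boolean; // new: route this model's calls via the server
}

const example: ModelConfig = {
  name: "my-openai",
  modelName: "gpt-4o",
  baseUrl: "https://api.openai.com/v1",
  proxyOnServer: true,
};
```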

Sequence Diagram

sequenceDiagram
    participant User
    participant ProxyHandler
    participant AIProvider
    participant ExternalAPI

    User->>ProxyHandler: Send AI Model Request
    ProxyHandler->>ProxyHandler: Validate Model Configuration
    ProxyHandler->>AIProvider: Prepare Request
    AIProvider->>ExternalAPI: Forward Request
    ExternalAPI-->>AIProvider: Return Response
    AIProvider-->>ProxyHandler: Relay Response
    ProxyHandler-->>User: Return Result
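The routing decision behind this flow can be sketched as below, assuming the `/_/ai-proxy/<fullName>/<path>` path scheme used by this PR's getUrl helper. This is an illustrative standalone function, not the actual class method.

```typescript
// Minimal sketch of proxy-vs-direct URL routing for a model request.
function getUrl(
  fullName: string,
  baseUrl: string,
  proxyOnServer: boolean,
  path: string,
): string {
  path = path.replace(/^\/+/, ""); // normalize: strip leading slashes
  return proxyOnServer
    ? `/_/ai-proxy/${fullName}/${path}` // server-side proxy endpoint
    : `${baseUrl}/${path}`;             // direct call to the provider
}
```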

Suggested labels

documentation


coderabbitai bot (Contributor) left a comment
Actionable comments posted: 8

🔭 Outside diff range comments (2)
src/interfaces/ImageProvider.ts (2)

Line range hint 5-16: Inconsistent property naming between interface and class

The ImageProviderInterface still uses name: string; while AbstractImageProvider has updated the property to fullName: string;. This inconsistency can lead to type mismatches and confusion. Please update the interface to match the class.

Apply this diff to update the interface:

 export interface ImageProviderInterface {
-  name: string;
+  fullName: string;

Line range hint 22-30: Update constructor parameter to reflect property name

The constructor parameter name should be renamed to fullName to match the property it initializes, enhancing code clarity and consistency.

Apply this diff to rename the parameter:

   constructor(
     apiKey: string,
     baseUrl: string,
-    name: string,
+    fullName: string,
     modelName: string,
     requireAuth: boolean = true,
   ) {
-    this.fullName = name;
+    this.fullName = fullName;
🧹 Nitpick comments (5)
src/proxyHandler.ts (1)

44-45: Address the TODO regarding model configuration

There's a TODO comment indicating uncertainty about whether configureSelectedModel(modelConfig) overwrites the user's selected model. Please investigate and resolve this to ensure that the existing model configuration is not unintentionally modified.

Would you like me to help address this issue or open a new issue to track this task?

src/providers/gemini.ts (1)

96-96: Improve URL template formatting for readability.

The URL templates are long and could benefit from better formatting.

Apply this diff to improve readability:

-      const sseUrl = this.getUrl(`v1beta/models/${this.modelName}:streamGenerateContent?key=${this.apiKey}&alt=sse`);
+      const sseUrl = this.getUrl(
+        `v1beta/models/${this.modelName}:streamGenerateContent?key=${this.apiKey}&alt=sse`
+      );

-      this.getUrl(`v1beta/models/${this.modelName}:generateContent?key=${this.apiKey}`),
+      this.getUrl(
+        `v1beta/models/${this.modelName}:generateContent?key=${this.apiKey}`
+      ),

-      this.getUrl(`v1beta/models/${this.modelName}:embedContent?key=${this.apiKey}`),
+      this.getUrl(
+        `v1beta/models/${this.modelName}:embedContent?key=${this.apiKey}`
+      ),

Also applies to: 168-168, 216-216

🧰 Tools
🪛 GitHub Actions: Deno build test and push

[error] 96-222: Code formatting issues: improper string template formatting and line breaks

src/init.ts (3)

178-183: Maintain consistent parameter ordering across providers.

While the changes are functionally correct, consider maintaining the same parameter order as OpenAIProvider for better maintainability.

Apply this diff to align parameter ordering:

       currentAIProvider = new GeminiProvider(
         model.name,
         apiKey,
         model.modelName,
+        undefined,  // baseUrl
+        undefined,  // requireAuth
         model.proxyOnServer || false,
       );

187-192: Move Ollama's default baseUrl to settings.

For consistency with other providers, consider moving the default baseUrl to aiSettings.

Apply these changes:

  1. Add ollamaBaseUrl to the default settings in loadAndMergeSettings.
  2. Update the provider setup:
       currentAIProvider = new OllamaProvider(
         model.name,
         apiKey,
         model.modelName,
-        model.baseUrl || "http://localhost:11434/v1",
+        model.baseUrl || aiSettings.ollamaBaseUrl,
         model.requireAuth,
         model.proxyOnServer || false,
       );

Line range hint 231-236: Move Ollama embedding provider's default baseUrl to settings.

For consistency, move the default baseUrl to aiSettings.

Apply these changes:

  1. Use the same setting as the Ollama provider:
         model.name,
         apiKey,
         model.modelName,
-        model.baseUrl || "http://localhost:11434",
+        model.baseUrl || aiSettings.ollamaBaseUrl,
         model.requireAuth,
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b4256bd and 1e471f3.

⛔ Files ignored due to path filters (2)
  • silverbullet-ai.plug.js is excluded by !silverbullet-ai.plug.js and included by none
  • silverbullet-ai.plug.yaml is excluded by none and included by none
📒 Files selected for processing (9)
  • src/init.ts (2 hunks)
  • src/interfaces/EmbeddingProvider.ts (2 hunks)
  • src/interfaces/ImageProvider.ts (2 hunks)
  • src/interfaces/Provider.ts (2 hunks)
  • src/providers/gemini.ts (4 hunks)
  • src/providers/ollama.ts (4 hunks)
  • src/providers/openai.ts (6 hunks)
  • src/proxyHandler.ts (1 hunks)
  • src/types.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
src/interfaces/ImageProvider.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/types.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/interfaces/EmbeddingProvider.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/proxyHandler.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/providers/openai.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/providers/gemini.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/init.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/providers/ollama.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/interfaces/Provider.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

🪛 GitHub Actions: Deno build test and push
src/interfaces/EmbeddingProvider.ts

[error] 33-49: Code formatting issues: inconsistent quote usage and missing commas

src/proxyHandler.ts

[error] 11-169: Code formatting issues: inconsistent function parameter formatting, quote usage, and line breaks

src/providers/openai.ts

[error] 61-235: Code formatting issues: inconsistent quote usage in API endpoint strings

src/providers/gemini.ts

[error] 96-222: Code formatting issues: improper string template formatting and line breaks

src/providers/ollama.ts

[error] 61-116: Code formatting issues: inconsistent quote usage (single quotes vs double quotes)

src/interfaces/Provider.ts

[error] 52-58: Code formatting issues: inconsistent quote usage and improper line breaks in console.log statements

🔇 Additional comments (9)
src/proxyHandler.ts (1)

11-169: Fix code formatting inconsistencies

The code between lines 11 and 169 has formatting issues as indicated by the pipeline failures, such as inconsistent function parameter formatting, quote usage, and line breaks. Please run the formatter to resolve these issues.

🧰 Tools
🪛 GitHub Actions: Deno build test and push

[error] 11-169: Code formatting issues: inconsistent function parameter formatting, quote usage, and line breaks

src/interfaces/EmbeddingProvider.ts (1)

33-49: Fix code formatting inconsistencies

There are formatting issues between lines 33 and 49, such as inconsistent quote usage and missing commas, as indicated by the pipeline failures. Please run the formatter to resolve these issues.

🧰 Tools
🪛 GitHub Actions: Deno build test and push

[error] 33-49: Code formatting issues: inconsistent quote usage and missing commas

src/types.ts (1)

132-132: LGTM! Consistent implementation of proxy configuration.

The proxyOnServer optional boolean property is consistently added across all model configuration types, following TypeScript best practices.

Also applies to: 143-143, 154-154

src/interfaces/Provider.ts (1)

12-12: Verify the impact of renaming name to fullName.

This is a breaking change that requires verification across the codebase to ensure all references are updated.

Also applies to: 30-30

✅ Verification successful

Rename from name to fullName is safe to proceed

The property rename in the Provider interface is isolated and doesn't impact other parts of the codebase. While there are other name properties in different interfaces and types, they are independent of the Provider interface implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any remaining references to the old property name
rg -A 2 'provider\.name' 

Length of output: 26


Script:

#!/bin/bash
# Search for Provider interface usage and name property
echo "=== Searching for Provider interface usage ==="
rg -p "implements Provider"

echo -e "\n=== Searching for name property in destructuring ==="
rg -p "const.*{.*name.*}.*:.*Provider"

echo -e "\n=== Searching for name property in any context ==="
ast-grep --pattern 'name: string'

echo -e "\n=== Checking Provider implementations ==="
ast-grep --pattern 'class $_ implements Provider'

Length of output: 819


Script:

#!/bin/bash
# Search for various property access patterns
echo "=== Searching for destructured name property ==="
rg -p "[{,]\s*name\s*[,}]"

echo -e "\n=== Searching for direct name property access ==="
rg -p "\.name\b"

echo -e "\n=== Checking Provider implementation details ==="
ast-grep --pattern 'class $_ implements Provider {
  $$$
}'

echo -e "\n=== Searching for name in method parameters ==="
rg -p "name:\s*string"

Length of output: 4194

src/providers/ollama.ts (1)

18-34: LGTM! Constructor changes properly handle new parameters.

The constructor correctly initializes the proxy configuration and passes parameters to the superclass.

src/providers/gemini.ts (1)

25-31: LGTM! Constructor properly initializes the provider.

The constructor correctly handles the new parameters and maintains the fixed base URL.

src/providers/openai.ts (2)

31-40: LGTM! Constructor changes align with proxy support requirements.

The updated constructor signature and initialization properly handle the new proxy configuration.


208-214: LGTM! Constructor changes maintain consistency across providers.

The updated constructor signature aligns with the standardization of provider configuration.

src/init.ts (1)

169-174: LGTM! Proper integration of proxy configuration.

The setup correctly handles the new proxy configuration with appropriate default value.

Comment on lines 117 to 119
console.log("Error from LLM API:", errorText);
return new Response(`Failed to fetch from LLM API: ${errorText}`, { status: response.status });
}
⚠️ Potential issue

Avoid exposing internal error details to clients

Returning raw error messages from the LLM API directly to the client may expose sensitive information. Consider returning a generic error message instead.

Apply this diff to replace the error messages with a generic one:

-return new Response(`Failed to fetch from LLM API: ${errorText}`, { status: response.status });
+return new Response('An error occurred while processing your request.', { status: response.status });

And

-body: `Failed to fetch from LLM API: ${errorText}`,
+body: 'An error occurred while processing your request.',

Also applies to: 146-153

Comment on lines 140 to 144
const response = await nativeFetch(apiUrl, {
method: request.method,
headers: requestHeaders,
body: request.method !== "GET" && request.method !== "HEAD" && requestBody ? JSON.stringify(requestBody) : undefined,
});
⚠️ Potential issue

'nativeFetch' is undefined

The function nativeFetch used in the fetch call is not defined or imported, which will cause a runtime error.

Apply this diff to replace nativeFetch with the standard fetch function:

-        const response = await nativeFetch(apiUrl, {
+        const response = await fetch(apiUrl, {

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +103 to +108
const response = await nativeFetch(apiUrl, {
method: request.method,
headers: requestHeaders,
body: JSON.stringify(requestBody),
});

⚠️ Potential issue

'nativeFetch' is undefined

The function nativeFetch used in the fetch call is not defined or imported, which will cause a runtime error.

Apply this diff to replace nativeFetch with the standard fetch function:

-          const response = await nativeFetch(apiUrl, {
+          const response = await fetch(apiUrl, {

Committable suggestion skipped: line range outside the PR's diff.

@@ -2,7 +2,7 @@ import { EmbeddingGenerationOptions } from "../types.ts";
import * as cache from "../cache.ts";

export interface EmbeddingProviderInterface {
name: string;
fullName: string;
⚠️ Potential issue

Ensure consistency in property naming between interface and class

Both the EmbeddingProviderInterface and AbstractEmbeddingProvider should consistently use fullName as the property name to avoid confusion and potential errors.

Apply this diff to update the interface:

 export interface EmbeddingProviderInterface {
-  name: string;
+  fullName: string;

Ensure that the property is correctly defined in both the interface and the class.

Also applies to: 22-22

Comment on lines 28 to 41
modelConfigName: string,
apiKey: string,
baseUrl: string,
name: string,
modelName: string,
requireAuth: boolean = true,
proxyOnServer?: boolean
) {
this.apiKey = apiKey;
this.baseUrl = baseUrl;
this.name = name;
this.fullName = modelConfigName;
this.modelName = modelName;
this.requireAuth = requireAuth;
this.proxyOnServer = proxyOnServer;
}
🛠️ Refactor suggestion

Inconsistent naming of constructor parameter

The constructor parameter modelConfigName is assigned to this.fullName. For consistency and clarity, consider renaming the parameter to fullName to match the class property.

Apply this diff to rename the parameter:

   constructor(
-    modelConfigName: string,
+    fullName: string,
     apiKey: string,
     baseUrl: string,
     modelName: string,
     requireAuth: boolean = true,
     proxyOnServer?: boolean
   ) {
-    this.fullName = modelConfigName;
+    this.fullName = fullName;

Comment on lines 50 to 59
protected getUrl(path: string): string {
// Remove leading slashes from the path
path = path.replace(/^\/+/, '');

if (this.proxyOnServer) {
console.log("Proxy on server, using proxy URL:", `/_/ai-proxy/${this.fullName}/${path}`);
return `/_/ai-proxy/${this.fullName}/${path}`;
} else {
return `${this.baseUrl}/${path}`;
}
⚠️ Potential issue

Fix string formatting inconsistencies.

The URL construction uses single quotes instead of double quotes, which is inconsistent with the codebase style.

Apply this diff to fix the formatting:

-    path = path.replace(/^\/+/, '');
+    path = path.replace(/^\/+/, "");

     if (this.proxyOnServer) {
-      console.log("Proxy on server, using proxy URL:", `/_/ai-proxy/${this.fullName}/${path}`);
-      return `/_/ai-proxy/${this.fullName}/${path}`;
+      const proxyUrl = `/_/ai-proxy/${this.fullName}/${path}`;
+      console.log("Proxy on server, using proxy URL:", proxyUrl);
+      return proxyUrl;
     } else {
      return `${this.baseUrl}/${path}`;
     }
🧰 Tools
🪛 GitHub Actions: Deno build test and push

[error] 52-58: Code formatting issues: inconsistent quote usage and improper line breaks in console.log statements

@@ -54,7 +58,7 @@ export class OllamaProvider extends AbstractProvider {

// List models api isn't behind /v1/ like the other endpoints, but we don't want to force the user to change the config yet
const response = await nativeFetch(
`${this.baseUrl.replace(/\/v1\/?/, "")}/api/tags`,
this.getUrl('api/tags').replace(/\/v1\/?/, ''),
⚠️ Potential issue

Fix string formatting inconsistencies.

The URL construction uses single quotes instead of double quotes.

Apply this diff to fix the formatting:

-        this.getUrl('api/tags').replace(/\/v1\/?/, ''),
+        this.getUrl("api/tags").replace(/\/v1\/?/, ""),

-      this.getUrl('api/embeddings'),
+      this.getUrl("api/embeddings"),

Also applies to: 116-116

🧰 Tools
🪛 GitHub Actions: Deno build test and push

[error] 61-116: Code formatting issues: inconsistent quote usage (single quotes vs double quotes)

@@ -56,7 +58,7 @@ export class OpenAIProvider extends AbstractProvider {
const { messages, onDataReceived, onResponseComplete } = options;

try {
const sseUrl = `${this.baseUrl}/chat/completions`;
const sseUrl = this.getUrl('chat/completions');
⚠️ Potential issue

Fix inconsistent quote usage in API endpoint strings.

The code has inconsistent quote usage in API endpoint strings, which is causing pipeline failures.

Apply this diff to standardize quote usage:

-      const sseUrl = this.getUrl('chat/completions');
+      const sseUrl = this.getUrl("chat/completions");

-        this.getUrl('models'),
+        this.getUrl("models"),

-        this.getUrl('chat/completions'),
+        this.getUrl("chat/completions"),

-      this.getUrl('embeddings'),
+      this.getUrl("embeddings"),

Also applies to: 138-138, 176-176, 235-235

🧰 Tools
🪛 GitHub Actions: Deno build test and push

[error] 61-235: Code formatting issues: inconsistent quote usage in API endpoint strings

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

♻️ Duplicate comments (2)
src/proxyHandler.ts (2)

127-132: ⚠️ Potential issue

Fix undefined nativeFetch usage.

The nativeFetch function is not defined. Use the standard fetch function instead.


141-143: ⚠️ Potential issue

Avoid exposing internal error details.

Return a generic error message instead of exposing the LLM API error text.

🧹 Nitpick comments (8)
src/providers/openai.ts (1)

31-40: Remove redundant proxyOnServer assignment and consider reordering parameters.

  1. The proxyOnServer property is redundantly set both in the super constructor and as a class property. Remove line 40 since the super constructor already handles this.
  2. Consider grouping related parameters together and placing optional parameters last.
  constructor(
    modelConfigName: string,
    apiKey: string,
    modelName: string,
    baseUrl: string,
-   requireAuth: boolean,
-   proxyOnServer: boolean,
+   proxyOnServer: boolean = false,
+   requireAuth: boolean = true,
  ) {
    super(modelConfigName, apiKey, baseUrl, modelName, proxyOnServer);
    this.requireAuth = requireAuth;
-   this.proxyOnServer = proxyOnServer;
  }
src/providers/ollama.ts (1)

25-27: Remove redundant property assignment.

The proxyOnServer property is already set by the super constructor call, making the direct assignment redundant.

Apply this diff to remove the redundant assignment:

    super(modelConfigName, apiKey, baseUrl, modelName, proxyOnServer);
    this.requireAuth = requireAuth;
-   this.proxyOnServer = proxyOnServer;
src/providers/gemini.ts (3)

25-31: Consider making baseUrl configurable.

While the constructor changes look good, consider making the baseUrl configurable through the constructor parameters instead of hardcoding it. This would provide more flexibility for testing or using different API endpoints.

 constructor(
   modelConfigName: string,
   apiKey: string,
   modelName: string,
   proxyOnServer: boolean,
+  baseUrl: string = "https://generativelanguage.googleapis.com",
 ) {
-  const baseUrl = "https://generativelanguage.googleapis.com";
   super(modelConfigName, apiKey, baseUrl, modelName, proxyOnServer);
 }

Line range hint 35-45: Add type safety to listModels response.

While the URL handling change is good, consider adding type safety to the response data to prevent runtime errors.

+interface GeminiModel {
+  name: string;
+  version: string;
+  displayName?: string;
+  description?: string;
+}
-async listModels(): Promise<any> {
+async listModels(): Promise<GeminiModel[]> {
   const apiUrl = this.getUrl(`v1beta/models?key=${this.apiKey}`);
   try {
     const response = await fetch(apiUrl);
     if (!response.ok) {
       throw new Error(`HTTP error! status: ${response.status}`);
     }
     const data = await response.json();
-    return data.models || [];
+    return (data.models || []) as GeminiModel[];
   } catch (error) {

220-222: Add type safety to embedding response.

While the URL handling change and error logging are good, consider adding type safety to the response data.

+interface GeminiEmbeddingResponse {
+  embedding: {
+    values: number[];
+  };
+}

 async _generateEmbeddings(options: { text: string }): Promise<Array<number>> {
   // ... existing code ...
   const response = await nativeFetch(
     this.getUrl(`v1beta/models/${this.modelName}:embedContent?key=${this.apiKey}`),
     {
       method: "POST",
       headers: headers,
       body: body,
     },
   );

   if (!response.ok) {
     console.error("HTTP response: ", response);
     console.error("HTTP response body: ", await response.json());
     throw new Error(`HTTP error, status: ${response.status}`);
   }

-  const data = await response.json();
+  const data = (await response.json()) as GeminiEmbeddingResponse;
   if (!data || !data.embedding || !data.embedding.values) {
     throw new Error("Invalid response from Gemini.");
   }

   return data.embedding.values;
 }
src/interfaces/Provider.ts (1)

50-62: Enhance URL construction robustness.

Consider these improvements:

  1. Extract string literals as constants
  2. Use URL class for safer URL construction

Apply this diff:

+const PROXY_URL_PREFIX = "/_/ai-proxy";
+
 protected getUrl(path: string): string {
   // Remove leading slashes from the path
   path = path.replace(/^\/+/, "");

   if (this.proxyOnServer) {
-    return `/_/ai-proxy/${this.fullName}/${path}`;
+    const proxyUrl = new URL(`${PROXY_URL_PREFIX}/${this.fullName}/${path}`, window.location.origin);
+    return proxyUrl.toString();
   } else {
-    return `${this.baseUrl}/${path}`;
+    const apiUrl = new URL(path, this.baseUrl);
+    return apiUrl.toString();
   }
 }
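One caveat worth noting when applying the diff above: `new URL(path, base)` performs relative resolution, which drops the last path segment of a base URL that lacks a trailing slash — `new URL("chat/completions", "https://host/v1")` resolves to `https://host/chat/completions`, silently losing the `/v1`. A small standalone sketch that guards against this (the function name is illustrative, not part of the PR):

```typescript
// Sketch: join an API base URL and an endpoint path via the URL class.
// A trailing slash is forced onto the base so relative resolution keeps
// its final path segment (e.g. the "/v1" in an OpenAI-style base URL).
function joinUrl(base: string, path: string): string {
  const normalizedBase = base.endsWith("/") ? base : base + "/";
  return new URL(path.replace(/^\/+/, ""), normalizedBase).toString();
}

console.log(joinUrl("https://api.example.com/v1", "chat/completions"));
// → https://api.example.com/v1/chat/completions
```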
src/proxyHandler.ts (1)

81-94: Improve headers handling with constants and types.

Define constants for header names and use a type for excluded headers.

Apply this diff:

+const DEFAULT_HEADERS = {
+  'Content-Type': 'application/json',
+  'Accept': '*/*'
+} as const;
+
+const EXCLUDED_HEADERS = [
+  'host',
+  'connection',
+  'origin',
+  'content-type',
+  'content-length'
+] as const;
+
-const requestHeaders: Record<string, string> = {
-  "Content-Type": "application/json",
-  "Accept": "*/*",
-};
+const requestHeaders: Record<string, string> = { ...DEFAULT_HEADERS };

 // Copy other relevant headers, but skip problematic ones
 for (const [key, value] of Object.entries(request.headers)) {
-  if (
-    !["host", "connection", "origin", "content-type", "content-length"]
-      .includes(key.toLowerCase())
-  ) {
+  if (!(EXCLUDED_HEADERS as readonly string[]).includes(key.toLowerCase())) {
     requestHeaders[key] = value as string;
   }
 }
docs/Changelog.md (1)

9-9: Enhance changelog entry with more details.

The changelog entry could be more informative about the proxy feature's benefits and configuration.

Apply this diff:

-Add experimental support for proxying all LLM calls through the silverbullet server
+Add experimental support for proxying LLM calls through the silverbullet server:
+- Configure proxy support per model using `proxyOnServer` option
+- Improves security by keeping API keys on the server
+- Enables better request management and monitoring
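For context, a per-model configuration using this option might look roughly like the following Space Config fragment. The `ai.textModels` key, `name`, `provider`, `baseUrl`, and `proxyOnServer` all appear in this PR; the specific model entries and provider string values are illustrative assumptions:

```yaml
ai:
  textModels:
    - name: ollama-remote
      provider: ollama
      baseUrl: http://localhost:11434/v1
      # Route this model's calls through the silverbullet server
      proxyOnServer: true
    - name: gpt-4o
      provider: openai
      # Call the provider directly from the client
      proxyOnServer: false
```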
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1e471f3 and cce75cc.

⛔ Files ignored due to path filters (1)
  • silverbullet-ai.plug.yaml is excluded by none and included by none
📒 Files selected for processing (7)
  • docs/Changelog.md (1 hunks)
  • src/interfaces/EmbeddingProvider.ts (2 hunks)
  • src/interfaces/Provider.ts (2 hunks)
  • src/providers/gemini.ts (4 hunks)
  • src/providers/ollama.ts (4 hunks)
  • src/providers/openai.ts (6 hunks)
  • src/proxyHandler.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/interfaces/EmbeddingProvider.ts
🧰 Additional context used
📓 Path-based instructions (6)
docs/Changelog.md (1)

Pattern **/*.md: Review these markdown files. They are written for a markdown note taking tool called Silverbullet, and may contain extra syntax not typically found in markdown.

Ensure any documentation makes sense and is good.

src/proxyHandler.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/providers/gemini.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/interfaces/Provider.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/providers/openai.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

src/providers/ollama.ts (1)

Pattern **/*.ts: Review these typescript files for best practices, readability, and DRY.

🔇 Additional comments (9)
src/providers/openai.ts (2)

61-61: Fix inconsistent quote usage in API endpoint strings.

The API endpoint strings use single quotes, while the rest of the codebase (and `deno fmt`) uses double quotes; this inconsistency is causing pipeline failures.

Apply this diff to standardize quote usage:

-      const sseUrl = this.getUrl('chat/completions');
+      const sseUrl = this.getUrl("chat/completions");

-        this.getUrl('models'),
+        this.getUrl("models"),

-        this.getUrl('chat/completions'),
+        this.getUrl("chat/completions"),

-      this.getUrl('embeddings'),
+      this.getUrl("embeddings"),
Also applies to: 138-138, 176-176, 235-235


208-214: LGTM! Constructor changes look good.

The constructor signature changes are well-structured:

  • Parameters align with the parent class
  • Default value for requireAuth is appropriate
  • Super constructor call is correct
src/providers/ollama.ts (3)

61-61: Fix string formatting inconsistencies.

The URL construction uses single quotes instead of double quotes.

Apply this diff to fix the formatting:

-        this.getUrl('api/tags').replace(/\/v1\/?/, ''),
+        this.getUrl("api/tags").replace(/\/v1\/?/, ""),

89-95: LGTM! Constructor changes are well-implemented.

The updated constructor signature correctly implements the parent class requirements and maintains consistency with the rest of the codebase.


116-116: Fix string formatting inconsistencies.

The URL construction uses single quotes instead of double quotes.

Apply this diff to fix the formatting:

-      this.getUrl('api/embeddings'),
+      this.getUrl("api/embeddings"),
src/interfaces/Provider.ts (3)

30-34: LGTM! Properties are well-structured.

The addition of proxyOnServer aligns with the proxy feature implementation.


37-48: LGTM! Constructor changes are appropriate.

The constructor correctly handles the new proxyOnServer parameter with a default value of false.


12-12: Verify the impact of renaming name to fullName.

The property rename is a breaking change that requires updates in all implementing classes.

Run this script to find affected implementations:

✅ Verification successful

The interface change aligns with existing implementation patterns

The rename from name to fullName in Provider.ts is safe because:

  • Abstract implementations already use fullName internally while accepting name in their constructors
  • No direct access to the .name property was found in the implementing classes
  • This change brings consistency with EmbeddingProviderInterface which already uses fullName
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Find all implementations referencing the old 'name' property
rg -l "implements.*ProviderInterface" | xargs rg "\.name\b"

Length of output: 61


Script:

#!/bin/bash
# Find class implementations of ProviderInterface
ast-grep --pattern 'class $_ implements ProviderInterface'

# Also search for possible interface implementations with different syntax
rg -l "ProviderInterface"

# Search for any usage of 'name' property in TypeScript files
fd -e ts -x rg -l "\.name\b" {}

Length of output: 404


Script:

#!/bin/bash
# Check interface inheritance
cat src/interfaces/ImageProvider.ts src/interfaces/EmbeddingProvider.ts

# Check actual usage of name in potentially affected files
rg -B2 -A2 "\.name\b" src/providers/ollama.ts src/init.ts

Length of output: 4450

src/proxyHandler.ts (1)

164-193: 🛠️ Refactor suggestion

Refactor response handling.

Extract response handling logic into a separate function and improve error handling.

Apply this diff:

+async function handleResponse(response: Response): Promise<EndpointResponse> {
+  if (!response.ok) {
+    const status = response.status;
+    return {
+      status,
+      headers: Object.fromEntries(response.headers.entries()),
+      body: 'An error occurred while processing your request.'
+    };
+  }
+
+  const responseBody = await response.json().catch(async () => {
+    console.log("JSON parsing failed, falling back to text");
+    return await response.text();
+  });
+
+  return {
+    status: response.status,
+    headers: Object.fromEntries(response.headers.entries()),
+    body: responseBody
+  };
+}
+
-// For non-streaming responses, return EndpointResponse
-const responseBody = await response.json().catch(async () => {
-  console.log("proxyHandler JSON parsing failed, falling back to text");
-  return await response.text();
-});
-
-return {
-  status: response.status,
-  headers: Object.fromEntries(response.headers.entries()),
-  body: responseBody,
-};
+return await handleResponse(response);

Likely invalid or redundant comment.

Comment on lines +170 to +172
this.getUrl(
`v1beta/models/${this.modelName}:generateContent?key=${this.apiKey}`,
),
🛠️ Refactor suggestion

Enhance error handling and add type safety.

While the URL handling change is good, the error handling and type safety could be improved.

+interface GeminiResponse {
+  candidates: Array<{
+    content: {
+      parts: Array<{
+        text: string;
+      }>;
+    };
+  }>;
+}

 async nonStreamingChat(messages: Array<ChatMessage>): Promise<string> {
   const payloadContents: GeminiChatContent[] = this.mapRolesForGemini(messages);
   const response = await nativeFetch(
     this.getUrl(`v1beta/models/${this.modelName}:generateContent?key=${this.apiKey}`),
     {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({ contents: payloadContents }),
     },
   );

   if (!response.ok) {
-    throw new Error(`HTTP error! status: ${response.status}`);
+    const errorData = await response.json().catch(() => null);
+    throw new Error(
+      `Gemini API error: ${response.status} ${response.statusText}${
+        errorData ? ` - ${JSON.stringify(errorData)}` : ''
+      }`
+    );
   }

-  const responseData = await response.json();
+  const responseData = (await response.json()) as GeminiResponse;
   return responseData.candidates[0].content.parts[0].text;
 }

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +96 to +98
const sseUrl = this.getUrl(
`v1beta/models/${this.modelName}:streamGenerateContent?key=${this.apiKey}&alt=sse`,
);
🛠️ Refactor suggestion

Add explicit return type and improve error handling.

The URL handling change looks good, but the method needs better type safety and error handling.

-streamChat(options: StreamChatOptions) {
+streamChat(options: StreamChatOptions): Promise<string> {
   const { messages, onDataReceived } = options;
-  try {
+  return new Promise((resolve, reject) => {
     const sseUrl = this.getUrl(
       `v1beta/models/${this.modelName}:streamGenerateContent?key=${this.apiKey}&alt=sse`,
     );

Also update the error handler:

     source.addEventListener("error", (e: sseEvent) => {
       console.error("SSE error:", e);
       source.close();
+      reject(new Error("SSE connection failed"));
     });

Committable suggestion skipped: line range outside the PR's diff.

export async function proxyHandler(
request: EndpointRequest,
): Promise<Response | EndpointResponse> {
// TODO: Add a token or something to restrict access to this endpoint
⚠️ Potential issue

Address the security TODO comment.

The endpoint needs proper authentication to prevent unauthorized access.

Would you like me to propose an implementation for token-based authentication?
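One possible shape for that check, as a sketch only — the `Authorization: Bearer` header, the shared-secret source, and the constant-time comparison are all illustrative assumptions, not something this PR implements:

```typescript
// Hypothetical sketch: shared-secret check for the proxy endpoint.
// The header name and token source are assumptions, not part of the plug today.
function isAuthorized(
  headers: Record<string, string>,
  expectedToken: string,
): boolean {
  const header = headers["authorization"] ?? "";
  const token = header.replace(/^Bearer\s+/i, "");
  if (token.length !== expectedToken.length) return false;
  // Constant-time comparison to avoid leaking the token prefix via timing
  let mismatch = 0;
  for (let i = 0; i < token.length; i++) {
    mismatch |= token.charCodeAt(i) ^ expectedToken.charCodeAt(i);
  }
  return mismatch === 0;
}
```

The proxyHandler could call this before looking up the model config and return a 401 response when it fails.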

Comment on lines +64 to +73
if (modelConfig.provider === Provider.Ollama) {
// For the list models endpoint, use /api/ without v1 prefix
if (remainingPath.includes("models") || remainingPath == "api/tags") {
baseUrl = baseUrl.replace(/v1\/?/, "");
} else {
// Everything else should use openai-compatible endpoints under /v1/
baseUrl = baseUrl.replace(/\/v1\/?$/, "") + "/v1/";
}
console.log("New baseUrl for ollama:", baseUrl);
}
🛠️ Refactor suggestion

Refactor Ollama URL handling.

The URL manipulation logic could be clearer and the TODO comment should be addressed.

Apply this diff:

+const OLLAMA_API_PREFIX = "/api";
+const OLLAMA_OPENAI_PREFIX = "/v1";
+
 if (modelConfig.provider === Provider.Ollama) {
-  // For the list models endpoint, use /api/ without v1 prefix
+  // Ollama uses different API paths for model listing vs OpenAI-compatible endpoints
   if (remainingPath.includes("models") || remainingPath == "api/tags") {
-    baseUrl = baseUrl.replace(/v1\/?/, "");
+    baseUrl = new URL(OLLAMA_API_PREFIX, baseUrl).toString();
   } else {
-    // Everything else should use openai-compatible endpoints under /v1/
-    baseUrl = baseUrl.replace(/\/v1\/?$/, "") + "/v1/";
+    baseUrl = new URL(OLLAMA_OPENAI_PREFIX, baseUrl).toString();
   }
   console.log("New baseUrl for ollama:", baseUrl);
 }

Comment on lines +23 to +48
if (!modelName) {
return new Response("Model name not specified", { status: 400 });
}

try {
const aiSettings = await system.getSpaceConfig("ai", {});
if (!aiSettings || !aiSettings.textModels) {
return new Response("No AI models configured", { status: 404 });
}

// Find the model configuration that matches the requested model name
const modelConfig = aiSettings.textModels.find(
(model: ModelConfig) => model.name === modelName,
);

if (!modelConfig) {
return new Response(`Model '${modelName}' not found in configuration`, {
status: 404,
});
}

// Check if proxy is enabled for this model
if (modelConfig.proxyOnServer === false) {
return new Response(`Proxy is disabled for model '${modelName}'`, {
status: 403,
});
🛠️ Refactor suggestion

Standardize error responses.

Create a helper function to generate consistent error responses.

Apply this diff:

+interface ErrorResponse {
+  error: string;
+  status: number;
+}
+
+function createErrorResponse(message: string, status: number): Response {
+  const error: ErrorResponse = {
+    error: message,
+    status
+  };
+  return new Response(JSON.stringify(error), { 
+    status,
+    headers: { 'Content-Type': 'application/json' }
+  });
+}
+
 if (!modelName) {
-  return new Response("Model name not specified", { status: 400 });
+  return createErrorResponse("Model name not specified", 400);
 }

 try {
   const aiSettings = await system.getSpaceConfig("ai", {});
   if (!aiSettings || !aiSettings.textModels) {
-    return new Response("No AI models configured", { status: 404 });
+    return createErrorResponse("No AI models configured", 404);
   }
