Conversation

@DS-Controller2
Contributor

## TLDR

This pull request introduces a dedicated `ModelScopeOpenAICompatibleProvider` to resolve a `400 Bad Request` error when making non-streaming API calls to ModelScope. The provider ensures API compatibility by removing the `stream_options` parameter from non-streaming requests, which was the root cause of the error. A corresponding unit test has been added to verify this behavior.

## Dive Deeper

The investigation into issue QwenLM#840 revealed that the ModelScope API endpoint returns a `400 Bad Request` if a request payload contains the `stream_options` parameter while the `stream` parameter is set to `false`. Our default OpenAI provider attached `stream_options` unconditionally, causing every non-streaming request to fail.
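Concretely, a payload of roughly this shape is what ModelScope rejected (the model id below is a hypothetical placeholder, not a value from this PR):

```typescript
// Illustrative request body: `stream: false` combined with
// `stream_options` is the pairing ModelScope answers with 400.
const rejectedPayload = {
  model: 'some-modelscope-model', // hypothetical placeholder
  messages: [{ role: 'user', content: 'What is the capital of France?' }],
  stream: false,
  stream_options: { include_usage: true },
};
```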

The fix introduces a new `ModelScopeOpenAICompatibleProvider` that overrides the `buildRequest` method: the override inspects the request payload and deletes the `stream_options` property whenever the request is not a streaming request. The `determineProvider` factory function in `packages/core/src/core/openaiContentGenerator/index.ts` has been updated to select this new provider when the configured `baseUrl` includes `'modelscope'`.
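For orientation, here is a minimal sketch of the shape of this change, using the `openai` package's request types. The base-class body, constructor, and exact method signatures below are illustrative stand-ins, not the code in this PR:

```typescript
import type OpenAI from 'openai';

type ChatRequest = OpenAI.Chat.ChatCompletionCreateParams;

// Stand-in for the existing default provider (assumed name and shape).
class DefaultOpenAICompatibleProvider {
  buildRequest(request: ChatRequest, _userPromptId: string): ChatRequest {
    // Suppose the default provider attaches `stream_options` unconditionally.
    return { ...request, stream_options: { include_usage: true } };
  }
}

class ModelScopeOpenAICompatibleProvider extends DefaultOpenAICompatibleProvider {
  override buildRequest(request: ChatRequest, userPromptId: string): ChatRequest {
    const payload = super.buildRequest(request, userPromptId);
    // ModelScope rejects `stream_options` on non-streaming requests,
    // so drop it whenever `stream` is not true.
    if (!payload.stream) {
      delete payload.stream_options;
    }
    return payload;
  }
}

// Mirrors the described `determineProvider` update: route to the
// ModelScope provider when the base URL looks like a ModelScope endpoint.
function determineProvider(baseUrl: string): DefaultOpenAICompatibleProvider {
  return baseUrl.includes('modelscope')
    ? new ModelScopeOpenAICompatibleProvider()
    : new DefaultOpenAICompatibleProvider();
}
```

Keeping the ModelScope-specific behavior in one subclass means other OpenAI-compatible endpoints continue to receive `stream_options` unchanged.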

This ensures that requests sent to ModelScope are compliant with their API specification, resolving the error while maintaining correct behavior for other OpenAI-compatible providers.

## Reviewer Test Plan

A reviewer can validate this change in two ways:

1.  **Unit Tests:** Run the newly added test suite for the provider, which validates the logic for removing `stream_options` (a sketch of what such a test asserts follows this list).
    ```bash
    npm run test -- packages/core/src/core/openaiContentGenerator/provider/modelscope.test.ts
    ```

2.  **Manual E2E Test (Requires ModelScope API Key):**
    *   Configure your environment to point to the ModelScope API:
        ```bash
        export OPENAI_BASE_URL="https://api-inference.modelscope.cn/v1"
        export OPENAI_API_KEY="<your-modelscope-api-key>"
        ```
    *   Run a simple non-interactive, non-streaming prompt:
        ```bash
        echo "What is the capital of France?" | qwen
        ```
    *   **Before this change:** The command fails with `[API Error: 400 status code (no body)]`.
    *   **After this change:** The command should succeed and return a valid response from the model.
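
For reference, the added suite's core assertion might look roughly like this (a sketch assuming the project's vitest setup; the import path, constructor arguments, and `buildRequest` signature are assumptions):

```typescript
import { describe, expect, it } from 'vitest';
// Import path assumed from the test command above.
import { ModelScopeOpenAICompatibleProvider } from './modelscope.js';

describe('ModelScopeOpenAICompatibleProvider', () => {
  it('strips stream_options from non-streaming requests', () => {
    const provider = new ModelScopeOpenAICompatibleProvider();
    const payload = provider.buildRequest(
      {
        model: 'some-model',
        messages: [{ role: 'user', content: 'hi' }],
        stream: false,
        stream_options: { include_usage: true },
      },
      'test-prompt-id',
    );
    // The provider must have removed the offending parameter.
    expect(payload.stream_options).toBeUndefined();
  });
});
```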

## Testing Matrix

I have validated the changes on the following platforms:

|          | 🍏  | 🪟  | 🐧  |
| -------- | --- | --- | --- |
| npm run  | ❓  | ❓  | ✅  |
| npx      | ❓  | ❓  | ❓  |
| Docker   | ❓  | ❓  | ❓  |
| Podman   | ❓  | -   | -   |
| Seatbelt | ❓  | -   | -   |

## Linked issues / bugs

Fixes QwenLM#840
@Mingholy
Collaborator

It seems a `settings.json` file was committed unexpectedly.
