Commit 2f5bab0

[REST] Endpoints start with / (#144)
1 parent: 40c6145

3 files changed: +6, -6 lines changed

1_developer/_2_rest/index.md

Lines changed: 4 additions & 4 deletions
@@ -57,14 +57,14 @@ The following endpoints are available in LM Studio's v1 REST API.
 </table>
 
 ## Inference endpoint comparison
-The table below compares the features of LM Studio's `api/v1/chat` endpoint with the OpenAI-compatible `v1/responses` and `v1/chat/completions` endpoints.
+The table below compares the features of LM Studio's `/api/v1/chat` endpoint with the OpenAI-compatible `/v1/responses` and `/v1/chat/completions` endpoints.
 
 <table class="flexible-cols">
 <thead>
 <tr>
 <th>Feature</th>
-<th><code>api/v1/chat</code></th>
-<th><code>v1/responses</code></th>
-<th><code>v1/chat/completions</code></th>
+<th><code>/api/v1/chat</code></th>
+<th><code>/v1/responses</code></th>
+<th><code>/v1/chat/completions</code></th>
 </tr>
 </thead>
 <tbody>
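
As a quick orientation, here is a minimal Python sketch of the two endpoint families the table compares, called against a local LM Studio server. It is not part of the commit: the base URL and the `/api/v1/chat` body fields (`model`, `input`) are assumptions for illustration, while the `/v1/chat/completions` call uses the familiar OpenAI-compatible shape.

```python
# Sketch only: demonstrates the corrected leading-slash paths joined to a base URL.
# The /api/v1/chat request body below is an assumption, not the documented schema.
import requests

BASE_URL = "http://localhost:1234"  # assumed default local LM Studio server

# OpenAI-compatible endpoint (well-known request shape)
resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    json={
        "model": "your-model",  # placeholder model identifier
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())

# LM Studio REST v1 endpoint (body fields are illustrative assumptions)
resp = requests.post(
    f"{BASE_URL}/api/v1/chat",
    json={"model": "your-model", "input": "Hello"},
)
print(resp.json())
```

One practical reason to document the paths with a leading `/` is that naive string concatenation with a base URL can otherwise produce a malformed address like `http://localhost:1234api/v1/chat`.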

1_developer/_2_rest/quickstart.md

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@ See the full [chat](/docs/developer/rest/chat) docs for more details.
 ## Use MCP servers via API
 
 
-Enable the model interact with ephemeral Model Context Protocol (MCP) servers in `api/v1/chat` by specifying servers in the `integrations` field.
+Enable the model interact with ephemeral Model Context Protocol (MCP) servers in `/api/v1/chat` by specifying servers in the `integrations` field.
 
 ```lms_code_snippet
 variants:
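
To make the `integrations` field mentioned in the changed line concrete, here is a hedged Python sketch. Only the `/api/v1/chat` path and the `integrations` field name come from the docs; the shape of each integration entry and the remaining body fields are hypothetical, so the `lms_code_snippet` variants in the quickstart remain the authoritative reference.

```python
# Hedged sketch: attaches a hypothetical ephemeral MCP server via "integrations".
# The entry shape and the other body fields are guesses for illustration only.
import requests

resp = requests.post(
    "http://localhost:1234/api/v1/chat",  # assumed local server address
    json={
        "model": "your-model",               # placeholder model identifier
        "input": "What tools do you have?",  # assumed field name
        "integrations": [
            {
                # hypothetical description of an ephemeral MCP server
                "type": "mcp",
                "url": "https://example.com/mcp",
            }
        ],
    },
)
print(resp.json())
```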

1_developer/_2_rest/streaming-events.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ index: 4
 
 Streaming events let you render chat responses incrementally over Server‑Sent Events (SSE). When you call `POST /api/v1/chat` with `stream: true`, the server emits a series of named events that you can consume. These events arrive in order and may include multiple deltas (for reasoning and message content), tool call boundaries and payloads, and any errors encountered. The stream always begins with `chat.start` and concludes with `chat.end`, which contains the aggregated result equivalent to a non‑streaming response.
 
-List of event types that can be sent in an `api/v1/chat` response stream:
+List of event types that can be sent in an `/api/v1/chat` response stream:
 - `chat.start`
 - `model_load.start`
 - `model_load.progress`
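
For readers new to SSE, the following Python sketch shows one way to consume the stream of named events listed above. The event names come from this file; the request body fields and the delta payload shapes are assumptions for illustration.

```python
# Minimal SSE reader sketch for POST /api/v1/chat with stream: true.
# Body fields ("model", "input") are assumed; event names follow the docs.
import json
import requests

with requests.post(
    "http://localhost:1234/api/v1/chat",  # assumed local server address
    json={"model": "your-model", "input": "Hi", "stream": True},
    stream=True,
) as resp:
    event_name = None
    for raw in resp.iter_lines(decode_unicode=True):
        if not raw:
            continue  # blank lines separate SSE events
        if raw.startswith("event:"):
            event_name = raw[len("event:"):].strip()
        elif raw.startswith("data:"):
            payload = raw[len("data:"):].strip()
            data = json.loads(payload) if payload else None  # payloads assumed JSON
            if event_name == "chat.end":
                # chat.end carries the aggregated result, equivalent to a
                # non-streaming response
                print(data)
                break
            print(event_name, data)
```

A real client would accumulate the reasoning and message deltas and handle tool-call and error events rather than printing raw payloads.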
