Changes from all commits (26 commits)
All commits authored by stainless-app[bot]:

- c5c8292 chore(internal): fix incremental formatting in some cases (Sep 27, 2025)
- 34da720 chore(internal): codegen related update (Sep 27, 2025)
- 252e0a2 chore(internal): codegen related update (Sep 27, 2025)
- b5432de feat(api): removing openai/v1 (Sep 27, 2025)
- a0b0fb7 feat(api): expires_after changes for /files (Sep 30, 2025)
- 367d775 feat(api)!: fixes to remove deprecated inference resources (Sep 30, 2025)
- f1cf9d6 feat(api): updating post /v1/files to have correct multipart/form-data (Sep 30, 2025)
- 17b9eb3 docs: update examples (Sep 30, 2025)
- a38809d codegen metadata (Sep 30, 2025)
- b0676c8 feat(api): SDKs for vector store file batches (Sep 30, 2025)
- 88731bf feat(api): SDKs for vector store file batches apis (Sep 30, 2025)
- 793e069 feat(api): moving { rerank, agents } to `client.alpha.` (Sep 30, 2025)
- a71b421 fix: fix stream event model reference (Sep 30, 2025)
- aec1d5f feat(api): move post_training and eval under alpha namespace (Sep 30, 2025)
- 25a0f10 feat(api): fix file batches SDK to list_files (Sep 30, 2025)
- 8910a12 feat(api)!: use input_schema instead of parameters for tools (Oct 1, 2025)
- 06f2bca feat(api): tool api (input_schema, etc.) changes (Oct 2, 2025)
- 5cee3d6 fix(api): fix the ToolDefParam updates (Oct 2, 2025)
- e4f7840 feat(api): fixes to URLs (Oct 2, 2025)
- 6acae91 fix(api): another fix to capture correct responses.create() params (Oct 2, 2025)
- a246793 chore(internal): use npm pack for build uploads (Oct 7, 2025)
- dcc7bb8 chore: extract some types in mcp docs (Oct 9, 2025)
- e0728d5 feat(api): several updates including Conversations, Responses changes… (Oct 10, 2025)
- b521df1 codegen metadata (Oct 10, 2025)
- 19535c2 feat(api): updates to vector_store, etc. (Oct 13, 2025)
- b982ff9 release: 0.3.0-alpha.1 (Oct 13, 2025)
.devcontainer/devcontainer.json (4 changes: 1 addition & 3 deletions)
@@ -9,9 +9,7 @@
   "postCreateCommand": "yarn install",
   "customizations": {
     "vscode": {
-      "extensions": [
-        "esbenp.prettier-vscode"
-      ]
+      "extensions": ["esbenp.prettier-vscode"]
     }
   }
 }
.gitignore (1 change: 1 addition & 0 deletions)
@@ -7,4 +7,5 @@ dist
 dist-deno
 /*.tgz
 .idea/
+.eslintcache
.release-please-manifest.json (2 changes: 1 addition & 1 deletion)
@@ -1,3 +1,3 @@
 {
-  ".": "0.2.23-alpha.1"
+  ".": "0.3.0-alpha.1"
 }
.stats.yml (6 changes: 3 additions & 3 deletions)
@@ -1,4 +1,4 @@
 configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-f252873ea1e1f38fd207331ef2621c511154d5be3f4076e59cc15754fc58eee4.yml
-openapi_spec_hash: 10cbb4337a06a9fdd7d08612dd6044c3
-config_hash: 0358112cc0f3d880b4d55debdbe1cfa3
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-15a929a0b71de779accc56bd09d1e5f580e216affdb408cf9890bc7a37847e9e.yml
+openapi_spec_hash: 5db9f7c7e80427cfa0298cbb01689559
+config_hash: 06758df5c4f261f9c97eafcef7e0028f
CHANGELOG.md (52 changes: 52 additions & 0 deletions)
@@ -1,5 +1,57 @@
 # Changelog
 
+## 0.3.0-alpha.1 (2025-10-13)
+
+Full Changelog: [v0.2.23-alpha.1...v0.3.0-alpha.1](https://github.com/llamastack/llama-stack-client-typescript/compare/v0.2.23-alpha.1...v0.3.0-alpha.1)
+
+### ⚠ BREAKING CHANGES
+
+* **api:** use input_schema instead of parameters for tools
+* **api:** fixes to remove deprecated inference resources
+
+### Features
+
+* **api:** expires_after changes for /files ([a0b0fb7](https://github.com/llamastack/llama-stack-client-typescript/commit/a0b0fb7aa74668f3f6996c178f9654723b8b0f22))
+* **api:** fix file batches SDK to list_files ([25a0f10](https://github.com/llamastack/llama-stack-client-typescript/commit/25a0f10cffa7de7f1457d65c97259911bc70ab0a))
+* **api:** fixes to remove deprecated inference resources ([367d775](https://github.com/llamastack/llama-stack-client-typescript/commit/367d775c3d5a2fd85bf138d2b175e91b7c185913))
+* **api:** fixes to URLs ([e4f7840](https://github.com/llamastack/llama-stack-client-typescript/commit/e4f78407f74f3ba7597de355c314e1932dd94761))
+* **api:** move post_training and eval under alpha namespace ([aec1d5f](https://github.com/llamastack/llama-stack-client-typescript/commit/aec1d5ff198473ba736bf543ad00c6626cab9b81))
+* **api:** moving { rerank, agents } to `client.alpha.` ([793e069](https://github.com/llamastack/llama-stack-client-typescript/commit/793e0694d75c2af4535bf991d5858cd1f21300b4))
+* **api:** removing openai/v1 ([b5432de](https://github.com/llamastack/llama-stack-client-typescript/commit/b5432de2ad56ff0d2fd5a5b8e1755b5237616b60))
+* **api:** SDKs for vector store file batches ([b0676c8](https://github.com/llamastack/llama-stack-client-typescript/commit/b0676c837bbd835276fea3fe12f435afdbb75ef7))
+* **api:** SDKs for vector store file batches apis ([88731bf](https://github.com/llamastack/llama-stack-client-typescript/commit/88731bfecd6f548ae79cbe2a1125620e488c42a3))
+* **api:** several updates including Conversations, Responses changes, etc. ([e0728d5](https://github.com/llamastack/llama-stack-client-typescript/commit/e0728d5dd59be8723d9f967d6164351eb05528d1))
+* **api:** tool api (input_schema, etc.) changes ([06f2bca](https://github.com/llamastack/llama-stack-client-typescript/commit/06f2bcaf0df2e5d462cbe2d9ef3704ab0cfe9248))
+* **api:** updates to vector_store, etc. ([19535c2](https://github.com/llamastack/llama-stack-client-typescript/commit/19535c27147bf6f6861b807d9eeee471b5625148))
+* **api:** updating post /v1/files to have correct multipart/form-data ([f1cf9d6](https://github.com/llamastack/llama-stack-client-typescript/commit/f1cf9d68b6b2569dfb5ea3e2d2c33eff1a832e47))
+* **api:** use input_schema instead of parameters for tools ([8910a12](https://github.com/llamastack/llama-stack-client-typescript/commit/8910a121146aeddcb8f400101e6a2232245097e0))
+
+
+### Bug Fixes
+
+* **api:** another fix to capture correct responses.create() params ([6acae91](https://github.com/llamastack/llama-stack-client-typescript/commit/6acae910db289080e8f52864f1bdf6d7951d1c3b))
+* **api:** fix the ToolDefParam updates ([5cee3d6](https://github.com/llamastack/llama-stack-client-typescript/commit/5cee3d69650a4c827e12fc046c1d2ec3b2fa9126))
+* fix stream event model reference ([a71b421](https://github.com/llamastack/llama-stack-client-typescript/commit/a71b421152a609e49e76d01c6e4dd46eb3dbfae0))
+
+
+### Chores
+
+* extract some types in mcp docs ([dcc7bb8](https://github.com/llamastack/llama-stack-client-typescript/commit/dcc7bb8b4d940982c2e9c6d1a541636e99fdc5ff))
+* **internal:** codegen related update ([252e0a2](https://github.com/llamastack/llama-stack-client-typescript/commit/252e0a2a38bd8aedab91b401c440a9b10c056cec))
+* **internal:** codegen related update ([34da720](https://github.com/llamastack/llama-stack-client-typescript/commit/34da720c34c35dafb38775243d28dfbdce2497db))
+* **internal:** fix incremental formatting in some cases ([c5c8292](https://github.com/llamastack/llama-stack-client-typescript/commit/c5c8292b631c678efff5498bbab9f5a43bee50b6))
+* **internal:** use npm pack for build uploads ([a246793](https://github.com/llamastack/llama-stack-client-typescript/commit/a24679300cff93fea8ad4bc85e549ecc88198d58))
+
+
+### Documentation
+
+* update examples ([17b9eb3](https://github.com/llamastack/llama-stack-client-typescript/commit/17b9eb3c40957b63d2a71f7fc21944abcc720d80))
+
+
+### Build System
+
+* Bump version to 0.2.23 ([16e05ed](https://github.com/llamastack/llama-stack-client-typescript/commit/16e05ed9798233375e19098992632d223c3f5d8d))
+
 ## 0.2.23-alpha.1 (2025-09-26)
 
 Full Changelog: [v0.2.19-alpha.1...v0.2.23-alpha.1](https://github.com/llamastack/llama-stack-client-typescript/compare/v0.2.19-alpha.1...v0.2.23-alpha.1)
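Taken together, the first breaking change above amounts to a rename in how tools are declared. The sketch below is illustrative only: the `input_schema` key and the JSON Schema content come from the changelog entries, while the surrounding tool shape (`name`, `description`) is an assumption, not taken from this diff.

```ts
// Hypothetical tool definition after the rename. Only the move from
// `parameters` to `input_schema` is confirmed by this changelog; the
// rest of the object shape is assumed for the example.
const webSearchTool = {
  name: 'web_search',
  description: 'Search the web for a given query',
  // Before this release this object was passed as `parameters`;
  // it is now a JSON Schema document under `input_schema`.
  input_schema: {
    type: 'object',
    properties: {
      query: { type: 'string', description: 'The search query' },
    },
    required: ['query'],
  },
};
```

The second breaking change, removal of the deprecated inference resources, is visible directly in the README diff below, where `client.inference.chatCompletion(...)` becomes `client.chat.completions.create(...)`.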
README.md (36 changes: 18 additions & 18 deletions)
@@ -41,13 +41,13 @@ import LlamaStackClient from 'llama-stack-client';
 
 const client = new LlamaStackClient();
 
-const stream = await client.inference.chatCompletion({
+const stream = await client.chat.completions.create({
   messages: [{ content: 'string', role: 'user' }],
-  model_id: 'model_id',
+  model: 'model',
   stream: true,
 });
-for await (const chatCompletionResponseStreamChunk of stream) {
-  console.log(chatCompletionResponseStreamChunk.completion_message);
+for await (const chatCompletionChunk of stream) {
+  console.log(chatCompletionChunk);
 }
 ```
 
@@ -64,11 +64,11 @@ import LlamaStackClient from 'llama-stack-client';
 
 const client = new LlamaStackClient();
 
-const params: LlamaStackClient.InferenceChatCompletionParams = {
+const params: LlamaStackClient.Chat.CompletionCreateParams = {
   messages: [{ content: 'string', role: 'user' }],
-  model_id: 'model_id',
+  model: 'model',
 };
-const chatCompletionResponse: LlamaStackClient.ChatCompletionResponse = await client.inference.chatCompletion(
+const completion: LlamaStackClient.Chat.CompletionCreateResponse = await client.chat.completions.create(
   params,
 );
 ```
@@ -113,8 +113,8 @@ a subclass of `APIError` will be thrown:
 
 <!-- prettier-ignore -->
 ```ts
-const chatCompletionResponse = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const completion = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
   .catch(async (err) => {
     if (err instanceof LlamaStackClient.APIError) {
       console.log(err.status); // 400
@@ -155,7 +155,7 @@ const client = new LlamaStackClient({
 });
 
 // Or, configure per-request:
-await client.inference.chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' }, {
+await client.chat.completions.create({ messages: [{ content: 'string', role: 'user' }], model: 'model' }, {
   maxRetries: 5,
 });
 ```
@@ -172,7 +172,7 @@ const client = new LlamaStackClient({
 });
 
 // Override per-request:
-await client.inference.chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' }, {
+await client.chat.completions.create({ messages: [{ content: 'string', role: 'user' }], model: 'model' }, {
   timeout: 5 * 1000,
 });
 ```
@@ -193,17 +193,17 @@ You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data
 ```ts
 const client = new LlamaStackClient();
 
-const response = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const response = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
   .asResponse();
 console.log(response.headers.get('X-My-Header'));
 console.log(response.statusText); // access the underlying Response object
 
-const { data: chatCompletionResponse, response: raw } = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const { data: completion, response: raw } = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
   .withResponse();
 console.log(raw.headers.get('X-My-Header'));
-console.log(chatCompletionResponse.completion_message);
+console.log(completion);
 ```
 
 ### Making custom/undocumented requests
@@ -307,8 +307,8 @@ const client = new LlamaStackClient({
 });
 
 // Override per-request:
-await client.inference.chatCompletion(
-  { messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' },
+await client.chat.completions.create(
+  { messages: [{ content: 'string', role: 'user' }], model: 'model' },
   {
     httpAgent: new http.Agent({ keepAlive: false }),
   },
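Two of the release's other API changes are not covered by the README diff above: the new `expires_after` parameter on file uploads, and the move of { rerank, agents } plus post_training and eval under `client.alpha.`. A minimal sketch follows, assuming `expires_after` keeps an OpenAI-compatible `{ anchor, seconds }` shape and that resource method names survive the namespace move unchanged; none of these exact calls appear in this diff.

```ts
import fs from 'node:fs';
import LlamaStackClient from 'llama-stack-client';

const client = new LlamaStackClient();

// Assumed shape: `expires_after` mirrors the OpenAI /v1/files parameter,
// here expiring the uploaded file seven days after creation.
const file = await client.files.create({
  file: fs.createReadStream('dataset.jsonl'),
  purpose: 'batch',
  expires_after: { anchor: 'created_at', seconds: 7 * 24 * 60 * 60 },
});
console.log(file.id);

// Assumed namespace move: agents (alongside rerank, post_training and
// eval) are now reached through `client.alpha.` rather than the top level.
const agents = await client.alpha.agents.list();
console.log(agents);
```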