Merged
Changes from all commits (44 commits):
- d23c909 "1" (lgrammel, Dec 8, 2025)
- b32b10f "p 1" (lgrammel, Dec 8, 2025)
- 14d261c "p xai" (lgrammel, Dec 8, 2025)
- 443a21b "Merge branch 'main' into lg/XY180bQG" (lgrammel, Dec 8, 2025)
- 4b4426a "p groq" (lgrammel, Dec 8, 2025)
- 8e37d57 "p deepseek" (lgrammel, Dec 8, 2025)
- 297829f "p perplexity" (lgrammel, Dec 8, 2025)
- a39b0aa "f usage" (lgrammel, Dec 8, 2025)
- 3243d21 "x test" (lgrammel, Dec 8, 2025)
- 80a8fd4 "x core" (lgrammel, Dec 8, 2025)
- 9c1b782 "x core" (lgrammel, Dec 8, 2025)
- cdd4289 "f core" (lgrammel, Dec 8, 2025)
- 000a746 "google usage" (aayush-kapoor, Dec 8, 2025)
- 8f809ad "pretty" (aayush-kapoor, Dec 8, 2025)
- e18e23e "anthropic usage" (aayush-kapoor, Dec 8, 2025)
- 1227c0e "U - openai chat usage" (aayush-kapoor, Dec 8, 2025)
- a1f9ae7 "U - openai completion usage" (aayush-kapoor, Dec 8, 2025)
- a6eeae0 "U - openai responses usage" (aayush-kapoor, Dec 8, 2025)
- ac0d948 "rsc fix" (aayush-kapoor, Dec 8, 2025)
- 418b82e "bedrock usage" (aayush-kapoor, Dec 8, 2025)
- 44a0b6c "cohere usage" (aayush-kapoor, Dec 8, 2025)
- 09e3685 "huggingface usage" (aayush-kapoor, Dec 8, 2025)
- d161754 "mistral usage" (aayush-kapoor, Dec 8, 2025)
- 9efd807 "type check fixes" (aayush-kapoor, Dec 8, 2025)
- f992820 "tc fixes" (aayush-kapoor, Dec 8, 2025)
- 9285d32 "updated tests and snapshots" (aayush-kapoor, Dec 8, 2025)
- 8d42543 "revert rsc fix" (aayush-kapoor, Dec 8, 2025)
- c55001a "doGenerate transform added" (aayush-kapoor, Dec 8, 2025)
- 5128e47 "fix for type-check error in rsc" (aayush-kapoor, Dec 8, 2025)
- 43c0a46 "Merge branch 'main' into lg/XY180bQG" (aayush-kapoor, Dec 8, 2025)
- e20f636 "cs" (lgrammel, Dec 9, 2025)
- 0e5b13f "push" (lgrammel, Dec 9, 2025)
- 07eee4f "d migration guide" (lgrammel, Dec 9, 2025)
- 084f816 "rm redundant data" (lgrammel, Dec 9, 2025)
- b0c6068 "d" (lgrammel, Dec 9, 2025)
- b938982 "p" (lgrammel, Dec 9, 2025)
- b4fad60 "e" (lgrammel, Dec 9, 2025)
- 3f89b15 "t" (lgrammel, Dec 9, 2025)
- ded158e "strea" (lgrammel, Dec 9, 2025)
- 905500a "c" (lgrammel, Dec 9, 2025)
- b84b64e "x" (lgrammel, Dec 9, 2025)
- dfe4aa9 "i" (lgrammel, Dec 9, 2025)
- 8429372 "d" (lgrammel, Dec 9, 2025)
- 38c6ad5 "Merge branch 'main' into lg/XY180bQG" (lgrammel, Dec 9, 2025)
21 changes: 21 additions & 0 deletions .changeset/large-vans-applaud.md
@@ -0,0 +1,21 @@
+---
+'@ai-sdk/openai-compatible': patch
+'@ai-sdk/amazon-bedrock': patch
+'@ai-sdk/google-vertex': patch
+'@ai-sdk/huggingface': patch
+'@ai-sdk/perplexity': patch
+'@ai-sdk/anthropic': patch
+'@ai-sdk/deepseek': patch
+'@ai-sdk/provider': patch
+'@ai-sdk/mistral': patch
+'@ai-sdk/cohere': patch
+'@ai-sdk/google': patch
+'@ai-sdk/openai': patch
+'@ai-sdk/azure': patch
+'@ai-sdk/groq': patch
+'@ai-sdk/rsc': patch
+'@ai-sdk/xai': patch
+'ai': patch
+---
+
+feat: extended token usage
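Reviewer note: the sketch below approximates the extended usage shape this changeset introduces, assembled from the reference-doc changes further down. The authoritative type definitions live in `@ai-sdk/provider`; the interface names, optionality, and the `raw` typing here are illustrative, not canonical.

```ts
// Approximate shape of the extended usage data (assembled from the docs
// in this PR; not the canonical @ai-sdk/provider definitions).
interface LanguageModelInputTokenDetails {
  noCacheTokens?: number; // non-cached input (prompt) tokens
  cacheReadTokens?: number; // cached input (prompt) tokens read
  cacheWriteTokens?: number; // cached input (prompt) tokens written
}

interface LanguageModelOutputTokenDetails {
  textTokens?: number; // text output tokens
  reasoningTokens?: number; // reasoning output tokens
}

interface LanguageModelUsage {
  inputTokens?: number; // total input (prompt) tokens
  inputTokenDetails: LanguageModelInputTokenDetails;
  outputTokens?: number; // total output (completion) tokens
  outputTokenDetails: LanguageModelOutputTokenDetails;
  totalTokens?: number; // total tokens used
  raw?: Record<string, unknown>; // provider's original usage payload
}
```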
10 changes: 5 additions & 5 deletions content/cookbook/05-node/45-stream-object-record-token-usage.mdx
@@ -39,7 +39,7 @@ const result = streamObject({
 The [`streamObject`](/docs/reference/ai-sdk-core/stream-object) result contains a `usage` promise that resolves to the total token usage.
 
 ```ts file='index.ts' highlight={"29,32"}
-import { streamObject, TokenUsage } from 'ai';
+import { streamObject, LanguageModelUsage } from 'ai';
 import { z } from 'zod';
 
 const result = streamObject({
@@ -55,21 +55,21 @@ const result = streamObject({
 });
 
 // your custom function to record token usage:
-function recordTokenUsage({
+function recordUsage({
   inputTokens,
   outputTokens,
   totalTokens,
-}: TokenUsage) {
+}: LanguageModelUsage) {
   console.log('Prompt tokens:', inputTokens);
   console.log('Completion tokens:', outputTokens);
   console.log('Total tokens:', totalTokens);
 }
 
 // use as promise:
-result.usage.then(recordTokenUsage);
+result.usage.then(recordUsage);
 
 // use with async/await:
-recordTokenUsage(await result.usage);
+recordUsage(await result.usage);
 
 // note: the stream needs to be consumed because of backpressure
 for await (const partialObject of result.partialObjectStream) {
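Reviewer note: if the cookbook recorder should also surface the new detail fields from this PR, a sketch along these lines would work. It reads the detail objects defensively, since every count is `number | undefined` and providers report different subsets.

```ts
import type { LanguageModelUsage } from 'ai';

// Sketch: extend the cookbook's recorder with the new detail fields.
function recordUsage(usage: LanguageModelUsage) {
  console.log('Input tokens:', usage.inputTokens);
  console.log('  cache reads:', usage.inputTokenDetails?.cacheReadTokens);
  console.log('Output tokens:', usage.outputTokens);
  console.log('  reasoning:', usage.outputTokenDetails?.reasoningTokens);
  console.log('Total tokens:', usage.totalTokens);
}
```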
307 changes: 251 additions & 56 deletions content/docs/07-reference/01-ai-sdk-core/01-generate-text.mdx

Large diffs are not rendered by default.
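The generate-text reference diff is not rendered above. Based on the changeset and the generate-object/stream-object docs below, reading the extended usage from `generateText` would presumably look like this; the model id is illustrative and any supported provider model works:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text, usage } = await generateText({
  model: openai('gpt-4o'), // illustrative model choice
  prompt: 'Write a haiku about token accounting.',
});

console.log(text);
console.log('total tokens:', usage.totalTokens);
// New detail fields from this PR (values may be undefined per provider):
console.log('cache reads:', usage.inputTokenDetails?.cacheReadTokens);
console.log('reasoning:', usage.outputTokenDetails?.reasoningTokens);
```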

406 changes: 316 additions & 90 deletions content/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx

Large diffs are not rendered by default.
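Likewise for the unrendered stream-text diff: with `streamText`, usage arrives once the stream finishes, via the `onFinish` callback or the `usage` promise. A hedged sketch, assuming the extended fields surface there as the docs below suggest:

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // illustrative model choice
  prompt: 'Summarize the extended token usage feature.',
  onFinish({ usage }) {
    // assuming the extended fields are populated here as in the docs below
    console.log('reasoning:', usage.outputTokenDetails?.reasoningTokens);
  },
});

// the stream must be consumed for usage to resolve
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
console.log('total tokens:', (await result.usage).totalTokens);
```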

61 changes: 50 additions & 11 deletions content/docs/07-reference/01-ai-sdk-core/03-generate-object.mdx
@@ -595,30 +595,69 @@ To see `generateObject` in action, check out the [additional examples](#more-exa
     {
       name: 'inputTokens',
       type: 'number | undefined',
-      description: 'The number of input (prompt) tokens used.',
+      description: 'The total number of input (prompt) tokens used.',
     },
+    {
+      name: 'inputTokenDetails',
+      type: 'LanguageModelInputTokenDetails',
+      description:
+        'Detailed information about the input (prompt) tokens. See also: cached tokens and non-cached tokens.',
+      properties: [
+        {
+          name: 'noCacheTokens',
+          type: 'number | undefined',
+          description:
+            'The number of non-cached input (prompt) tokens used.',
+        },
+        {
+          name: 'cacheReadTokens',
+          type: 'number | undefined',
+          description:
+            'The number of cached input (prompt) tokens read.',
+        },
+        {
+          name: 'cacheWriteTokens',
+          type: 'number | undefined',
+          description:
+            'The number of cached input (prompt) tokens written.',
+        },
+      ],
+    },
     {
       name: 'outputTokens',
       type: 'number | undefined',
-      description: 'The number of output (completion) tokens used.',
+      description:
+        'The number of total output (completion) tokens used.',
     },
     {
-      name: 'totalTokens',
-      type: 'number | undefined',
+      name: 'outputTokenDetails',
+      type: 'LanguageModelOutputTokenDetails',
       description:
-        'The total number of tokens as reported by the provider. This number might be different from the sum of inputTokens and outputTokens and e.g. include reasoning tokens or other overhead.',
+        'Detailed information about the output (completion) tokens.',
+      properties: [
+        {
+          name: 'textTokens',
+          type: 'number | undefined',
+          description: 'The number of text tokens used.',
+        },
+        {
+          name: 'reasoningTokens',
+          type: 'number | undefined',
+          description: 'The number of reasoning tokens used.',
+        },
+      ],
     },
     {
-      name: 'reasoningTokens',
+      name: 'totalTokens',
       type: 'number | undefined',
-      isOptional: true,
-      description: 'The number of reasoning tokens used.',
+      description: 'The total number of tokens used.',
     },
     {
-      name: 'cachedInputTokens',
-      type: 'number | undefined',
+      name: 'raw',
+      type: 'object | undefined',
       isOptional: true,
-      description: 'The number of cached input tokens.',
+      description:
+        "Raw usage information from the provider. This is the provider's original usage information and may include additional fields.",
     },
   ],
 },
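Putting the generate-object fields above into practice, here is a sketch of reading them at the call site. The model id is illustrative, and every count may be undefined where a provider does not report it:

```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object, usage } = await generateObject({
  model: openai('gpt-4o'), // illustrative model choice
  schema: z.object({ title: z.string() }),
  prompt: 'Suggest a title for a post about token usage.',
});

console.log(object.title);
console.log('non-cached input:', usage.inputTokenDetails?.noCacheTokens);
console.log('cache writes:', usage.inputTokenDetails?.cacheWriteTokens);
console.log('text output:', usage.outputTokenDetails?.textTokens);
console.log('raw provider usage:', usage.raw);
```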
127 changes: 99 additions & 28 deletions content/docs/07-reference/01-ai-sdk-core/04-stream-object.mdx
@@ -602,44 +602,76 @@ To see `streamObject` in action, check out the [additional examples](#more-examp
 type: 'OnFinishResult',
 parameters: [
   {
     name: 'usage',
     type: 'LanguageModelUsage',
     description: 'The token usage of the generated text.',
-    properties: [
+    parameters: [
       {
         type: 'LanguageModelUsage',
         parameters: [
           {
             name: 'inputTokens',
             type: 'number | undefined',
-            description: 'The number of input (prompt) tokens used.',
+            description:
+              'The total number of input (prompt) tokens used.',
           },
+          {
+            name: 'inputTokenDetails',
+            type: 'LanguageModelInputTokenDetails',
+            description:
+              'Detailed information about the input (prompt) tokens. See also: cached tokens and non-cached tokens.',
+            properties: [
+              {
+                name: 'noCacheTokens',
+                type: 'number | undefined',
+                description:
+                  'The number of non-cached input (prompt) tokens used.',
+              },
+              {
+                name: 'cacheReadTokens',
+                type: 'number | undefined',
+                description:
+                  'The number of cached input (prompt) tokens read.',
+              },
+              {
+                name: 'cacheWriteTokens',
+                type: 'number | undefined',
+                description:
+                  'The number of cached input (prompt) tokens written.',
+              },
+            ],
+          },
           {
             name: 'outputTokens',
             type: 'number | undefined',
             description:
-              'The number of output (completion) tokens used.',
+              'The number of total output (completion) tokens used.',
           },
           {
-            name: 'totalTokens',
-            type: 'number | undefined',
+            name: 'outputTokenDetails',
+            type: 'LanguageModelOutputTokenDetails',
             description:
-              'The total number of tokens as reported by the provider. This number might be different from the sum of inputTokens and outputTokens and e.g. include reasoning tokens or other overhead.',
+              'Detailed information about the output (completion) tokens.',
+            properties: [
+              {
+                name: 'textTokens',
+                type: 'number | undefined',
+                description: 'The number of text tokens used.',
+              },
+              {
+                name: 'reasoningTokens',
+                type: 'number | undefined',
+                description: 'The number of reasoning tokens used.',
+              },
+            ],
+          },
           {
-            name: 'reasoningTokens',
+            name: 'totalTokens',
             type: 'number | undefined',
-            isOptional: true,
-            description: 'The number of reasoning tokens used.',
+            description: 'The total number of tokens used.',
           },
           {
-            name: 'cachedInputTokens',
-            type: 'number | undefined',
+            name: 'raw',
+            type: 'object | undefined',
             isOptional: true,
-            description: 'The number of cached input tokens.',
+            description:
+              "Raw usage information from the provider. This is the provider's original usage information and may include additional fields.",
           },
         ],
       },
     ],
   },
   {
@@ -726,30 +758,69 @@ To see `streamObject` in action, check out the [additional examples](#more-examp
     {
       name: 'inputTokens',
       type: 'number | undefined',
-      description: 'The number of input (prompt) tokens used.',
+      description: 'The total number of input (prompt) tokens used.',
     },
+    {
+      name: 'inputTokenDetails',
+      type: 'LanguageModelInputTokenDetails',
+      description:
+        'Detailed information about the input (prompt) tokens. See also: cached tokens and non-cached tokens.',
+      properties: [
+        {
+          name: 'noCacheTokens',
+          type: 'number | undefined',
+          description:
+            'The number of non-cached input (prompt) tokens used.',
+        },
+        {
+          name: 'cacheReadTokens',
+          type: 'number | undefined',
+          description:
+            'The number of cached input (prompt) tokens read.',
+        },
+        {
+          name: 'cacheWriteTokens',
+          type: 'number | undefined',
+          description:
+            'The number of cached input (prompt) tokens written.',
+        },
+      ],
+    },
     {
       name: 'outputTokens',
       type: 'number | undefined',
-      description: 'The number of output (completion) tokens used.',
+      description:
+        'The number of total output (completion) tokens used.',
     },
     {
-      name: 'totalTokens',
-      type: 'number | undefined',
+      name: 'outputTokenDetails',
+      type: 'LanguageModelOutputTokenDetails',
       description:
-        'The total number of tokens as reported by the provider. This number might be different from the sum of inputTokens and outputTokens and e.g. include reasoning tokens or other overhead.',
+        'Detailed information about the output (completion) tokens.',
+      properties: [
+        {
+          name: 'textTokens',
+          type: 'number | undefined',
+          description: 'The number of text tokens used.',
+        },
+        {
+          name: 'reasoningTokens',
+          type: 'number | undefined',
+          description: 'The number of reasoning tokens used.',
+        },
+      ],
     },
     {
-      name: 'reasoningTokens',
+      name: 'totalTokens',
       type: 'number | undefined',
-      isOptional: true,
-      description: 'The number of reasoning tokens used.',
+      description: 'The total number of tokens used.',
    },
     {
-      name: 'cachedInputTokens',
-      type: 'number | undefined',
+      name: 'raw',
+      type: 'object | undefined',
       isOptional: true,
-      description: 'The number of cached input tokens.',
+      description:
+        "Raw usage information from the provider. This is the provider's original usage information and may include additional fields.",
     },
   ],
 },
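And the stream-object equivalent: the `onFinish` callback documented above receives the extended usage once the stream completes. A sketch under the same assumptions (illustrative model id, counts possibly undefined):

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamObject({
  model: openai('gpt-4o'), // illustrative model choice
  schema: z.object({ summary: z.string() }),
  prompt: 'Summarize extended token usage in one sentence.',
  onFinish({ usage }) {
    console.log('cache writes:', usage.inputTokenDetails?.cacheWriteTokens);
    console.log('total tokens:', usage.totalTokens);
  },
});

// consume the stream so that onFinish fires
for await (const partial of result.partialObjectStream) {
  console.log(partial.summary);
}
```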