
Conversation

@lgrammel (Collaborator) commented Dec 8, 2025

Background

Providers have added extended usage information over the past year, and the current usage calculations are inconsistent (see #9921).

Summary

  • restructure LanguageModelV3Usage in the specification to include caching, reasoning, and raw usage
  • expose additional inputTokenDetails and outputTokenDetails in LanguageModelUsage
  • deprecate reasoningTokens and cachedInputTokens in LanguageModelUsage
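
To make the restructuring concrete, here is a minimal sketch of what the new usage shape could look like. The top-level names (inputTokenDetails, outputTokenDetails, the deprecated reasoningTokens and cachedInputTokens) come from the summary above; the field names inside the detail objects and the raw passthrough field are assumptions for illustration, not the exact specification.

```typescript
// Hypothetical sketch of the restructured usage shape; the names inside
// the detail objects (cachedTokens, reasoningTokens) and `raw` are
// assumptions, not taken from this PR.
interface LanguageModelUsageSketch {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  // New structured detail objects exposed alongside the totals:
  inputTokenDetails: {
    cachedTokens?: number; // assumed name for cache-read input tokens
  };
  outputTokenDetails: {
    reasoningTokens?: number; // assumed name for reasoning output tokens
  };
  // Raw, provider-specific usage payload, forwarded unmodified:
  raw?: Record<string, unknown>;
  /** @deprecated read from inputTokenDetails instead */
  cachedInputTokens?: number;
  /** @deprecated read from outputTokenDetails instead */
  reasoningTokens?: number;
}

// Example value a provider implementation might produce:
const usage: LanguageModelUsageSketch = {
  inputTokens: 120,
  outputTokens: 80,
  totalTokens: 200,
  inputTokenDetails: { cachedTokens: 40 },
  outputTokenDetails: { reasoningTokens: 25 },
  raw: { prompt_tokens: 120, completion_tokens: 80 },
};
```

Keeping the deprecated flat properties alongside the new detail objects lets existing call sites continue to compile while the codemod mentioned under Future Work is developed.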

Manual Verification

  • run examples/ai-core/src/generate-text/openai.ts
  • run examples/ai-core/src/stream-text/openai.ts

Tasks

  • spec
  • ai return
  • provider implementation
  • model v2 adapter generate
  • model v2 adapter stream
  • docs
  • migration guide
  • changeset
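
For the migration guide task above, the call-site change could look roughly like this. The before/after property paths follow the deprecations listed in the summary; the detail field name cachedTokens is an assumption for illustration.

```typescript
// Hypothetical before/after for a call site reading cached-token usage.
// The detail field name `cachedTokens` is assumed, not confirmed by this PR.
type OldUsage = { cachedInputTokens?: number };
type NewUsage = { inputTokenDetails: { cachedTokens?: number } };

// Before (deprecated flat property):
function cachedBefore(usage: OldUsage): number {
  return usage.cachedInputTokens ?? 0;
}

// After (structured detail object):
function cachedAfter(usage: NewUsage): number {
  return usage.inputTokenDetails.cachedTokens ?? 0;
}
```

A codemod for this rewrite is explicitly listed under Future Work below.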

Future Work

  • add codemod for rewriting access to deprecated LanguageModelUsage properties
  • ensure that raw Anthropic usage information is correctly forwarded
  • language model v2 mapping: map stream parts as needed
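
The v2 adapter tasks above amount to mapping the older flat usage fields into the new structured shape. A sketch of that mapping, assuming flat reasoningTokens and cachedInputTokens on the v2 side and the detail-object names used earlier (all field names inside the detail objects are assumptions):

```typescript
// Hypothetical v2 -> v3 usage mapping for the adapter tasks.
// Field names inside the detail objects are assumptions for illustration.
interface V2UsageSketch {
  inputTokens?: number;
  outputTokens?: number;
  totalTokens?: number;
  reasoningTokens?: number;
  cachedInputTokens?: number;
}

interface V3UsageSketch {
  inputTokens?: number;
  outputTokens?: number;
  totalTokens?: number;
  inputTokenDetails: { cachedTokens?: number };
  outputTokenDetails: { reasoningTokens?: number };
}

function mapV2UsageToV3(v2: V2UsageSketch): V3UsageSketch {
  return {
    inputTokens: v2.inputTokens,
    outputTokens: v2.outputTokens,
    totalTokens: v2.totalTokens,
    // Flat v2 fields move into the structured detail objects:
    inputTokenDetails: { cachedTokens: v2.cachedInputTokens },
    outputTokenDetails: { reasoningTokens: v2.reasoningTokens },
  };
}
```

The same mapping would apply per stream part in the streaming adapter, since usage can arrive incrementally there.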

Related Issues

Resolves #9921

@vercel-ai-sdk bot added the ai/core label Dec 8, 2025
@lgrammel changed the title to feat: extended token usage Dec 8, 2025
@lgrammel marked this pull request as ready for review December 9, 2025 13:23
@lgrammel merged commit 3bd2689 into main Dec 9, 2025
18 checks passed
@lgrammel deleted the lg/XY180bQG branch December 9, 2025 15:03
nicoalbanese added a commit that referenced this pull request Dec 9, 2025
#10975 broke the submodule action. This should fix it.
Successfully merging this pull request may close these issues:

  • V3 Spec Proposal: Token Usage Normalization for Vercel AI SDK