
TokenMeter


Token counting, cost calculation, and usage tracking for LLM applications.

Features

  • Accurate Token Counting — Microsoft.ML.Tokenizers for precise counts (cl100k_base, p50k_base)
  • 12 Providers Built-in — OpenAI, Anthropic, Google, xAI, Azure, Mistral, DeepSeek, Amazon Nova, Cohere, Meta Llama, Perplexity, Qwen
  • Usage Tracking — Session-based tracking with statistics and cost aggregation
  • Thread-Safe — All components are designed for concurrent access
  • Extensible — Register custom pricing for any model or provider
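The thread-safety claim above can be exercised with a minimal sketch like the following, which records usage from several tasks at once (the model name and token counts are arbitrary, and the example assumes the package is installed as shown below):

```csharp
using TokenMeter;

// UsageTracker is documented as safe for concurrent access, so several
// tasks can record usage without external locking.
var tracker = new UsageTracker(CostCalculator.Default());

var tasks = Enumerable.Range(0, 10).Select(_ => Task.Run(() =>
    tracker.Record("gpt-4o-mini", inputTokens: 100, outputTokens: 50)));

await Task.WhenAll(tasks);

// All ten records should be aggregated in the session statistics.
Console.WriteLine(tracker.GetSessionStatistics().RequestCount);
```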

Installation

dotnet add package TokenMeter

If you only need the shared ITokenCounter interface (e.g., for cross-package interoperability without pulling in tokenizer dependencies):

dotnet add package TokenMeter.Abstractions

Quick Start

Token Counting

using TokenMeter;

var counter = TokenCounter.Default();

int tokens = counter.CountTokens("Hello, how are you today?");
// => 7

Cost Calculation

using TokenMeter;

var calculator = CostCalculator.Default();

decimal? cost = calculator.CalculateCost("gpt-4o", inputTokens: 1000, outputTokens: 500);
// => 0.007500

ModelPricing? pricing = calculator.GetPricing("claude-4-5-sonnet");
// pricing.InputPricePerMillion  => 3.00
// pricing.OutputPricePerMillion => 15.00

Usage Tracking

using TokenMeter;

var tracker = new UsageTracker(CostCalculator.Default());

tracker.Record("gpt-4o-mini", inputTokens: 500, outputTokens: 200);
tracker.Record("gemini-2.0-flash", inputTokens: 800, outputTokens: 350);

UsageStatistics stats = tracker.GetSessionStatistics();
// stats.RequestCount      => 2
// stats.TotalInputTokens  => 1300
// stats.TotalOutputTokens => 550
// stats.TotalCost         => 0.000415

Supported Providers & Pricing

Prices are in USD per 1 million tokens. See docs/pricing-update-guide.md for update instructions.
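Per-million pricing means a request's cost is simply each token count scaled by its rate. A quick sketch of the arithmetic, using the GPT-4o rates from the table below and the same numbers as the quick-start example:

```csharp
// Cost of one request under per-million-token pricing.
decimal inputPrice = 2.50m, outputPrice = 10.00m;   // GPT-4o, per 1M tokens
int inputTokens = 1000, outputTokens = 500;

decimal cost = inputTokens * inputPrice / 1_000_000m
             + outputTokens * outputPrice / 1_000_000m;
// 0.0025 + 0.005 = 0.0075, matching the Quick Start result above
```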

OpenAI

| Model | Input | Output | Context |
|---|---|---|---|
| GPT-4.1 | $2.00 | $8.00 | 1M |
| GPT-4.1 Mini | $0.40 | $1.60 | 1M |
| GPT-4.1 Nano | $0.10 | $0.40 | 1M |
| GPT-4o | $2.50 | $10.00 | 128K |
| GPT-4o-mini | $0.15 | $0.60 | 128K |
| o3-pro | $20.00 | $80.00 | 200K |
| o3 | $2.00 | $8.00 | 200K |
| o3-mini | $1.10 | $4.40 | 200K |
| o4-mini | $1.10 | $4.40 | 200K |
| o1 | $15.00 | $60.00 | 200K |

Anthropic

| Model | Input | Output | Context |
|---|---|---|---|
| Claude Opus 4.6 | $5.00 | $25.00 | 200K |
| Claude Opus 4.1 | $15.00 | $75.00 | 200K |
| Claude 4.5 Opus | $5.00 | $25.00 | 200K |
| Claude Opus 4 | $15.00 | $75.00 | 200K |
| Claude 4.5 Sonnet | $3.00 | $15.00 | 200K |
| Claude 4.5 Haiku | $1.00 | $5.00 | 200K |
| Claude Sonnet 4 | $3.00 | $15.00 | 200K |
| Claude Sonnet 3.7 | $3.00 | $15.00 | 200K |
| Claude 3.5 Sonnet | $3.00 | $15.00 | 200K |
| Claude 3.5 Haiku | $0.80 | $4.00 | 200K |
| Claude 3 Opus | $15.00 | $75.00 | 200K |
| Claude 3 Sonnet | $3.00 | $15.00 | 200K |
| Claude 3 Haiku | $0.25 | $1.25 | 200K |

Google

| Model | Input | Output | Context |
|---|---|---|---|
| Gemini 3 Pro Preview | $2.00 | $12.00 | 1M |
| Gemini 3 Flash Preview | $0.50 | $3.00 | 1M |
| Gemini 2.5 Pro | $1.25 | $10.00 | 1M |
| Gemini 2.5 Flash | $0.30 | $2.50 | 1M |
| Gemini 2.5 Flash-Lite | $0.10 | $0.40 | 1M |
| Gemini 2.0 Flash | $0.10 | $0.40 | 1M |
| Gemini 2.0 Flash-Lite | $0.075 | $0.30 | 1M |
| Gemini 1.5 Pro | $1.25 | $5.00 | 2M |
| Gemini 1.5 Flash | $0.075 | $0.30 | 1M |

xAI

| Model | Input | Output | Context |
|---|---|---|---|
| Grok 4.1 Fast Thinking | $0.20 | $0.50 | 2M |
| Grok 4.1 Fast | $0.20 | $0.50 | 2M |
| Grok 4 Fast Thinking | $0.20 | $0.50 | 2M |
| Grok 4 Fast | $0.20 | $0.50 | 2M |
| Grok 4 | $3.00 | $15.00 | 256K |
| Grok Code Fast | $0.20 | $1.50 | 256K |
| Grok 3 | $3.00 | $15.00 | 131K |
| Grok 3 Mini | $0.30 | $0.50 | 131K |

Azure OpenAI

| Model | Input | Output | Context |
|---|---|---|---|
| Azure GPT-4o | $2.50 | $10.00 | 128K |
| Azure GPT-4o-mini | $0.15 | $0.60 | 128K |
| Azure GPT-4.1 | $2.00 | $8.00 | 1M |
| Azure GPT-4.1 Mini | $0.40 | $1.60 | 1M |
| Azure GPT-4.1 Nano | $0.10 | $0.40 | 1M |
| Azure o3-pro | $20.00 | $80.00 | 200K |
| Azure o4-mini | $1.10 | $4.40 | 200K |

Mistral

| Model | Input | Output | Context |
|---|---|---|---|
| Mistral Large | $2.00 | $6.00 | 128K |
| Mistral Medium 3 | $0.40 | $2.00 | 128K |
| Mistral Small | $0.20 | $0.60 | 128K |
| Codestral | $0.30 | $0.90 | 256K |
| Devstral Small | $0.10 | $0.30 | 128K |
| Pixtral Large | $2.00 | $6.00 | 128K |

DeepSeek

| Model | Input | Output | Context |
|---|---|---|---|
| DeepSeek V3 | $0.28 | $0.42 | 128K |
| DeepSeek R1 | $0.28 | $0.42 | 128K |
| DeepSeek Coder | $0.14 | $0.28 | 128K |

Amazon Nova

| Model | Input | Output | Context |
|---|---|---|---|
| Nova Premier | $2.50 | $12.50 | 1M |
| Nova Pro | $0.80 | $3.20 | 300K |
| Nova Lite | $0.06 | $0.24 | 300K |
| Nova Micro | $0.035 | $0.14 | 128K |

Cohere

| Model | Input | Output | Context |
|---|---|---|---|
| Command A | $2.50 | $10.00 | 256K |
| Command R+ | $2.50 | $10.00 | 128K |
| Command R | $0.50 | $1.50 | 128K |
| Command R7B | $0.0375 | $0.15 | 128K |
| Command Light | $0.30 | $0.60 | 4K |

Meta Llama

| Model | Input | Output | Context |
|---|---|---|---|
| Llama 4 Maverick | $0.22 | $0.85 | 1M |
| Llama 4 Scout | $0.15 | $0.50 | 10M |

Perplexity

| Model | Input | Output | Context |
|---|---|---|---|
| Sonar Pro | $3.00 | $15.00 | 200K |
| Sonar Deep Research | $2.00 | $8.00 | 200K |
| Sonar | $1.00 | $1.00 | 128K |

Qwen

| Model | Input | Output | Context |
|---|---|---|---|
| Qwen Max | $1.20 | $6.00 | 128K |
| Qwen Plus | $0.20 | $1.00 | 128K |

Custom Pricing

Register custom pricing for local models or other providers:

var calculator = CostCalculator.Default();

calculator.RegisterPricing(new ModelPricing
{
    ModelId = "llama-3.2-70b",
    InputPricePerMillion = 0.50m,
    OutputPricePerMillion = 0.75m,
    Provider = "Local",
    DisplayName = "Llama 3.2 70B",
    ContextWindow = 128000
});

decimal? cost = calculator.CalculateCost("llama-3.2-70b", 1000, 500);

Query by Provider

// All models for a provider
var googleModels = ModelPricingData.GetByProvider("Google");

// All provider names
var providers = ModelPricingData.GetProviderNames();
// => ["OpenAI", "Anthropic", "Google", "xAI", "Azure", "Mistral", "DeepSeek", ...]

// Alias-aware lookup (exact → prefix → contains, longest-match wins)
ModelPricing? pricing = ModelPricingData.FindPricing("gpt-4o-2024-08-06");

// Last update date and staleness helpers
Console.WriteLine($"Pricing last updated: {ModelPricingData.LastUpdated}");
Console.WriteLine($"Pricing age: {ModelPricingData.PricingAgeDays} days");
bool isStale = ModelPricingData.IsPricingStale(maxAgeDays: 90);

API Reference

ITokenCounter

TokenMeter package (full interface with tokenizer):

public interface ITokenCounter : Abstractions.ITokenCounter
{
    int CountTokens(string text);
    int CountTokens(IEnumerable<string> texts);
    string ModelName { get; }
    bool IsApproximate(string modelId);
    bool SupportsModel(string modelId);
}

TokenMeter.Abstractions package (lightweight shared interface):

public interface ITokenCounter
{
    int Count(string text);
    int Count(IEnumerable<string> texts);
    bool SupportsModel(string modelId);
    bool IsApproximate(string modelId); // default: false
}
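The Abstractions interface lets other packages interoperate without referencing the tokenizer package. A minimal sketch of a custom implementation (the whitespace heuristic here is purely illustrative, not the algorithm TokenMeter itself uses):

```csharp
using TokenMeter.Abstractions;

// Illustrative only: a rough whitespace-based estimator.
public sealed class WordCountEstimator : ITokenCounter
{
    public int Count(string text) =>
        text.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;

    public int Count(IEnumerable<string> texts) => texts.Sum(Count);

    public bool SupportsModel(string modelId) => true;

    public bool IsApproximate(string modelId) => true; // estimates only
}
```

Any consumer that accepts the shared `ITokenCounter` can then take a `WordCountEstimator` without pulling in tokenizer dependencies.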

ICostCalculator

public interface ICostCalculator
{
    decimal? CalculateCost(string modelId, int inputTokens, int outputTokens);
    ModelPricing? GetPricing(string modelId);
    void RegisterPricing(ModelPricing pricing);
    IEnumerable<string> GetRegisteredModels();
}

CostCalculator provides two factory methods:

CostCalculator.Default()     // built-in pricing + custom overrides
CostCalculator.CustomOnly()  // custom-registered pricing only (no built-in data)
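CustomOnly() is useful when lookups should never fall back to built-in data, e.g. for self-hosted models. A sketch (the model id and prices below are made up):

```csharp
using TokenMeter;

// CustomOnly() starts empty, so lookups only ever hit what you register.
var calculator = CostCalculator.CustomOnly();

calculator.RegisterPricing(new ModelPricing
{
    ModelId = "my-local-model",   // hypothetical id
    InputPricePerMillion = 0.50m,
    OutputPricePerMillion = 0.75m
});

decimal? known   = calculator.CalculateCost("my-local-model", 1000, 500); // 0.000875
decimal? unknown = calculator.CalculateCost("gpt-4o", 1000, 500);         // null: no built-in data
```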

IUsageTracker

public interface IUsageTracker
{
    void Record(UsageRecord record);
    UsageRecord Record(string? modelId, int inputTokens, int outputTokens, string? sessionId = null);
    UsageStatistics GetSessionStatistics();
    UsageStatistics GetStatistics(DateTimeOffset startTime, DateTimeOffset endTime);
    UsageStatistics GetTodayStatistics();
    IReadOnlyList<UsageRecord> GetRecords();
    IReadOnlyList<UsageRecord> GetRecords(string sessionId);
    void Clear();
    string SessionId { get; }
    string StartNewSession();
}
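Beyond session statistics, `GetStatistics(startTime, endTime)` supports windowed reporting. A sketch restricting statistics to the last hour (the window boundaries are arbitrary):

```csharp
using TokenMeter;

var tracker = new UsageTracker(CostCalculator.Default());
tracker.Record("gpt-4o", inputTokens: 1000, outputTokens: 500);

// Statistics for the last hour only.
var now = DateTimeOffset.UtcNow;
UsageStatistics lastHour = tracker.GetStatistics(now.AddHours(-1), now);

Console.WriteLine(lastHour.TotalInputTokens);
```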

ModelPricing

public record ModelPricing
{
    public required string ModelId { get; init; }
    public required decimal InputPricePerMillion { get; init; }
    public required decimal OutputPricePerMillion { get; init; }
    public string? Provider { get; init; }
    public string? DisplayName { get; init; }
    public int? ContextWindow { get; init; }

    public decimal CalculateCost(int inputTokens, int outputTokens);
}

Advanced Usage

Multi-Session Tracking

var tracker = new UsageTracker(CostCalculator.Default());

tracker.Record("gpt-4o", 1000, 500);
var session1Stats = tracker.GetSessionStatistics();

tracker.StartNewSession();
tracker.Record("gpt-4o-mini", 2000, 800);
var session2Stats = tracker.GetSessionStatistics();

var todayStats = tracker.GetTodayStatistics();

Token Counting for Chat Messages

var counter = TokenCounter.Default();

var messages = new[]
{
    "You are a helpful assistant.",   // System
    "What is the capital of France?", // User
    "The capital of France is Paris." // Assistant
};

int totalTokens = counter.CountTokens(messages);
// Add ~4 tokens per message for chat formatting overhead
int estimated = totalTokens + (messages.Length * 4);

Cost Comparison Across Providers

var calculator = CostCalculator.Default();
int inputTokens = 10_000;
int outputTokens = 2_000;

foreach (var provider in ModelPricingData.GetProviderNames())
{
    Console.WriteLine($"\n{provider}:");
    foreach (var model in ModelPricingData.GetByProvider(provider).Take(3))
    {
        var cost = model.CalculateCost(inputTokens, outputTokens);
        Console.WriteLine($"  {model.DisplayName}: ${cost:F4}");
    }
}

Requirements

  • .NET 10.0
  • Microsoft.ML.Tokenizers 2.0.0+

License

MIT License — see LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

For pricing data updates, see docs/pricing-update-guide.md.


Made with care by iyulab
