A modern, browser-based token counter for Large Language Models with real-time visualization and cost estimation.
English | 简体中文
- 🔒 100% Private - All processing happens in your browser, no data leaves your machine
- ⚡ Real-time Analysis - Instant token counting as you type with smart debouncing
- 🎨 Token Visualization - See how your text is tokenized with colorful, interactive tokens
- 💰 Cost Estimation - Calculate API costs based on current pricing
- 🌓 Dark Mode - Beautiful UI that works in both light and dark themes
- 📊 Context Window Tracking - Monitor usage against model limits
- 🎯 Multiple Models - Support for OpenAI, DeepSeek, Qwen, and Llama models
- GPT-4o (128K context)
- GPT-4 Turbo (128K context)
- GPT-3.5 Turbo (16K context)
- DeepSeek V3 / R1 (64K context)
- Qwen 2.5 (32K context)
- Llama 3.1 (128K context)
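The list above could be captured as a simple config map; a sketch (the model ID strings are illustrative, not necessarily the app's internal keys, and "K" is treated loosely as thousands of tokens):

```typescript
// Supported models and their context windows, mirroring the list above.
// Model ID keys are illustrative; token limits read "128K" as 128,000.
const MODEL_CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-4o": 128_000,
  "gpt-4-turbo": 128_000,
  "gpt-3.5-turbo": 16_000,
  "deepseek-v3": 64_000,
  "deepseek-r1": 64_000,
  "qwen-2.5": 32_000,
  "llama-3.1": 128_000,
};
```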
- Node.js 18+
- npm or yarn
```shell
# Clone the repository
git clone https://github.com/li199959/llm-token-counter.git
cd llm-token-counter

# Install dependencies
npm install

# Start development server
npm run dev
```

Visit http://localhost:5173 to see the app in action!

```shell
npm run build
```

The built files will be in the `dist` directory, ready to deploy to any static hosting service.
- Framework: React 18 with TypeScript
- Build Tool: Vite 7
- Styling: Tailwind CSS 4
- UI Components: Radix UI primitives
- Tokenizers:
  - `js-tiktoken` for OpenAI models
  - `@xenova/transformers` for open-source models
- Icons: Lucide React
- Select a Model - Choose from OpenAI or open-source models
- Input Text - Paste or type your content (up to 100K characters)
- View Results - See token count, cost estimation, and context usage
- Visualize Tokens - Click the eye icon to see how text is tokenized
- Copy & Clear - Use toolbar buttons for quick actions
- Automatic counting with 300ms debounce
- Supports both synchronous (OpenAI) and asynchronous (Transformers.js) tokenizers
- Progress indicator during calculation
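A minimal sketch of how debounced counting over both tokenizer styles might be wired up (names and signatures here are illustrative, not the project's actual API):

```typescript
// Hypothetical debounced token counter that accepts either a synchronous
// tokenizer (e.g. js-tiktoken) or an asynchronous one (e.g. Transformers.js).
type Tokenizer = (text: string) => number[] | Promise<number[]>;

function debounceCount(
  tokenize: Tokenizer,
  onResult: (count: number) => void,
  delayMs = 300,
): (text: string) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let callId = 0; // guards against stale async results finishing out of order

  return (text: string) => {
    if (timer !== undefined) clearTimeout(timer); // restart the debounce window
    timer = setTimeout(async () => {
      const id = ++callId;
      const tokens = await tokenize(text); // await works for sync and async tokenizers
      if (id === callId) onResult(tokens.length); // drop superseded results
    }, delayMs);
  };
}
```

The `await` on the tokenizer result is what lets one code path serve both the synchronous and asynchronous backends.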
- Color-coded tokens for easy identification
- Hover effects with token details
- Smooth animations and transitions
- Based on current API pricing (as of 2025/2026)
- Displays cost per request
- Helps budget API usage
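The arithmetic behind the estimate is straightforward; a minimal sketch (the price used below is a placeholder for illustration, not the app's actual pricing table):

```typescript
// Cost estimation sketch: token count times a per-million-token price.
// The $2.50/1M figure below is a placeholder, not real pricing data.
function estimateCost(tokens: number, pricePerMillionUSD: number): number {
  return (tokens / 1_000_000) * pricePerMillionUSD;
}

// Example: 12,000 tokens at a hypothetical $2.50 per 1M tokens (~$0.03)
const cost = estimateCost(12_000, 2.5);
```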
- Visual progress bar showing usage percentage
- Warning when approaching limits (>90%)
- Prevents context overflow issues
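The tracking logic reduces to a percentage plus two thresholds; a sketch assuming the >90% warning cutoff described above (the 128K window in the example reads "K" as thousands of tokens):

```typescript
// Context-window usage sketch with the >90% warning threshold.
interface ContextUsage {
  percent: number;    // usage of the context window, 0-100+
  nearLimit: boolean; // true when usage exceeds 90%
  overflow: boolean;  // true when tokens exceed the window entirely
}

function contextUsage(tokens: number, contextWindow: number): ContextUsage {
  const percent = (tokens / contextWindow) * 100;
  return {
    percent,
    nearLimit: percent > 90,
    overflow: tokens > contextWindow,
  };
}

// Example: 120,000 tokens in a 128K (128,000-token) window -> 93.75%, warning on
const usage = contextUsage(120_000, 128_000);
```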
- Chrome/Edge 90+
- Firefox 88+
- Safari 14+
MIT License - feel free to use this project for personal or commercial purposes.
Contributions are welcome! Please feel free to submit a Pull Request.
- Tiktoken - OpenAI's tokenizer
- Transformers.js - ML models in the browser
- Radix UI - Accessible UI components
- Tailwind CSS - Utility-first CSS framework
For questions or feedback, please open an issue on GitHub.
Made with ❤️ by li199959