Local token + cost monitoring dashboard for AI coding agents — Claude Code, Gemini CLI, Codex, Cursor, GitHub Copilot, OpenCode, and more.
TokenTelemetry (one word) — free, open-source, 100% local.
TokenTelemetry is a free, open-source, 100% local observability dashboard that tracks token usage, LLM costs, tool calls, session traces, and reasoning steps across all your AI coding agents — in one unified place. No signup. No cloud. No telemetry.
🌐 Website & Docs: https://tokentelemetry.com
🖥️ macOS/Linux: curl -fsSL https://raw.githubusercontent.com/VasiHemanth/tokentelemetry/main/install.sh | bash
🧰 Windows: irm https://raw.githubusercontent.com/VasiHemanth/tokentelemetry/main/install.ps1 | iex
🐙 GitHub: github.com/VasiHemanth/tokentelemetry
AI coding agents like Claude Code, Gemini CLI, and Codex are powerful — but they burn through tokens fast. How many tokens did that refactor cost? Which agent is most efficient? What did it actually do?
TokenTelemetry answers all of that — locally, instantly, for free.
| Problem | TokenTelemetry Solution |
|---|---|
| "How much did that Claude Code session cost?" | Real-time cost tracking per session/project |
| "What tools did my agent call?" | Full waterfall trace of every tool call |
| "Which model is most token-efficient for my codebase?" | Per-model analytics & comparisons |
| "Did my agent follow its plan?" | Plan-mode capture & display |
| "I use 3 different agents — unified view?" | Multi-agent dashboard in one place |
TokenTelemetry reads session logs from these agents automatically:
| Agent | Status |
|---|---|
| Claude Code (Anthropic) | ✅ Fully supported |
| Gemini CLI (Google) | ✅ Fully supported |
| OpenAI Codex CLI | ✅ Fully supported |
| Cursor | ✅ Fully supported |
| GitHub Copilot | ✅ Fully supported |
| OpenCode | ✅ Fully supported |
| Qwen | ✅ Fully supported |
| Vibe | ✅ Fully supported |
| Antigravity | ✅ Fully supported |
More agents added regularly. Request support for your agent →
- 📊 Token Usage Dashboard — real-time tokens in/out per agent, model, and project
- 💰 Cost Tracking — see exact LLM API costs per session and cumulative over time
- 🔍 Session Traces — waterfall view of prompts, reasoning chains, tool calls, and responses
- 🛠️ Tool Call Analytics — which tools your agents call most, success/failure rates
- 📁 Per-Project Insights — heatmap, activity timeline, agent leaderboard per codebase
- 🧠 Plan Capture — view plan-mode outputs from Claude Code and other agents
- 📈 Model Analytics — compare GPT-5.4 vs Claude 4.6 Sonnet vs Gemini 3.1 Flash efficiency
- 🔒 100% Local — all data stays on your machine, zero cloud dependency
- ⚡ Zero Config — auto-detects agents from their default log locations
- 🆓 Free & Open Source — MIT licensed, forever free
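Zero-config detection boils down to checking whether each agent's default log directory exists. A minimal sketch in Python — only `~/.claude` is documented above; the other paths here are illustrative guesses, not the actual locations TokenTelemetry scans:

```python
from pathlib import Path

# Hypothetical default log locations. Only ~/.claude is confirmed by this
# README; the others are placeholders for illustration.
AGENT_LOG_DIRS = {
    "claude-code": "~/.claude",
    "gemini-cli": "~/.gemini",
    "codex": "~/.codex",
}

def detect_agents() -> list[str]:
    """Return the names of agents whose log directories exist on this machine."""
    return [
        name
        for name, path in AGENT_LOG_DIRS.items()
        if Path(path).expanduser().is_dir()
    ]

print(detect_agents())
```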
macOS / Linux:
curl -fsSL https://tokentelemetry.com/install.sh | bash
Windows (PowerShell):
irm https://tokentelemetry.com/install.ps1 | iex
From source:
git clone https://github.com/VasiHemanth/tokentelemetry.git
cd tokentelemetry
./start.sh        # macOS/Linux
# start.bat       # Windows
# node bin/cli.js # cross-platform
Then open: http://localhost:3000
Connected agents, recent activity feed, model distribution pie chart, token burn rate.
Per-project heatmap, tool usage breakdown, agent leaderboard, session timeline.
Full waterfall: system prompt → reasoning → tool calls → responses → final output. See exactly what your agent was thinking.
Cumulative token & cost graphs per agent/model over time. Compare efficiency across models.
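Under the hood, cost tracking is token counts multiplied by per-model prices. A minimal sketch — the prices below are made up for illustration, not real provider rates:

```python
# Illustrative per-million-token prices in USD. NOT real rates; actual
# pricing varies by model and provider.
PRICES = {
    "example-model": {"input": 3.00, "output": 15.00},
}

def session_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Estimate a session's API cost in USD from its token counts."""
    p = PRICES[model]
    return (tokens_in * p["input"] + tokens_out * p["output"]) / 1_000_000

# e.g. 120k input + 8k output tokens at the illustrative rates
print(round(session_cost("example-model", 120_000, 8_000), 2))  # → 0.48
```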
Captured plan-mode outputs from Claude Code's /plan command and its equivalents in other agents.
- Node.js 18+
- Python 3.9+
- git
- Any supported AI coding agent already installed (Claude Code, Gemini CLI, Codex, etc.)
TokenTelemetry stores lightweight state in ~/.tokentelemetry/:
~/.tokentelemetry/
aliases.json # Rename/merge project folder paths
hidden.json # Hide specific projects from dashboard
VERSION # Current version
All hand-editable JSON — no database, no config GUI needed.
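For example, aliases.json might map a raw project path to a friendlier display name — the exact schema here is illustrative, so check the file on your machine before editing:

```json
{
  "/Users/you/code/very-long-project-path": "my-project"
}
```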
tokentelemetry/
backend/ FastAPI app (Python) — reads agent logs, serves REST API
frontend/ Next.js 16 dashboard — React UI
bin/cli.js Cross-platform launcher
install.sh One-line installer (macOS/Linux)
install.ps1 One-line installer (Windows)
Q: Does TokenTelemetry send any data to the cloud?
A: No. 100% local. It reads log files from your filesystem and serves a local web dashboard. Nothing leaves your machine.
Q: How does it track Claude Code token usage?
A: Claude Code writes JSONL session logs to ~/.claude/. TokenTelemetry watches those files and parses token counts, tool calls, and reasoning in real time.
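As a rough sketch of what parsing such a JSONL log involves — field names like `usage.input_tokens` are assumptions for illustration, not the actual Claude Code log schema:

```python
import json
from pathlib import Path

def total_tokens(log_path: str) -> dict:
    """Sum token counts across all events in a JSONL session log.

    Assumes each line is a JSON object with an optional "usage" object
    containing "input_tokens" / "output_tokens" (hypothetical fields).
    """
    totals = {"input": 0, "output": 0}
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        event = json.loads(line)
        usage = event.get("usage", {})
        totals["input"] += usage.get("input_tokens", 0)
        totals["output"] += usage.get("output_tokens", 0)
    return totals
```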
Q: Does it work with multiple agents at the same time?
A: Yes. It detects all supported agents and shows them in a unified dashboard. You can filter by agent, model, or project.
Q: Is there a cost to use TokenTelemetry?
A: No. It is free and open-source under the MIT license.
Q: How is TokenTelemetry different from Langfuse, LangSmith, or Helicone?
A: Those tools require you to instrument your code, create an account, and send data to their cloud. TokenTelemetry is 100% local, zero-config, and works by reading the log files your agents already write — no SDK, no API key, no cloud.
Q: Can I monitor Gemini CLI token usage?
A: Yes. TokenTelemetry supports Gemini CLI and shows token counts, costs, and session traces for Google's Gemini models (Gemini 2.0 Flash, Gemini 1.5 Pro, etc.).
Q: Does it support Cursor or GitHub Copilot?
A: Yes. Cursor and GitHub Copilot sessions are detected and tracked.
| Feature | TokenTelemetry | Langfuse | LangSmith | Helicone |
|---|---|---|---|---|
| 100% Local | ✅ | ❌ | ❌ | ❌ |
| Zero config | ✅ | ❌ | ❌ | ❌ |
| No signup | ✅ | ❌ | ❌ | ❌ |
| Claude Code support | ✅ | Manual | Manual | Manual |
| Gemini CLI support | ✅ | Manual | Manual | ❌ |
| Codex CLI support | ✅ | Manual | Manual | Manual |
| Free | ✅ | Freemium | Freemium | Freemium |
| Open Source | ✅ | ✅ | ❌ | ❌ |
- Individual developers who want to understand how much their AI coding sessions cost
- Teams comparing Claude Code vs Gemini CLI vs Codex efficiency
- Researchers studying LLM agent behavior, tool call patterns, and reasoning chains
- Engineering managers tracking AI tooling ROI across projects
- Prompt engineers optimizing prompts by seeing exact token breakdowns
Port conflicts: Check/kill processes on ports 3000 and 8000.
Python not found: Install Python 3.9+ and ensure it's in your PATH.
No sessions showing: Run an agent (Claude Code, Gemini CLI, etc.) first — TokenTelemetry needs existing log files.
Windows issues: Run PowerShell as Administrator for the installer.
We welcome contributions! See CONTRIBUTING.md for guidelines.
git clone https://github.com/VasiHemanth/tokentelemetry.git
cd tokentelemetry
# Make your changes
git checkout -b feat/your-feature
git commit -m "feat: your feature"
git push origin feat/your-feature
# Open a Pull Request
Want to add support for a new agent? Open an issue with the agent name and log format.
claude-code token usage · gemini cli cost tracking · codex token monitor · AI agent observability · LLM token dashboard · coding agent analytics · local LLM monitoring · token cost calculator · AI coding tool metrics · claude code session viewer · openai codex usage · cursor ide analytics · github copilot usage tracker · LLM observability tool · AI agent telemetry · token usage dashboard open source
MIT © 2024 Hemanth Vasi
Hemanth Vasi
🌐 tokentelemetry.com
🐙 github.com/VasiHemanth
💼 LinkedIn
If you find TokenTelemetry useful, please ⭐ star this repo — it helps others discover it!