TokenCentric 1.0: Take Control of Your AI Context Files
5 different AI coding assistants. 5 different context file formats. Dozens of projects. Zero visibility into how many tokens those files actually cost you.
That was the state of things before TokenCentric. Now it isn't.
The Context File Problem
AI coding assistants have become essential to modern development workflows. Claude Code reads CLAUDE.md. Cursor reads .cursorrules. GitHub Copilot reads .github/copilot-instructions.md. Windsurf reads .windsurfrules. ChatGPT Codex reads AGENTS.md.
If you use more than one tool, you're maintaining multiple context files per project. If you work on more than a few projects, you're maintaining dozens. And you're doing it blind -- you have no idea how many tokens each file consumes, whether your instructions are bloated, or how the inheritance chain across directories is stacking up.
TokenCentric gives you that visibility, plus a proper editor to do something about it.
What You Get
Real-Time Token Counting with Official Tokenizers
This is the core feature and the one that matters most. TokenCentric counts tokens using the same tokenizers the AI providers use internally. Not rough estimates. Not "divide by 4" approximations. The actual tokenizers.
- Anthropic's official tokenizer (@anthropic-ai/tokenizer) for Claude token counts
- OpenAI's tiktoken for GPT-4 and compatible models
A color-coded indicator tells you at a glance where each file stands:
- Green: Under 5,000 tokens. Lean and efficient.
- Yellow: 5,000 to 20,000 tokens. Worth reviewing for unnecessary content.
- Red: Over 20,000 tokens. Actively eating into your context window.
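The thresholds above amount to a simple classification. As a sketch (the names `TokenStatus` and `classifyTokenCount` are illustrative, not TokenCentric's actual API):

```typescript
// Illustrative sketch of the color-coded thresholds described above.
// Names are hypothetical, not TokenCentric's internal code.
type TokenStatus = "green" | "yellow" | "red";

function classifyTokenCount(tokens: number): TokenStatus {
  if (tokens < 5_000) return "green";    // lean and efficient
  if (tokens <= 20_000) return "yellow"; // worth reviewing
  return "red";                          // eating into the context window
}
```

A 3,200-token CLAUDE.md would classify as "green"; a 20,001-token one as "red".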
For files with inheritance chains (global + workspace + project), TokenCentric shows the cumulative token cost at each level so you can see exactly where the weight is coming from.
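Conceptually, the cumulative display is a running sum down the chain. A minimal sketch, assuming each level contributes its own file's count on top of everything inherited above it (the type and function names are hypothetical):

```typescript
// Hypothetical sketch of the hierarchical cost display: each level's
// cumulative figure is its own token count plus all levels above it.
interface ContextLevel {
  name: string;   // e.g. "global", "workspace", "project"
  tokens: number; // token count of that level's file
}

function cumulativeCosts(
  chain: ContextLevel[]
): { name: string; cumulative: number }[] {
  let running = 0;
  return chain.map((level) => {
    running += level.tokens;
    return { name: level.name, cumulative: running };
  });
}
```

A 1,200-token global file, an 800-token workspace file, and a 2,500-token project file would show cumulative costs of 1,200, 2,000, and 4,500 tokens respectively.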
VS Code-Grade Editing
The editor is Monaco -- the same engine that powers VS Code. You get the keyboard shortcuts you already know, multi-cursor editing, find and replace, syntax highlighting, code folding, and a minimap. No learning curve.
Split view (horizontal or vertical) lets you compare files side by side. Multi-tab editing lets you keep several files open at once. Drag and drop to rearrange.
7 Built-In Templates
Starting a new project's context file from scratch is tedious. TokenCentric ships with templates for common project types, complete with variable substitution. Define your project name, framework, and language once, and the template fills in a solid starting point.
Variables like {{PROJECT_NAME}}, {{FRAMEWORK}}, and {{LANGUAGE}} get replaced when you generate the file. Customize the templates or create your own.
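The substitution itself is straightforward. A minimal sketch, assuming templates use the double-brace placeholders shown above (this is not TokenCentric's actual implementation):

```typescript
// Sketch of {{VARIABLE}} substitution. Unknown placeholders are left
// untouched rather than replaced with an empty string.
function renderTemplate(
  template: string,
  vars: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match
  );
}
```

For example, `renderTemplate("# {{PROJECT_NAME}} ({{LANGUAGE}})", { PROJECT_NAME: "Acme", LANGUAGE: "TypeScript" })` yields `"# Acme (TypeScript)"`.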
AI-Powered Context Review
Here's where it gets meta: TokenCentric can connect to Anthropic Claude, OpenAI, or a local Ollama instance to review your context files and suggest improvements. The AI can identify redundant instructions, unclear phrasing, and sections that are costing tokens without adding value.
Use AI to optimize the instructions you give to AI. It works better than you'd expect.
Automatic File Discovery
Point TokenCentric at a directory and it finds every context file across all your projects. CLAUDE.md, .cursorrules, copilot-instructions.md, .windsurfrules, AGENTS.md -- all surfaced in a single sidebar. No more hunting through file trees.
Who It's For
Developers using multiple AI tools. If you switch between Claude Code and Cursor, or use Copilot alongside either, you're maintaining redundant context files. TokenCentric lets you manage them all in one place.
Teams standardizing AI conventions. If your team has agreed on coding standards for AI-assisted development, TokenCentric's templates make it easy to enforce consistency across repositories.
Indie hackers and consultants. If you maintain many projects simultaneously, the token counting and file discovery features save real time. You can audit every project's context cost in minutes instead of opening each file individually.
Anyone curious about token economics. Even if you only work on one project, understanding how many tokens your context file consumes helps you write more effective AI instructions.
Built With
TokenCentric is a desktop app built with Electron 28, React 18, TypeScript, Vite, and Tailwind CSS. Monaco Editor provides the editing experience. Zustand handles state management. The entire codebase is 10,609 lines of TypeScript across 35+ components.
Three releases shipped in two months:
- v0.1.0 -- Editor and token counting
- v0.2.0 -- Templates, split view, multi-tab editing
- v1.0.0 -- AI provider integration, hierarchical context cost display, polish
Free and Open Source
TokenCentric is MIT licensed. Free to download, free to use, free to modify. No accounts, no subscriptions, no telemetry.
This is a tool I built because I needed it. Charging for it would mean fewer people use it, and fewer people using it means less feedback and fewer contributions. The open-source model serves this kind of developer tool best.
Get Started
Download TokenCentric for macOS at tokencentric.app. Windows and Linux builds are on the roadmap.
The source code is on GitHub if you want to build from source, contribute, or just see how it works under the hood.
If you manage AI context files across projects -- and if you use AI coding tools, you do -- TokenCentric gives you the visibility and control that the command line doesn't.
Have feedback or feature requests? Open an issue on GitHub or reach out on Twitter/X.