⚡ THE TOKENDOME

How it works

The math behind every number on the leaderboard.

Three rules: no manual entry, prompts never leave your machine, counts come from the provider, not from us guessing.


Your machine

Your app makes a normal call to OpenAI / Anthropic / Google / Ollama, either through the bundled SDK shim or pointed at the local proxy at localhost:4000.

→ forwarded verbatim, with your API key
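
Pointing an existing client at the proxy is a one-line change. A minimal sketch with the openai npm package; the /v1 path and the model name are illustrative, not a spec of the proxy's routes:

import OpenAI from "openai";

// Same client you already use; only the base URL changes.
// Assumption: the proxy mirrors the provider's usual /v1 routes.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,  // your key, forwarded verbatim upstream
  baseURL: "http://localhost:4000/v1", // the local proxy
});

const res = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "ping" }],
});
console.log(res.usage); // the provider's own count, untouched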


Provider

The real provider answers normally — you get back the same response, the same latency, the same streaming chunks, the same errors.

→ response includes usage field


Tokendome

The shim/proxy reads just the counts from usage.input_tokens / output_tokens, signs them with your agent token, and posts them to the cloud.

→ counts only · HMAC-signed
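
In code the hand-off is small. A sketch of what the shim derives from an Anthropic-style response; the type and helper names are illustrative, not the shipped code:

type CountEvent = {
  ts: number;
  provider: string;
  model: string;
  is_local: boolean;
  input_tokens: number;
  output_tokens: number;
};

// Only numbers are copied off the response; prompts, completions and headers are never read here.
function toEvent(model: string, usage: { input_tokens: number; output_tokens: number }): CountEvent {
  return {
    ts: Date.now(),
    provider: "anthropic",
    model,
    is_local: false,
    input_tokens: usage.input_tokens,   // the provider's number, as-is
    output_tokens: usage.output_tokens, // never re-tokenized
  };
}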


Accurate

We don't re-tokenize. We read the provider's own usage field — exactly what they bill you on. For Ollama and other local models, we read prompt_eval_count + eval_count from the response.
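
For a local Ollama call the field names differ but the idea is identical. A sketch, assuming a non-streaming /api/chat or /api/generate response:

// Ollama reports counts in the final response object:
// prompt_eval_count = prompt tokens, eval_count = generated tokens.
function ollamaCounts(res: { prompt_eval_count?: number; eval_count?: number }) {
  return {
    input_tokens: res.prompt_eval_count ?? 0,
    output_tokens: res.eval_count ?? 0,
    is_local: true, // local models are flagged so the board can split them out
  };
}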


Safe

Your API keys stay local — they pass through the proxy directly to the upstream and never land on the Tokendome server. Prompts and completions are never sent to us. Only counts: {ts, model, input_tokens, output_tokens}.


Live

The agent batches events every 3 seconds and posts them over HTTPS. The leaderboard polls the API every 3 seconds. End-to-end you'll see your call show up within ~6 seconds of it completing.
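
The flush loop itself is unremarkable. An illustrative sketch of the agent side; the URL and env-var names are placeholders, and it assumes the dot in ts.sha256(body) means plain concatenation:

import { createHash, createHmac } from "node:crypto";

const INGEST_URL = "https://tokendome.example/api/ingest"; // placeholder URL
const AGENT_TOKEN = process.env.TA_AGENT_TOKEN ?? "";      // placeholder env var
const USER_ID = process.env.TA_USER_ID ?? "";              // placeholder env var

const queue: Array<Record<string, unknown>> = []; // events shaped as in the wire format below
export function record(ev: Record<string, unknown>) {
  queue.push(ev); // called as each request completes
}

setInterval(async () => {
  if (queue.length === 0) return;
  const events = queue.splice(0, queue.length); // drain the queue
  const body = JSON.stringify({ events });
  const ts = Date.now().toString();
  // sig = HMAC-SHA256(agent_token, ts + "." + sha256(body))
  const bodyHash = createHash("sha256").update(body).digest("hex");
  const sig = createHmac("sha256", AGENT_TOKEN).update(`${ts}.${bodyHash}`).digest("hex");
  await fetch(INGEST_URL, {
    method: "POST",
    headers: { "content-type": "application/json", "x-ta-user": USER_ID, "x-ta-ts": ts, "x-ta-sig": sig },
    body,
  });
}, 3_000);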

Anti-cheat

Why you can trust the numbers

Provider-anchored

The agent and SDK never re-tokenize — they read the provider's own usage field. The Anthropic and OpenAI Admin-API backfill paths pull straight from your billing record, so you can't fabricate those rows.

HMAC-signed, replay-rejected

Every event is signed with HMAC-SHA256(agent_token, ts.sha256(body)). Stale (> 60s drift), unsigned, and replayed payloads are rejected — the server stores (user, ts, body_hash) and only accepts each tuple once. You can't forge events for someone else's account.
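
The server side of that check, as a sketch: an in-memory set stands in for the stored tuples, and the dot in ts.sha256(body) is again read as plain concatenation.

import { createHash, createHmac, timingSafeEqual } from "node:crypto";

const seen = new Set<string>(); // stands in for the stored (user, ts, body_hash) tuples

function accept(user: string, ts: string, sig: string, body: string, agentToken: string): boolean {
  // More than 60 s of clock drift: reject.
  if (Math.abs(Date.now() - Number(ts)) > 60_000) return false;

  // Recompute the signature and compare in constant time.
  const bodyHash = createHash("sha256").update(body).digest("hex");
  const expected = createHmac("sha256", agentToken).update(`${ts}.${bodyHash}`).digest();
  const given = Buffer.from(sig, "hex");
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) return false;

  // Each (user, ts, body_hash) tuple is accepted exactly once: replays bounce.
  const key = `${user}:${ts}:${bodyHash}`;
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}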

Open source

The agent and SDK shims are open source (Apache 2.0). Read the file that builds the event payload — it's about 30 lines. If you spot something we're sending that shouldn't be sent, file an issue and we'll cut a release that day.

Honest about the gap

Ingest rejects events > 2M tokens, batches > 500 events, and bodies > 512 KB. But a user with their own valid agent token can still hand-craft /api/ingest calls with whatever numbers they want — the leaderboard is honor-system at the user level. The Admin-API import is the only path anchored to data you can't fabricate. Planned: remote-attestation challenge.
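
The limit checks themselves are simple guards. A sketch, assuming the 2M cap applies to an event's combined input and output:

const MAX_EVENT_TOKENS = 2_000_000; // assumption: cap covers input + output per event
const MAX_BATCH_EVENTS = 500;
const MAX_BODY_BYTES = 512 * 1024;

function checkBatch(rawBody: string, events: { input_tokens: number; output_tokens: number }[]): string | null {
  if (Buffer.byteLength(rawBody, "utf8") > MAX_BODY_BYTES) return "body too large";
  if (events.length > MAX_BATCH_EVENTS) return "too many events";
  for (const e of events) {
    if (e.input_tokens + e.output_tokens > MAX_EVENT_TOKENS) return "event exceeds token cap";
  }
  return null; // accepted
}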

The wire format

This is exactly what we send

No prompts. No completions. No system instructions. No tool calls. No file paths. Not even the upstream URL — that stays local.

POST /api/ingest
x-ta-user: 42
x-ta-ts:   1776207525088
x-ta-sig:  3a4f1c…  // HMAC-SHA256(agent_token, ts.sha256(body))

{
  "events": [
    {
      "ts":            1776207524217,
      "provider":      "anthropic",
      "model":         "claude-haiku-4-5",
      "is_local":      false,
      "input_tokens":  1234,
      "output_tokens": 567,
      "cache_read_tokens":  0,
      "cache_write_tokens": 0
    }
  ]
}
enter the dome →