Tickerr: AI Token Counter & Cost Calculator

Paste your text to count tokens and see real-time API costs across 51+ models. Prices updated daily from official documentation.


[Pricing table: Model | Input/1M | Output/1M | Per request]

Token count uses cl100k_base encoding (GPT-4/4o). Claude, Gemini, and other models use similar tokenizers — actual counts may differ by ±10%. Pricing sourced from official documentation, updated daily.

What is a token?

Tokens are the basic units that large language models process. For English text, one token is roughly 4 characters or 0.75 words, so 1,000 words is approximately 1,333 tokens. Punctuation marks and common words typically count as one token each; whitespace is usually merged into the token of the word that follows it. Code and non-English text typically use more tokens per character.
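The rules of thumb above can be sketched as a quick estimator. This is a rough heuristic for English text only, not a real tokenizer, and the function name is illustrative:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token
    and ~0.75 words per token rules of thumb."""
    if not text:
        return 0
    by_chars = len(text) / 4              # ~4 characters per token
    by_words = len(text.split()) / 0.75   # ~0.75 words per token
    # Average the two estimates to smooth out short/long-word bias.
    return round((by_chars + by_words) / 2)
```

For real billing decisions, use an actual tokenizer; this estimator is only good for ballpark sizing before you paste text into a counter.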

How is API cost calculated?

AI API pricing is split into input tokens (your prompt) and output tokens (the model's response), with rates quoted per million (1M) tokens. A prompt with 1,000 input tokens and a 500-token response on GPT-4o ($2.50/1M input, $10/1M output) would cost:

(1,000 / 1,000,000) × $2.50 + (500 / 1,000,000) × $10.00 = $0.0025 + $0.005 = $0.0075
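The arithmetic above generalizes to a small helper (the function name is my own):

```python
def api_cost(input_tokens: int, output_tokens: int,
             input_per_m: float, output_per_m: float) -> float:
    """Dollar cost given per-million-token input and output rates."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# The GPT-4o example from the text: 1,000 in, 500 out
cost = api_cost(1000, 500, 2.50, 10.00)  # ≈ $0.0075
```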

Which tokenizer does this use?

This counter uses cl100k_base, the byte-pair encoding (BPE) tokenizer used by GPT-4, GPT-4o, GPT-3.5-turbo, and text-embedding-ada-002. Claude and Gemini use proprietary tokenizers (Anthropic's and Google's, respectively) that produce similar but not identical counts; expect roughly ±10% variance for non-GPT models.

Why do token counts matter?

Context window limits determine how much text a model can process in one request. GPT-4o supports up to 128K tokens; Claude 3.5 Sonnet supports up to 200K. Knowing your token count lets you estimate whether your prompt fits, how much it will cost, and which model gives the best price-to-performance for your use case.
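A fit check against the context limits quoted above can be sketched like this (the dictionary keys are illustrative labels, not official API model IDs):

```python
# Context window sizes from the text; keys are illustrative labels.
CONTEXT_LIMITS = {"gpt-4o": 128_000, "claude-3.5-sonnet": 200_000}

def fits_context(prompt_tokens: int, model: str, max_output: int = 0) -> bool:
    """True if the prompt, plus any tokens reserved for the response,
    fits within the model's context window."""
    return prompt_tokens + max_output <= CONTEXT_LIMITS[model]

fits_context(120_000, "gpt-4o", max_output=4_000)  # True: 124,000 <= 128,000
fits_context(130_000, "gpt-4o")                    # False: over the 128K window
```

Reserving room for the response matters: a prompt that technically fits can still fail if the model has no tokens left to answer with.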

Cached input pricing

Several providers offer prompt caching — if you send the same prefix repeatedly (e.g. a long system prompt), subsequent calls charge a reduced cached input rate. OpenAI charges 50% of the standard input rate for cached tokens; Anthropic charges around 10%. Check individual model pages for exact cache pricing.
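The effect of a cache discount on the input side can be sketched as follows; the 0.5 default mirrors the OpenAI figure in the text (use ~0.1 for Anthropic), and the function name is my own:

```python
def input_cost_with_cache(cached_tokens: int, fresh_tokens: int,
                          input_per_m: float,
                          cache_multiplier: float = 0.5) -> float:
    """Input cost when a cached prefix is billed at a reduced rate.
    cache_multiplier=0.5 matches the OpenAI example; ~0.1 for Anthropic."""
    billed = cached_tokens * cache_multiplier + fresh_tokens
    return billed / 1_000_000 * input_per_m

# An 8,000-token cached system prompt plus 2,000 fresh tokens at $2.50/1M:
input_cost_with_cache(8_000, 2_000, 2.50)  # $0.015, vs $0.025 uncached
```

For long, stable system prompts that are resent on every call, the cached rate quickly dominates the bill, which is why providers surface it as a separate line item.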