Tickerr / Methodology
How Tickerr classifies AI tool status
Five independent signals feed into a single verdict, and the most severe signal wins. No single noisy check can flip a tool's status - every signal requires sustained or corroborated evidence before the verdict changes color.
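The "most severe signal wins" rule can be sketched as a simple maximum over per-signal severities. This is an illustrative sketch with assumed names (`Status`, `verdict`), not Tickerr's actual code:

```python
# Severity aggregation sketch: the verdict is the max severity across signals.
from enum import IntEnum

class Status(IntEnum):
    OPERATIONAL = 0
    DEGRADED = 1
    OUTAGE = 2

def verdict(signals: dict[str, Status]) -> Status:
    # With no active signals, the tool is considered operational.
    return max(signals.values(), default=Status.OPERATIONAL)

# One degraded signal is enough to turn the whole verdict degraded,
# even if every other signal is green.
assert verdict({"http": Status.OPERATIONAL, "inference": Status.DEGRADED}) == Status.DEGRADED
```

Because severities are ordered, adding a sixth signal later would not change this aggregation logic.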
The five signals
HTTP endpoint monitoring
Every tool is probed over HTTPS every 5 minutes. We record whether the endpoint returns a 2xx response and measure round-trip time in milliseconds.
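A single probe of this kind is straightforward; the sketch below (assumed shape, not Tickerr's implementation, which runs on Vercel rather than as a Python script) shows the "2xx plus round-trip time" measurement:

```python
# Minimal HTTP health-check sketch: returns (is_2xx, round_trip_ms).
import time
import urllib.request

def check_endpoint(url: str, timeout: float = 10.0) -> tuple[bool, float]:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        # Covers DNS failures, timeouts, TLS errors, and 4xx/5xx responses
        # (urllib raises HTTPError for non-2xx/3xx statuses).
        ok = False
    rtt_ms = (time.monotonic() - start) * 1000
    return ok, rtt_ms
```

A scheduler would call this every 5 minutes per tool and feed the results into the recovery counters described below.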
Authenticated API inference
For AI providers with APIs (Claude, Gemini, ChatGPT, Groq, Mistral, xAI, Cerebras, Cohere, OpenRouter), we make real streaming API calls every 5–30 minutes and measure time-to-first-token (TTFT). This catches model-layer failures that appear even when the HTTP endpoint returns 200 OK.
Agent-reported signals
Agents using the Tickerr MCP or REST API can report failures. Multiple reports from distinct agents are required before this signal affects status.
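The corroboration rule - counting distinct agents rather than raw reports - can be sketched as follows (the threshold constant and report shape are assumptions for illustration):

```python
# Distinct-agent corroboration sketch: repeated reports from one agent
# do not activate the signal; reports from different agents do.
MIN_DISTINCT_AGENTS = 2  # assumption: at least two different reporters

def agent_signal_active(reports: list[dict]) -> bool:
    """Each report is a dict like {"agent_id": ..., "tool": ...}."""
    return len({r["agent_id"] for r in reports}) >= MIN_DISTINCT_AGENTS

assert not agent_signal_active([{"agent_id": "a"}, {"agent_id": "a"}])
assert agent_signal_active([{"agent_id": "a"}, {"agent_id": "b"}])
```

Deduplicating on agent identity is what makes this signal resistant to a single misbehaving client.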
User "Report Issue" widget
Users on the status page can manually report what they are experiencing. Reports are deduplicated by fingerprint - each user can submit at most one report per tool per 2-hour window.
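The per-fingerprint, per-tool, 2-hour dedup window can be sketched like this (in-memory map and function names are illustrative; a production version would use persistent storage):

```python
# Report dedup sketch: one accepted report per (fingerprint, tool)
# per 2-hour window; later duplicates are rejected.
from typing import Optional

WINDOW_SECONDS = 2 * 60 * 60
_last_report: dict[tuple[str, str], float] = {}

def accept_report(fingerprint: str, tool: str, now: float) -> bool:
    key = (fingerprint, tool)
    last = _last_report.get(key)
    if last is not None and now - last < WINDOW_SECONDS:
        return False  # duplicate within the 2-hour window
    _last_report[key] = now
    return True
```

Keying on the (fingerprint, tool) pair means one user can still report problems with several different tools in the same window.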
Official status pages
We ingest the official Atlassian Statuspage feeds for providers that publish them (Anthropic, OpenAI, and others). These are displayed alongside independent monitoring and can trigger degraded status.
Classification thresholds
The most severe active signal determines the verdict. These thresholds are implemented directly in Tickerr's monitoring code.
Recovery - when does it go back to green?
Recovery requires sustained good results - a single passing check does not clear an incident.
- Outage (HTTP): 3 consecutive HTTP checks must succeed before the synthetic incident closes (~15 min at the 5-minute cadence)
- Outage (API inference): 3 consecutive API inference checks must succeed for the model before the incident resolves
- Degraded (slow TTFT): TTFT must return below 1.5× the rolling p50 on 2 consecutive checks before the degraded incident closes
- Official incident: clears when the provider marks it resolved on their status page
- Agent signal: clears automatically when reports drop to zero for 10 minutes
- User reports: clears automatically after 2 hours from last report
- Incidents shorter than 20 minutes are recorded internally but do not keep a public page (the slug is stripped on resolution, so they do not surface in search results)
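The "3 consecutive successes" rules above share one pattern: a streak counter that resets on any failure. A minimal sketch, with assumed names:

```python
# Recovery-gate sketch: an incident may close only after N passing
# checks in a row; any failure resets the streak to zero.
class RecoveryGate:
    def __init__(self, required: int = 3):
        self.required = required
        self.streak = 0

    def record(self, check_passed: bool) -> bool:
        """Feed one check result; return True once the incident may close."""
        self.streak = self.streak + 1 if check_passed else 0
        return self.streak >= self.required
```

With a 5-minute check cadence and `required=3`, the earliest an HTTP outage can clear is about 15 minutes after the first passing check, which matches the timings listed above.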
Incident grouping
Multiple events for the same model within a 4-hour window are treated as one extended incident, not separate pages. A new incident page is created only if more than 4 hours have passed since the previous incident resolved. Degraded latency events that follow a major outage within 4 hours are suppressed - they are treated as recovery noise from the same event.
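The 4-hour grouping decision reduces to a single timestamp comparison. A sketch under assumed names (timestamps in seconds since epoch):

```python
# Incident-grouping sketch: a new event extends the previous incident
# unless more than 4 hours have passed since that incident resolved.
from typing import Optional

GROUP_WINDOW_SECONDS = 4 * 60 * 60

def extends_previous_incident(prev_resolved_at: Optional[float],
                              new_event_at: float) -> bool:
    if prev_resolved_at is None:
        return False  # no prior incident: open a fresh page
    return new_event_at - prev_resolved_at <= GROUP_WINDOW_SECONDS
```

Measuring the window from the previous incident's *resolution* (not its start) is what lets a long, flapping outage stay on one page instead of fragmenting into many.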
Why inference monitoring catches more than HTTP
HTTP monitoring answers: is the endpoint reachable? API inference monitoring answers: is the API actually working? Around 35% of real AI API failures appear only in the inference layer: the HTTP endpoint returns 200 OK while the model is overloaded, returning errors, or responding with extreme latency. Tickerr runs both independently and shows whichever signal is most severe.
Independence
Tickerr is not affiliated with any monitored provider. All status data is collected via independent monitoring from Vercel infrastructure. Official status page data is fetched from each provider's public API and displayed as a separate signal - it does not suppress or override Tickerr's independent measurements.