# contxt stats
Usage analytics for your project memory — token efficiency, session counts, most-retrieved entries, and stale entries that need review.
## Usage

```shell
$ contxt stats
```
## Example output

```text
📊 Project stats (my-app, last 30 days)

Memory
  decisions  12
  patterns    8
  documents   3
  total      23

Token Efficiency
  avg returned     1 240 tokens
  avg reduction    62%
  est. cost saved  $0.84

Sessions
  total          47
  avg duration   34 min
  auto-captured  31 entries

Most Retrieved
  1. JWT in httpOnly cookies (decision) — 18×
  2. API Error Handler pattern — 14×
  3. Use Prisma for ORM (decision) — 11×

⚠ 4 stale entries (not updated in 30+ days)
  Run: contxt review --stale
```
## Flags
| Flag | Default | Description |
|---|---|---|
| --days <n> | 30 | Number of days of history to include in the report |
| --export json | — | Output full stats as a JSON object (useful for scripting or CI dashboards) |
## How token efficiency is measured
Every time Contxt loads context for an AI agent (via `suggest_context`, `get_decisions`, or similar), it records the number of tokens returned and estimates the baseline cost of sending your entire memory unfiltered. The reduction percentage is the savings from targeted retrieval vs. a full dump.
Cost estimates use the OpenAI GPT-4o pricing model as a benchmark ($5 / 1M input tokens). These are rough estimates — your actual savings depend on your model and usage pattern.
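The arithmetic behind the reduction and cost figures can be sketched as follows. This is an illustration, not Contxt's internal bookkeeping, and the token counts used in the example are made up; only the $5 / 1M-token benchmark price comes from the description above:

```python
# Benchmark price from the docs: $5 per 1M input tokens (GPT-4o).
PRICE_PER_INPUT_TOKEN = 5 / 1_000_000

def token_efficiency(returned_tokens: int, full_dump_tokens: int) -> tuple[float, float]:
    """Return (reduction %, estimated dollars saved) for one retrieval.

    `returned_tokens` is what targeted retrieval actually sent;
    `full_dump_tokens` is the baseline of sending all memory unfiltered.
    """
    saved_tokens = full_dump_tokens - returned_tokens
    reduction = 100 * saved_tokens / full_dump_tokens
    return reduction, saved_tokens * PRICE_PER_INPUT_TOKEN

# Made-up numbers: 1 240 tokens returned vs. a 3 260-token full dump.
reduction, saved = token_efficiency(returned_tokens=1_240, full_dump_tokens=3_260)
print(f"reduction {reduction:.0f}%, saved ${saved:.4f}")
```

Averaged over a month of retrievals, per-call savings in this range add up to the "est. cost saved" line in the report.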
```shell
# Export stats for the last 7 days
$ contxt stats --days 7 --export json
```
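The JSON export is what makes scripting or CI checks practical. A minimal sketch of such a check follows; the `stale_entries` key and the sample payload are hypothetical stand-ins, so check the fields your actual export produces before relying on them:

```python
import json

def stale_within_limit(stats: dict, threshold: int = 5) -> bool:
    """True if the export's stale-entry count is at or under a CI threshold.

    `stale_entries` is an assumed field name, not a documented schema.
    """
    return stats.get("stale_entries", 0) <= threshold

# Made-up payload standing in for `contxt stats --export json` output:
sample = json.loads('{"stale_entries": 4, "sessions": {"total": 47}}')
print(stale_within_limit(sample))
```

A CI job could run the export, pipe it through a check like this, and fail the build with a reminder to run `contxt review --stale` when too many entries have gone unreviewed.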
## MCP tool
The same data is available to AI agents via the `contxt_stats` MCP tool. When you ask your AI "what's the project health?", it automatically calls `contxt_stats` and summarises the results for you.
Note: token efficiency metrics are stored locally in your project's SQLite database and are not synced to the cloud. The web dashboard at `/dashboard/stats` shows entry counts from Supabase; run `contxt stats` locally for full token analytics.