Pricing Transparency
Every cost number has a source. Published pricing models, measurement methods, and provider rates so you can verify our math.
Measurement method
Measured models have actual token counts from API responses. Heuristic models use published rates and context window patterns. We label which is which.
Source transparency
Every pricing rate includes a source URL and the number of data points used. Cache discounts are noted but not applied by default.
Every cost number has a source you can verify
8 pricing models from 4 providers. All 8 have measured token data from API logs; none currently rely on heuristic estimates (which carry lower confidence and are flagged in the data). Every rate has a source URL.
We do not guess at cost. For models where we have API access, we log actual token consumption from the agent runs. For models behind closed APIs or where we do not have integration, we estimate from published rates and context window specifications. Both approaches are documented so you can audit the numbers.
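As a minimal sketch of the arithmetic (illustrative names, not our production pipeline), the cost of a run is just logged token counts multiplied by the published per-million-token rates:

```python
# Minimal sketch of the cost arithmetic. Model names and rates mirror the
# table below; the function and dictionary are illustrative, not the
# production pipeline.
RATES = {
    # model: (input USD/Mtok, output USD/Mtok)
    "gpt-5.2": (1.75, 14.0),
    "claude-opus-4-6": (5.0, 25.0),
}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one agent run from logged token counts."""
    input_rate, output_rate = RATES[model]
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: 450k input + 30k output tokens on gpt-5.2
# = 0.45 * $1.75 + 0.03 * $14.00 = $0.79 + $0.42 ≈ $1.21
print(round(run_cost("gpt-5.2", 450_000, 30_000), 2))  # 1.21
```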
[KEY INSIGHT]
8/8 models have measured pricing
Measured means we have actual token counts from API responses. Heuristic means we estimated from published rates and context window patterns. Both are published so you can verify.
Measured pricing is more reliable. Heuristic estimates carry built-in uncertainty: the actual cost when you run the agent may differ. We flag which findings rely on measured versus heuristic data so you know the confidence level, and any finding that depends on a heuristic cost is caveated.
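For illustration, a published pricing record might carry its method label, sources, and cache-discount note like this (field names and the URL are placeholders, not the actual schema):

```python
# Illustrative shape of a published pricing record. Field names are
# assumptions, not the actual schema; the URL is a placeholder.
pricing_record = {
    "model": "gpt-5.2",
    "provider": "OpenAI",
    "input_usd_per_mtok": 1.75,
    "output_usd_per_mtok": 14.0,
    "method": "measured",  # "measured" (API token logs) or "heuristic" (published rates)
    "sources": [
        {"url": "https://example.com/openai-pricing", "data_points": 3},
    ],
    "cache_discount_noted": True,  # noted, but not applied to reported costs by default
}
```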
Published pricing models and sources
Per-model rates, methods, and source counts. Models with [MEASURED] have verified token data. [HEURISTIC] models use estimated rates.
| Model | Provider | Input $/Mtok | Output $/Mtok | Method | Sources |
|---|---|---|---|---|---|
| claude-opus-4-5 | Anthropic | $5 | $25 | [MEASURED] | 2 sources |
| claude-opus-4-6 | Anthropic | $5 | $25 | [MEASURED] | 2 sources |
| gpt-5.2 | OpenAI | $1.75 | $14 | [MEASURED] | 3 sources |
| gpt-5.2-codex | OpenAI | $1.75 | $14 | [MEASURED] | 2 sources |
| gemini-3-pro-preview | Google | $2 | $12 | [MEASURED] | 2 sources |
| o3 | OpenAI | $2 | $8 | [MEASURED] | 2 sources |
| gpt-5.3-codex | OpenAI | $1.75 | $14 | [MEASURED] | 2 sources |
| cursor-composer-1.5 | Cursor | $1.25 | $10 | [MEASURED] | 2 sources |
The Cursor CLI agent routes requests to underlying provider models, so per-token costs follow the provider's pricing. The Cursor Pro subscription ($20/month) covers included usage; overages are billed at API rates.
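A hedged sketch of how subscription-plus-overage accounting could be attributed, assuming the simple "covered up to the included amount, remainder billed at API rates" reading of the note above (an assumption for illustration, not Cursor's actual billing formula):

```python
# Hedged sketch of subscription-plus-overage attribution. The split logic is
# an assumption for illustration, not Cursor's actual billing formula.
def split_overage(api_rate_cost_usd: float, included_remaining_usd: float) -> tuple[float, float]:
    """Return (covered_by_subscription, billed_as_overage)."""
    covered = min(api_rate_cost_usd, included_remaining_usd)
    return covered, api_rate_cost_usd - covered

# Example: a $3.40 run with $1.00 of included usage remaining.
covered, overage = split_overage(3.40, 1.00)
print(f"covered=${covered:.2f}, overage=${overage:.2f}")  # covered=$1.00, overage=$2.40
```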
Added GPT-5.3-Codex and Cursor Composer 1.5 pricing for Cursor CLI benchmark integration. Re-verified all existing models; no changes needed.
[NEXT STEPS]
See the cost rankings
These rates feed into the economics page, which ranks agents by cost per fix and identifies the Pareto frontier.
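For reference, a cost/pass-rate Pareto frontier can be identified with a simple dominance check. The sketch below uses illustrative field names and example numbers, not the economics page's implementation or results.

```python
# Illustrative Pareto-frontier check over (cost per fix, pass rate) points.
# Field names and example data are assumptions, not benchmark results.
def pareto_frontier(agents: list[dict]) -> list[dict]:
    """An agent is on the frontier if no other agent is both cheaper and at
    least as accurate, with at least one of the two strictly better."""
    frontier = []
    for a in agents:
        dominated = any(
            b["cost_per_fix"] <= a["cost_per_fix"]
            and b["pass_rate"] >= a["pass_rate"]
            and (b["cost_per_fix"] < a["cost_per_fix"] or b["pass_rate"] > a["pass_rate"])
            for b in agents
        )
        if not dominated:
            frontier.append(a)
    return frontier

agents = [
    {"name": "agent-a", "cost_per_fix": 2.64, "pass_rate": 0.62},
    {"name": "agent-b", "cost_per_fix": 9.10, "pass_rate": 0.58},  # dominated by agent-a
    {"name": "agent-c", "cost_per_fix": 15.0, "pass_rate": 0.71},
]
print([a["name"] for a in pareto_frontier(agents)])  # ['agent-a', 'agent-c']
```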
Explore more
- Execution metrics: token usage and tool calls that drive costs
- Evaluation methodology: how we score pass, fail, build, and infra outcomes
FAQ
How are costs calculated?
Cost per pass = total cost of all evaluations for an agent / number of passing evaluations. This penalizes agents with high failure rates since wasted runs still cost money.
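A worked example of the formula, with illustrative numbers rather than any agent's actual totals:

```python
# Worked example of cost per pass (illustrative numbers, not real agent totals).
total_cost_usd = 128 * 0.95  # total cost of all evaluations for one agent
passing = 46                 # evaluations that produced a working fix

cost_per_pass = total_cost_usd / passing
# 121.60 / 46 ≈ $2.64: failed runs still count toward total cost,
# so a low pass rate drives cost per pass up.
print(round(cost_per_pass, 2))  # 2.64
```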
Are costs measured or estimated?
Both. Models with measured token data from API logs are labeled [MEASURED]. Models with estimated rates are labeled [HEURISTIC]. Both are published so you can verify.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,920 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
15 agent-model configurations benchmarked on real vulnerabilities. Compare pass rates and costs.
Benchmark Methodology
How XOR benchmarks AI coding agents on real security vulnerabilities. Reproducible, deterministic, and transparent.
Validation Process
25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.
See which agents produce fixes that work
128 CVEs. 15 agents. 1,920 evaluations. Agents learn from every run.