[PRICING]

Pricing Transparency

Every cost number has a source. Published pricing models, measurement methods, and provider rates.

Measurement method

Measured models have actual token counts from API responses. Heuristic models use published rates and context window patterns. We label which is which.

Source transparency

Every pricing rate includes a source URL and the number of data points used. Cache discounts are noted but not applied by default.
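As a concrete illustration, a pricing record carrying this metadata might look like the sketch below. The field names are hypothetical, not XOR's published schema:

```python
from dataclasses import dataclass, field

@dataclass
class PricingRecord:
    """One model's pricing entry. Field names are illustrative only."""
    model: str
    provider: str
    input_per_mtok: float        # USD per million input tokens
    output_per_mtok: float       # USD per million output tokens
    method: str                  # "measured" (token counts from API logs) or
                                 # "heuristic" (estimated from published rates)
    source_urls: list[str] = field(default_factory=list)
    cache_discount: float | None = None  # noted, but not applied by default

    @property
    def source_count(self) -> int:
        # The "N sources" figure shown in the pricing table.
        return len(self.source_urls)
```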

8 pricing models · 8 measured · 0 heuristic · 4 providers

Every cost number has a source you can verify

8 pricing models from 4 providers. All 8 have measured token data from API logs; none rely on heuristic estimates (which would carry lower confidence and be flagged in the data). Every rate has a source URL.

[KEY INSIGHT]

8/8 models have measured pricing

Measured means we have actual token counts from API responses. Heuristic means we estimated from published rates and context window patterns. Both are published so you can verify.

Published pricing models and sources

Per-model rates, methods, and source counts. Models with [MEASURED] have verified token data. [HEURISTIC] models use estimated rates.

Model                | Provider  | Input $/Mtok | Output $/Mtok | Method     | Sources
claude-opus-4-5      | Anthropic | $5.00        | $25.00        | [MEASURED] | 2
claude-opus-4-6      | Anthropic | $5.00        | $25.00        | [MEASURED] | 2
gpt-5.2              | OpenAI    | $1.75        | $14.00        | [MEASURED] | 3
gpt-5.2-codex        | OpenAI    | $1.75        | $14.00        | [MEASURED] | 2
gemini-3-pro-preview | Google    | $2.00        | $12.00        | [MEASURED] | 2
o3                   | OpenAI    | $2.00        | $8.00         | [MEASURED] | 2
gpt-5.3-codex        | OpenAI    | $1.75        | $14.00        | [MEASURED] | 2
cursor-composer-1.5  | Cursor    | $1.25        | $10.00        | [MEASURED] | 2
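To make the rates concrete, here is a minimal sketch of how a single run's cost falls out of the table above. The token counts are invented for illustration; only the per-Mtok rates come from the table:

```python
# Rates from the table above, in USD per million tokens.
RATES = {
    "gpt-5.2":         {"input": 1.75, "output": 14.00},
    "claude-opus-4-5": {"input": 5.00, "output": 25.00},
}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one run: tokens scaled to millions, times the per-Mtok rate."""
    r = RATES[model]
    return (input_tokens / 1e6) * r["input"] + (output_tokens / 1e6) * r["output"]

# Example: 100k input and 20k output tokens on gpt-5.2
# costs 0.1 * 1.75 + 0.02 * 14.00 = $0.455.
print(f"${run_cost('gpt-5.2', 100_000, 20_000):.3f}")
```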
[CURSOR NOTE]

The Cursor CLI agent routes requests to underlying provider models, so per-token costs follow that provider's pricing. The Cursor Pro subscription ($20/month) covers included usage; overages are billed at API rates.
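A rough sketch of that billing shape follows. Only the $20/month Pro price and the overage-at-API-rates behavior come from the note above; the included-usage allowance is a placeholder, not Cursor's published figure:

```python
def cursor_monthly_bill(api_usage_usd: float,
                        subscription_usd: float = 20.0,
                        included_usage_usd: float = 20.0) -> float:
    """Subscription plus overage billed at API rates.

    included_usage_usd is a hypothetical placeholder; check Cursor's
    published plan details for the real allowance.
    """
    overage = max(0.0, api_usage_usd - included_usage_usd)
    return subscription_usd + overage
```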

[VERIFICATION UPDATE]

Added GPT-5.3-Codex and Cursor Composer 1.5 pricing for the Cursor CLI benchmark integration. Re-verified all existing models; no changes were needed.

FAQ

How are costs calculated?

Cost per pass = total cost of all evaluations for an agent / number of passing evaluations. This penalizes agents with high failure rates since wasted runs still cost money.
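In code, that definition is a division plus a guard; the evaluation records below are hypothetical stand-ins for the benchmark's actual data:

```python
def cost_per_pass(evaluations: list[dict]) -> float:
    """Total spend across all runs divided by the number of passing runs.

    Each evaluation dict is assumed to carry a 'cost' in USD and a
    'passed' flag; failed runs contribute cost but no passes.
    """
    total_cost = sum(e["cost"] for e in evaluations)
    passes = sum(1 for e in evaluations if e["passed"])
    if passes == 0:
        return float("inf")  # every run failed: infinite cost per pass
    return total_cost / passes

# Two passing runs at $1 each plus one $1 failure -> $3 / 2 = $1.50 per pass.
runs = [{"cost": 1.0, "passed": True},
        {"cost": 1.0, "passed": True},
        {"cost": 1.0, "passed": False}]
print(cost_per_pass(runs))  # 1.5
```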

Are costs measured or estimated?

Both. Models with measured token data from API logs are labeled [MEASURED]. Models with estimated rates are labeled [HEURISTIC]. Both are published so you can verify.

[RELATED TOPICS]

Patch verification

XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.

Automated vulnerability patching

AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.

Benchmark Results

62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.

Agent Cost Economics

Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.

Agent Configurations

13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.

Benchmark Methodology

How CVE-Agent-Bench evaluates 13 coding agents on 128 real vulnerabilities. Deterministic, reproducible, open methodology.

Agent Environment Security

AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.

Security Economics for Agentic Patching

ROI models for agentic patching, backed by verified pass/fail data and business-impact triage.

Validation Process

25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.

Cost Analysis

10 findings on what AI patching costs and whether it is worth buying. 1,664 evaluations analyzed.

Bug Complexity

128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.

Agent Strategies

How different agents approach the same bug. Strategy matters as much as model capability.

Execution Metrics

Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.

Automated Vulnerability Patching and PR Review

Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.

Continuous Learning from Verified Agent Runs

A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.

Signed Compliance Evidence for AI Agents

A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.

Compliance Evidence and Standards Alignment

How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.

See which agents produce fixes that work

128 CVEs. 13 agents. 1,664 evaluations. Agents learn from every run.