Execution Metrics
Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.
What is measured
Each session records turns, tool calls, tokens, file operations, and timing. The median agent takes 8 turns to attempt a fix.
Turns vs outcomes
Token usage correlates with cost, not success. Agents that read extensively before editing cluster together in turn count, but longer sessions do not consistently produce higher pass rates.
What happens inside an agent session
973 sessions from 8 agents. Each session recorded turns, tool calls, tokens, file operations, and timing. The median agent takes 8 turns and invokes 15 tools to attempt a fix.
Key insight
6,048 tool calls across 973 sessions
That is the computational footprint of running 8 agents against 128 real CVEs. More turns does not consistently mean better outcomes.
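The per-session record described above can be sketched as a small data structure plus an aggregate. This is a minimal illustration with made-up numbers; the field names are hypothetical, not XOR's actual schema:

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical shape of one recorded session.
# Field names are illustrative, not XOR's actual schema.
@dataclass
class Session:
    agent: str        # e.g. "claude-claude-opus-4-5"
    turns: int        # conversation turns taken
    tool_calls: int   # tools invoked
    tokens: int       # tokens consumed
    files_touched: int
    duration_s: float
    passed: bool      # did the patch pass verification?

# Toy data: three sessions with invented values.
sessions = [
    Session("agent-a", 8, 15, 120_000, 3, 95.0, True),
    Session("agent-a", 52, 31, 2_400_000, 9, 610.0, False),
    Session("agent-b", 5, 0, 0, 2, 40.0, True),
]

# Summaries like "the median agent takes 8 turns" come from
# aggregates over records like these.
print(median(s.turns for s in sessions))  # → 8
```

Note the second session: more than six times the turns of the first, yet it failed verification, which is the "more turns does not consistently mean better outcomes" point in miniature.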
Average turns by agent
How many turns each agent model takes on average. Agents averaging more than 30 turns may be caught in exploration loops.
Per-agent metrics
Sessions, average turns, tool calls, and token usage by agent model. A dash indicates the metric was not recorded for that harness.
| Agent | Sessions | Avg turns | Avg tools | Avg tokens |
|---|---|---|---|---|
| claude-claude-opus-4-6 | 122 | 49.5 | 31.2 | 2.6M |
| claude-claude-opus-4-5 | 114 | 32.3 | 19.7 | 1.6M |
| cursor-opus-4.6 | 126 | 17.9 | 0.0 | - |
| cursor-gpt-5.3-codex | 128 | 10.9 | 0.0 | - |
| cursor-gpt-5.2 | 127 | 8.4 | 0.0 | - |
| cursor-composer-1.5 | 127 | 7.5 | 0.0 | - |
| codex-gpt-5.2 | 111 | 5.0 | 0.0 | - |
| codex-gpt-5.2-codex | 118 | 5.0 | 0.0 | - |
FAQ
What data is recorded per session?
Turns, tool calls, tokens consumed, file operations, and timing. Every action the agent takes is logged and available for analysis.
Does more turns mean better results?
No. Some agents fix bugs in under 10 turns. Others spend 50+ turns and still fail. The relationship between session length and success is not linear.
Patch verification
XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.
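The verify-then-feed-back loop described above can be sketched as follows. The verifier and feedback interfaces here are hypothetical stand-ins, not XOR's API:

```python
from typing import Callable

# Hypothetical interface: a patch is a diff string; a verifier runs the
# vulnerability's reproduction test against the patched code.
Verifier = Callable[[str], bool]

def evaluate(patch: str, verify: Verifier, feedback_log: list) -> bool:
    """Ship the patch if the verifier passes; otherwise record the
    failure so it can feed back into the agent harness."""
    if verify(patch):
        return True              # fix passes → it ships
    feedback_log.append(patch)   # failure → feedback for future runs
    return False

# Toy verifier: accepts any patch that mentions input sanitization.
toy_verify = lambda patch: "sanitize" in patch

failures: list = []
print(evaluate("add sanitize() call before query", toy_verify, failures))  # → True
print(evaluate("bump dependency version", toy_verify, failures))           # → False
print(len(failures))  # → 1 failed patch recorded for feedback
```

The key design point is that failure is not discarded: every non-passing patch becomes training signal for the harness.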
Automated vulnerability patching
AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.
Benchmark Methodology
How CVE-Agent-Bench evaluates 13 coding agents on 128 real vulnerabilities. Deterministic, reproducible, open methodology.
Agent Environment Security
AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.
Security Economics for Agentic Patching
ROI models backed by verified pass/fail data and business-impact triage.
Validation Process
25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.
Cost Analysis
10 findings on what AI patching costs and whether it is worth buying. 1,664 evaluations analyzed.
Bug Complexity
128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.
Agent Strategies
How different agents approach the same bug. Strategy matters as much as model capability.
Pricing Transparency
Every cost number has a source. Published pricing models, measurement methods, and provider rates.
Automated Vulnerability Patching and PR Review
Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.
Continuous Learning from Verified Agent Runs
A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.
Signed Compliance Evidence for AI Agents
A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.
Compliance Evidence and Standards Alignment
How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.
See which agents produce fixes that work
128 CVEs. 13 agents. 1,664 evaluations. Agents learn from every run.