Execution Metrics
Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.
What is measured
Each session records turns, tool calls, tokens, file operations, and timing. The median agent takes 8 turns to attempt a fix.
Turns vs outcomes
Token usage correlates with cost, not success. Agents that read extensively before editing behave similarly to one another, but longer sessions do not translate into higher pass rates.
What happens inside an agent session
973 sessions from 8 agents. Each session recorded turns, tool calls, tokens, file operations, and timing. The median agent takes 8 turns and invokes 15 tools to attempt a fix.
A session captures everything the agent does while working on a bug: reading files, analyzing code, generating patches, and applying fixes. We tracked every tool call, every token consumed, and every decision the agent made. This data reveals patterns about which approaches work and which ones get stuck.
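A session record of this kind can be modeled roughly as follows. This is a minimal sketch: the field and class names are illustrative, not XOR's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str          # e.g. "read_file", "apply_patch" (hypothetical tool names)
    duration_ms: int

@dataclass
class Session:
    agent: str                  # agent-model identifier
    cve_id: str                 # bug under test
    turns: int = 0
    tokens: int = 0
    tool_calls: list[ToolCall] = field(default_factory=list)
    files_read: int = 0
    files_edited: int = 0
    wall_time_s: float = 0.0

    def record_tool_call(self, call: ToolCall) -> None:
        self.tool_calls.append(call)

# One session: the agent takes a turn, reads a file, applies a patch.
s = Session(agent="example-agent", cve_id="CVE-0000-00000")
s.turns = 1
s.record_tool_call(ToolCall("read_file", 120))
s.record_tool_call(ToolCall("apply_patch", 340))
print(len(s.tool_calls))  # 2
```

Logging at this granularity is what makes the per-agent aggregates and the behavioral clustering possible downstream.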
Sessions vary widely. Some agents solve bugs in 5 turns. Others take 60+ turns exploring wrong paths before giving up. The relationship between session length and success is not linear - more turns does not mean better outcomes.
[KEY INSIGHT]
6,048 tool calls across 973 sessions
That is the computational footprint of running 8 agents against 128 real CVEs. More turns does not consistently mean better outcomes.
The cost per session varies too. Some agents spend $0.02 worth of API calls fixing a bug. Others burn $5+ in tokens before failing. Token usage becomes the primary cost driver when you run multiple agents.
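Converting tokens to dollars is a single multiplication once per-token prices are known. A sketch, with entirely hypothetical prices; real rates vary by model and by input vs output tokens:

```python
# Hypothetical per-million-token prices; actual model pricing differs.
PRICE_PER_M = {"input": 3.00, "output": 15.00}

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one session from its token counts."""
    return (input_tokens / 1_000_000) * PRICE_PER_M["input"] + \
           (output_tokens / 1_000_000) * PRICE_PER_M["output"]

# A light session vs a heavy one.
print(round(session_cost(5_000, 1_000), 4))       # 0.03
print(round(session_cost(2_000_000, 300_000), 2)) # 10.5
```

At these assumed rates, a multi-million-token session costs hundreds of times more than a short one, which is why token usage dominates the bill at scale.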
Average turns by agent
How many turns each agent model takes on average. Agents above 30 turns (highlighted) may be caught in exploration loops.
High turn counts reveal agents that over-explore. They read too many files, generate hypotheses, test them, hit dead ends, and restart. Some of these agents still succeed. Others burn tokens and fail anyway.
Per-agent metrics
Sessions, average turns, tool calls, and token usage by agent model. Dashes mark metrics that harness did not report.
| Agent | Sessions | Avg turns | Avg tools | Avg tokens |
|---|---|---|---|---|
| claude-claude-opus-4-6 | 122 | 49.5 | 31.2 | 2.6M |
| claude-claude-opus-4-5 | 114 | 32.3 | 19.7 | 1.6M |
| cursor-opus-4.6 | 126 | 17.9 | 0.0 | - |
| cursor-gpt-5.3-codex | 128 | 10.9 | 0.0 | - |
| cursor-gpt-5.2 | 127 | 8.4 | 0.0 | - |
| cursor-composer-1.5 | 127 | 7.5 | 0.0 | - |
| codex-gpt-5.2 | 111 | 5.0 | 0.0 | - |
| codex-gpt-5.2-codex | 118 | 5.0 | 0.0 | - |
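The per-agent averages above are plain aggregates over session records. A sketch of that computation with made-up sessions (not the benchmark data):

```python
from collections import defaultdict

# (agent, turns, tool_calls) per session; values are illustrative only.
sessions = [
    ("agent-a", 40, 25), ("agent-a", 60, 38),
    ("agent-b", 8, 12), ("agent-b", 12, 18), ("agent-b", 10, 15),
]

# Accumulate session count, total turns, and total tool calls per agent.
totals = defaultdict(lambda: [0, 0, 0])
for agent, turns, tools in sessions:
    t = totals[agent]
    t[0] += 1
    t[1] += turns
    t[2] += tools

for agent, (n, turns, tools) in sorted(totals.items()):
    print(f"{agent}: {n} sessions, avg turns {turns / n:.1f}, avg tools {tools / n:.1f}")
```

Running this prints `agent-a: 2 sessions, avg turns 50.0, avg tools 31.5` and the corresponding line for `agent-b`.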
[NEXT STEPS]
Understand agent behavior
Session metrics feed the behavioral clustering. Cost per session drives the economics. Both pages use this data.
Explore more
- Pricing transparency - how token usage converts to dollar costs
- Agent profiles - per-agent breakdowns and agreement patterns
FAQ
What data is recorded per session?
Turns, tool calls, tokens consumed, file operations, and timing. Every action the agent takes is logged and available for analysis.
Does more turns mean better results?
No. Some agents fix bugs in under 10 turns. Others spend 50+ turns and still fail. The relationship between session length and success is not linear.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,920 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
15 agent-model configurations benchmarked on real vulnerabilities. Compare pass rates and costs.
Benchmark Methodology
How XOR benchmarks AI coding agents on real security vulnerabilities. Reproducible, deterministic, and transparent.
Validation Process
25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.
See which agents produce fixes that work
128 CVEs. 15 agents. 1,920 evaluations. Agents learn from every run.