Bug Complexity
128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.
Five difficulty bands
From floor (all agents pass) to ceiling (no agent passes). The wider the medium band, the better the benchmark discriminates between agents.
What the ceiling means
Ceiling samples are beyond current AI capability. Human review still wins for these. AI patching works best combined with human escalation for the hard tail.
How we score vulnerability complexity
128 CVE samples ranked by how many of the 15 agents fix them. Floor samples: every agent passes. Ceiling samples: no agent passes. The spread between floor and ceiling tells you how much headroom AI patching has left.
Difficulty is determined by how many agents succeed on a given bug. If all 15 agents fix a bug, it is easy; if zero agents fix it, it is currently impossible. This score is objective and data-driven rather than a subjective guess, and it reveals which vulnerability types agents handle well and which defeat every approach.
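The scoring rule above can be sketched in a few lines. This is an illustrative reconstruction, not the benchmark's actual code; the cutoffs between the easy, medium, and hard tiers are assumptions, since only the floor (all pass) and ceiling (none pass) boundaries are defined in the text.

```python
N_AGENTS = 15  # number of agent configurations in the benchmark

def difficulty_tier(agents_passed: int) -> str:
    """Map a bug's pass count (0..15) to an empirical difficulty tier."""
    if agents_passed == N_AGENTS:
        return "floor"    # every agent fixes it
    if agents_passed == 0:
        return "ceiling"  # no agent can fix it
    ratio = agents_passed / N_AGENTS
    # Interior cutoffs below are assumptions for illustration.
    if ratio >= 2 / 3:
        return "easy"
    if ratio >= 1 / 3:
        return "medium"
    return "hard"

print(difficulty_tier(15))  # floor
print(difficulty_tier(0))   # ceiling
print(difficulty_tier(7))   # medium (7/15 is between 1/3 and 2/3)
```

Because the tier is a pure function of the pass count, re-ranking after adding a new agent only requires recounting passes.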
[KEY INSIGHT]
21 bugs no agent can fix
21 of 128 samples are beyond current AI capability. The oracle ceiling is 80.5%: even a perfect ensemble of all 15 agents fixes only 80.5% of bugs.
The ceiling matters because it defines the realistic maximum. No agent or ensemble can reach 100%, because some bugs require human expertise. Knowing the ceiling keeps teams from over-investing in agent optimization once diminishing returns have set in.
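The oracle ceiling described above is the fraction of bugs fixed by at least one agent, i.e. the best any routing or ensemble strategy could achieve. A minimal sketch, assuming a hypothetical `results[bug][agent] -> bool` mapping of evaluation outcomes:

```python
def oracle_ceiling(results: dict[str, dict[str, bool]]) -> float:
    """Fraction of bugs that at least one agent fixes."""
    fixed_by_someone = sum(
        1 for agent_outcomes in results.values() if any(agent_outcomes.values())
    )
    return fixed_by_someone / len(results)

# Toy example: 4 bugs, 2 agents. cve-3 is a ceiling sample (nobody fixes it).
results = {
    "cve-1": {"a1": True,  "a2": False},
    "cve-2": {"a1": False, "a2": True},
    "cve-3": {"a1": False, "a2": False},
    "cve-4": {"a1": True,  "a2": True},
}
print(oracle_ceiling(results))  # 0.75
```

Any single agent's pass rate is bounded above by this number, which is why the gap between the best single agent and the oracle is the realistic headroom for ensembling.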
Difficulty Distribution
Sample counts across 5 empirical difficulty tiers: easy, medium, hard, floor, ceiling. The distribution shows where agent performance varies. If most bugs cluster at the floor or ceiling, the benchmark does not discriminate - all agents are equally good or bad. But if bugs spread across the middle bands, that is where agent selection matters.
Your codebase will have its own distribution, which may differ from this sample. If you maintain legacy C code with buffer overflows, your bugs might cluster in the medium band. If you run modern Rust with dependency updates, your bugs might be mostly floor samples. Understanding your own difficulty distribution drives ROI modeling.
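To see your own distribution, bucket each bug's pass count into the five tiers and count. The sample data and the interior tier cutoffs below are hypothetical; only the floor and ceiling boundaries come from the benchmark's definition.

```python
from collections import Counter

def tier(agents_passed: int, total: int = 15) -> str:
    """Same tiering rule as the benchmark: floor/ceiling are exact,
    interior cutoffs are illustrative assumptions."""
    if agents_passed == total:
        return "floor"
    if agents_passed == 0:
        return "ceiling"
    frac = agents_passed / total
    return "easy" if frac >= 2 / 3 else "medium" if frac >= 1 / 3 else "hard"

# Hypothetical pass counts for ten bugs in your own codebase.
pass_counts = [15, 15, 0, 7, 12, 3, 0, 15, 9, 1]

distribution = Counter(tier(n) for n in pass_counts)
print(dict(distribution))
```

If most mass lands in `floor` or `ceiling`, agent choice barely matters for your bugs; a wide `medium` band is where comparing agents pays off.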
Hardest and easiest samples
The extremes. Floor samples are reliable for all agents. Ceiling samples remain open problems.
Hardest bugs
| Project | Pass rate | Agents passed |
|---|---|---|
| stat-reader | 0% | 0/15 |
| disassembly-engine | 0% | 0/15 |
| disassembly-engine | 0% | 0/15 |
| disassembly-engine | 0% | 0/15 |
| disassembly-engine | 0% | 0/15 |
| disassembly-engine | 0% | 0/15 |
| js-engine | 0% | 0/15 |
| js-engine | 0% | 0/15 |
| data-compressor | 0% | 0/15 |
| service-proxy | 0% | 0/15 |
Easiest bugs
| Project | Pass rate | Agents passed |
|---|---|---|
| text-shaping | 100% | 15/15 |
| text-shaping | 100% | 15/15 |
| git-library | 100% | 15/15 |
| network-switch | 100% | 15/15 |
| packet-analyzer | 100% | 15/15 |
| image-processor | 93% | 14/15 |
| text-shaping | 93% | 14/15 |
| text-shaping | 93% | 14/15 |
| text-shaping | 93% | 14/15 |
| text-shaping | 93% | 14/15 |
[NEXT STEPS]
See which agents handle the hard bugs
The behavior page shows how agents cluster by approach. The results page shows per-agent pass rates so you can match agent to difficulty.
Explore more
- Execution metrics: how many turns agents take by difficulty
- Methodology: how difficulty scores are computed
FAQ
How is bug difficulty measured?
Each of the 128 bugs is scored by how many of the 15 agents fix it. If all agents pass, it is a floor sample. If none pass, it is a ceiling sample.
What does complexity mean for my team?
If your codebase has mostly simple dependency bumps, expect higher fix rates than the benchmark average. Complex C/C++ multi-file patches will be closer to the hard band.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,920 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
15 agent-model configurations benchmarked on real vulnerabilities. Compare pass rates and costs.
Benchmark Methodology
How XOR benchmarks AI coding agents on real security vulnerabilities. Reproducible, deterministic, and transparent.
Validation Process
25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.
See which agents produce fixes that work
128 CVEs. 15 agents. 1,920 evaluations. Agents learn from every run.