[METHODOLOGY]

Benchmark Methodology

How CVE-Agent-Bench evaluates 9 coding agents on 136 real vulnerabilities. Deterministic, reproducible, open methodology.

Evaluation Pipeline

Each vulnerability is packaged with a known-vulnerable environment, a test harness, and automated verification. Results are deterministic and reproducible.

Scoring Method

Pass = fix resolves the CVE.
Fail = fix does not resolve the CVE.
Build = fix does not compile.
Infra = infrastructure failure (excluded from the pass rate).
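
As a sketch of how the four outcomes and the pass-rate exclusion fit together (the names and fields below are illustrative, not the benchmark's actual schema):

```python
from enum import Enum

class Outcome(Enum):
    PASS = "pass"    # fix builds and the CVE no longer reproduces
    FAIL = "fail"    # fix builds but the CVE still reproduces
    BUILD = "build"  # fix does not compile
    INFRA = "infra"  # harness error, excluded from the pass rate

def score(infra_error: bool, build_ok: bool, bug_reproduces: bool) -> Outcome:
    """Map one evaluation run to an outcome (illustrative)."""
    if infra_error:
        return Outcome.INFRA
    if not build_ok:
        return Outcome.BUILD
    return Outcome.FAIL if bug_reproduces else Outcome.PASS

def pass_rate(outcomes: list[Outcome]) -> float:
    """Pass rate over scored runs; infra failures do not count."""
    scored = [o for o in outcomes if o is not Outcome.INFRA]
    return sum(o is Outcome.PASS for o in scored) / len(scored)
```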

Total evaluations: 1,224
Overall pass rate: 42.3%
Patches analyzed: 1,736
Benchmark status: VALID

Evaluation Pipeline

Each bug in the benchmark has three components: a container with the vulnerable code, a way to trigger the bug, and an automated test setup. Agents receive the vulnerable code and must produce a fix. The fix is applied, the bug is re-triggered, and the outcome is recorded.
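
A minimal sketch of that loop, assuming a hypothetical per-CVE directory with a containerized build step and a trigger script (all names, commands, and the treatment of unappliable patches here are illustrative):

```python
import subprocess
from pathlib import Path

def evaluate(cve_dir: Path, patch: str) -> str:
    """Apply an agent's patch, rebuild, re-trigger the bug, record the outcome."""
    try:
        # Apply the agent-generated patch to the vulnerable source tree.
        applied = subprocess.run(["git", "apply", "-"], input=patch,
                                 text=True, cwd=cve_dir)
        if applied.returncode != 0:
            return "fail"  # assumption: an unappliable patch cannot resolve the CVE
        # Rebuild inside the pinned container; a failed build scores "build".
        build = subprocess.run(["docker", "compose", "run", "--rm", "build"],
                               cwd=cve_dir)
        if build.returncode != 0:
            return "build"
        # Re-trigger the bug; exit 0 means it no longer reproduces.
        trigger = subprocess.run(["docker", "compose", "run", "--rm", "trigger"],
                                 cwd=cve_dir)
        return "pass" if trigger.returncode == 0 else "fail"
    except OSError:
        return "infra"  # harness-level failure, excluded from the pass rate
```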

Pass: 811
Fail: 370
Build: 455
Infra: 100

Validity Checks

We investigated 5 potential confounds that could invalidate benchmark results.

Training contamination[PASS]

None of our test bugs appear in agent training data.

Specification leakage[PASS]

Agents receive only the vulnerable code and a script to trigger the bug. No hints about the fix.

Scoring correctness[PASS]

Automated safety checks produce deterministic pass/fail. Manual audit of 50 samples confirmed 100% scoring accuracy.

Non-determinism[WARN]

Temperature >0 introduces run-to-run variance. We run a single attempt per agent per CVE, matching real-world deployment conditions.

Infrastructure meta-failures[PASS]

100 infra failures (8.2% of evaluations), all root-cause classified and excluded from the pass rate.

FAQ

How are vulnerabilities selected?

Vulnerabilities come from the public CVE database and ARVO v3.0.0 corpus. Each sample is packaged with a known-vulnerable environment, a test harness, and a ground-truth patch for comparison.

How is pass/fail determined?

XOR writes a verifier for each CVE. The agent's patch is applied in an isolated environment and the verifier is re-run; if it passes, the fix is counted as verified. No manual review.
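
In practice a verifier in this style can be a short script that replays the proof-of-concept input and checks for the crash signature. The binary name, input file, and sanitizer marker below are hypothetical:

```python
import subprocess
import sys

def verify(binary: str, poc: str) -> bool:
    """Return True when the proof-of-concept no longer triggers the bug."""
    result = subprocess.run([binary, poc], capture_output=True, text=True)
    # AddressSanitizer reports detected memory bugs on stderr.
    crashed = (result.returncode != 0
               or "ERROR: AddressSanitizer" in result.stderr)
    return not crashed

if __name__ == "__main__":
    # Exit 0 = fix verified, nonzero = CVE still reproduces.
    sys.exit(0 if verify("./target_patched", "poc_input.bin") else 1)
```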

Are results reproducible?

Yes. Each evaluation runs in a deterministic Docker environment with pinned dependencies. Results are cryptographically signed for independent verification.
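
Independent verification of a signed result might look like the sketch below, assuming Ed25519 signatures over a canonically serialized run record (the record fields and key handling are illustrative, not XOR's actual scheme):

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# One evaluation's run record (fields are illustrative).
record = {"cve": "CVE-2021-3156", "agent": "example-agent", "outcome": "pass"}

# Canonical serialization so signer and verifier hash identical bytes.
payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(payload)

# Anyone holding the public key can check the record independently;
# verify() raises InvalidSignature if the record or signature was altered.
private_key.public_key().verify(signature, payload)
print("record verified")
```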

[RELATED TOPICS]

Patch verification

XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.

Automated vulnerability patching

AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.

Benchmark Results

50.7% pass rate. $4.16 per fix. Real data from 1,224 evaluations.

Agent Cost Economics

Fix vulnerabilities for $4.16–$87 with agents. 100x cheaper than incident response. Real cost data.

Agent Configurations

9 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.

Agent Environment Security

AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.

Security Economics for Agentic Patching

Security economics for agentic patching. ROI models backed by verified pass/fail data and business-impact triage.

Automated Vulnerability Patching and PR Review

Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.

Continuous Learning from Verified Agent Runs

A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.

Signed Compliance Evidence for AI Agents

A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.

Compliance Evidence and Standards Alignment

How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.

See which agents produce fixes that work

136 CVEs. 9 agents. 1,224 evaluations. Agents learn from every run.