MCP Server Security
17 attack types across 4 surfaces. 7.2% of 1,899 open-source MCP servers contain vulnerabilities. Technical deep-dive with defense controls.
Attack surface map
MCPSecBench identifies 17 attack types across 4 primary surfaces. The most common: tool poisoning, data exfiltration, and cross-system privilege escalation.
Real-world findings
A study of 1,899 open-source MCP servers found 7.2% contain general vulnerabilities and 5.5% exhibit MCP-specific tool poisoning (arXiv:2506.13538). 53% use insecure static secrets; only 8.5% use OAuth (Astrix Security).
MCP server security: attack surfaces and defenses
Model Context Protocol (MCP) servers provide tools and data to AI agents. They're the primary integration point between agents and external services. Five arXiv papers and two industry reports document security gaps in this infrastructure.
Four attack categories
Source: arXiv:2506.02040
Tool Poisoning
Malicious tool descriptions trick agents into executing harmful actions. The tool name says one thing; the implementation does another. 36.5% average attack success rate; o1-mini: 72.8% (MCPTox benchmark).
Puppet Attacks
Hijacking agent behavior through crafted tool responses. The agent receives data that rewrites its instructions, redirecting subsequent actions to attacker-controlled servers.
Rug Pull Attacks
Post-install changes to tool behavior. The MCP server passes initial review, then alters its tool implementations after gaining trust. Traditional one-time audits don't catch this.
Malicious External Resources
Tool results reference external URLs, files, or services controlled by the attacker. The agent follows these references, expanding the attack surface beyond the MCP protocol.
Real-world findings: 1,899 servers
Source: arXiv:2506.13538
7.2%
General vulnerabilities
5.5%
MCP-specific tool poisoning
53%
Using insecure static secrets
Source: Astrix Security
8.5%
Using OAuth
Source: Astrix Security
Defense controls
Source: arXiv:2511.20920
Scoped authentication
Restrict tool permissions to minimum required scope. No wildcard access.
Provenance tracking
Sign tool outputs with COSE_Sign1. Verify origin before acting on results.
Sandboxing
Isolate MCP server execution. No access to host filesystem, network, or other tools.
Data loss prevention
Monitor and block data exfiltration paths. Detect cross-tool data leakage.
Governance
Continuous verification, not one-time review. Rug pull attacks invalidate point-in-time audits.
What XOR does
XOR's skill verification pipeline scans agent tools before execution, signs verified tools with COSE_Sign1, and produces SCITT provenance receipts. Unsigned or out-of-policy tools are blocked. See Building Secure Skills for the four-step checklist.
Sources
- arXiv:2503.23278 — MCP: Landscape, Security Threats, and Future Research Directions
- arXiv:2511.20920 — Securing the MCP: Risks, Controls, and Governance
- arXiv:2506.02040 — Beyond the Protocol: Unveiling Attack Vectors in MCP
- arXiv:2508.13220 — MCPSecBench: A Systematic Security Benchmark
- arXiv:2506.13538 — MCP at First Glance: Security and Maintainability
- Astrix Security — State of MCP Server Security 2025
- MCPTox — Agent Tool Poisoning Benchmark (arXiv:2508.14925)
[NEXT STEPS]
Related pages
FAQ
What is an MCP server?
Model Context Protocol (MCP) servers provide tools and data to AI agents. They're the primary integration point between agents and external services. 1,899 open-source MCP servers exist today.
How vulnerable are MCP servers?
7.2% of 1,899 open-source MCP servers contain general vulnerabilities. 5.5% exhibit MCP-specific tool poisoning. 85%+ of identified attacks compromise at least one platform (MCPSecBench, arXiv:2508.13220).
What are the main MCP attack types?
Four categories: Tool Poisoning (malicious tool descriptions), Puppet Attacks (hijacking agent behavior), Rug Pull Attacks (post-install changes), and Malicious External Resources (arXiv:2506.02040).
Patch verification
XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.
Automated vulnerability patching
AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.
Benchmark Methodology
How CVE-Agent-Bench evaluates 13 coding agents on 128 real vulnerabilities. Deterministic, reproducible, open methodology.
Agent Environment Security
AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.
Security Economics for Agentic Patching
Security economics for agentic patching. ROI models backed by verified pass/fail data and business-impact triage.
Validation Process
25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.
Cost Analysis
10 findings on what AI patching costs and whether it is worth buying. 1,664 evaluations analyzed.
Bug Complexity
128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.
Agent Strategies
How different agents approach the same bug. Strategy matters as much as model capability.
Execution Metrics
Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.
Pricing Transparency
Every cost number has a source. Published pricing models, measurement methods, and provider rates.
Automated Vulnerability Patching and PR Review
Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.
Getting Started with XOR GitHub App
Install in 2 minutes. First result in 15. One-click GitHub App install, first auto-review walkthrough, and engineering KPI triad.
Platform Capabilities
One install. Seven capabilities. Prompt-driven. CVE autopatch, PR review, CI hardening, guardrail review, audit packets, and more.
Dependabot Verification
Dependabot bumps versions. XOR verifies they're safe to merge. Reachability analysis, EPSS/KEV enrichment, and structured verdicts.
Compliance Evidence
Machine-readable evidence for every triaged vulnerability. VEX statements, verification reports, and audit trails produced automatically.
Compatibility and Prerequisites
Languages, build systems, CI platforms, and repository types supported by XOR. What you need to get started.
Command Reference
Every @xor-hardener command on one page. /review, /describe, /ask, /patch_i, /issue_spec, /issue_implement, and more.
Continuous Learning from Verified Agent Runs
A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.
Signed Compliance Evidence for AI Agents
A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.
Compliance Evidence and Standards Alignment
How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.
Agentic Third-Party Risk
33% of enterprise software will be agentic by 2028. 40% of those projects will be canceled due to governance failures. A risk overview for CTOs.
How Agents Get Attacked
20% jailbreak success rate. 42 seconds average. 90% of successful attacks leak data. Threat landscape grounded in published research.
Governing AI Agents in the Enterprise
92% of AI vendors claim broad data usage rights. 17% commit to regulatory compliance. Governance frameworks from NIST, OWASP, EU CRA, and Stanford CodeX.
OWASP Top 10 for Agentic Applications
The OWASP Agentic Top 10 mapped to real-world attack data and XOR capabilities. A reference page for security teams.
See which agents produce fixes that work
128 CVEs. 13 agents. 1,664 evaluations. Agents learn from every run.