Building Secure Agent Skills
36.82% of agent skills have known vulnerabilities. A four-step verification checklist for skill builders.
The supply chain problem
Agent skills run with the agent's permissions. A Snyk audit of 3,984 skills found 36.82% contain security flaws and 13.4% have critical issues including credential theft and data exfiltration.
Verification checklist
Sandbox execution, permission audit, dependency scanning, and signature verification. Every skill should pass all four before running in production.
The supply chain problem
Agent skills are executable code with access to filesystems, networks, and credentials. Unlike traditional dependencies, skills combine three attack surfaces that standard dependency scanners miss:
Executable artifacts
Scripts, binaries, and server processes that run with the agent's permissions. A backdoored skill runs as the agent.
Natural language instructions
Agent directives and prompt templates that can contain injection payloads. The skill tells the agent what to do next.
Access wiring
Credentials, API keys, and permission scopes. A compromised skill inherits every secret the agent can reach.
A Snyk audit of 3,984 agent skills found 36.82% contain at least one security flaw and 13.4% have critical issues including credential theft and data exfiltration (source: Snyk ToxicSkills, Feb 2026).
Second-order risk: downstream propagation
Skills removed from one marketplace remain discoverable through downstream registries that automatically index upstream repositories. Removal does not equal mitigation. Pluto Security researchers demonstrated this with backdoored skills distributed via ClawHub that persisted in SkillsMP after the originals were taken down (source: Pluto Security, Feb 2026).
Skill verification checklist
Every skill should pass these checks before an agent uses it in production:
1. Sandbox execution
Run the skill in an isolated container. No host filesystem access, no network egress except to declared endpoints. If the skill needs broader access, that is a finding.
$ xor scan --skills agent-config.json
Scanning 8 skill configurations...
✓ 6 skills pass sandbox constraints
✗ file-writer: requests host filesystem access
✗ api-proxy: undeclared network egress to 3 domains
Action: block 2 skills, enforce approved list
2. Permission audit
Check what the skill requests access to. Apply least privilege: a code-formatting skill should not need network access, and a search skill should not need filesystem write access.
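A least-privilege audit reduces to comparing the permissions a skill requests against the baseline its category justifies. The sketch below is illustrative: the category baselines, permission strings, and skill shape are hypothetical, not a real policy format.

```python
# Hypothetical least-privilege audit: flag any permission a skill requests
# beyond the baseline its category justifies. Categories, permission names,
# and baselines here are invented for illustration.
BASELINES = {
    "formatter": {"fs:read", "fs:write"},     # rewrites files, never talks to the network
    "search":    {"fs:read", "net:egress"},   # queries endpoints, reads an index
}

def audit(skill: dict) -> list[str]:
    """Return the permissions requested beyond the category baseline."""
    allowed = BASELINES.get(skill["category"], set())
    return sorted(set(skill["requests"]) - allowed)

fmt = {"name": "code-fmt", "category": "formatter",
       "requests": ["fs:read", "net:egress"]}  # a formatter asking for network access
print(audit(fmt))  # → ['net:egress']
```

Any non-empty result is a finding: either the manifest over-requests, or the skill does something its category does not explain.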
3. Dependency scanning
Scan all transitive dependencies against known vulnerability databases. Skills often bundle their own dependencies outside the project's lockfile.
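In essence, a dependency scan joins the skill's pinned versions against an advisory database. The advisory table and package names below are invented for illustration; a real scanner queries CVE feeds and walks transitive dependencies, not just the top-level lockfile.

```python
# Toy dependency scan: check a skill's pinned dependencies against a local
# advisory table. Advisory IDs and package names are invented; a real
# scanner queries CVE databases and resolves transitive dependencies.
ADVISORIES = {
    ("leftpadx", "1.0.2"): "EXAMPLE-2026-0001: arbitrary file read",
}

def scan(lockfile: dict[str, str]) -> list[str]:
    """Return a finding line for each pinned dependency with a known advisory."""
    return [f"{pkg}=={ver}: {ADVISORIES[(pkg, ver)]}"
            for pkg, ver in lockfile.items() if (pkg, ver) in ADVISORIES]

findings = scan({"leftpadx": "1.0.2", "requestz": "2.31.0"})
print(findings)
```

Exact-version pins are what make this lookup cheap and deterministic, which is why the builder guidance below insists on lockfiles.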
4. Signature verification
Require cryptographic signatures on skill packages. Unsigned skills are untrusted by default. Signed skills can be traced to an author and revoked.
How XOR verifies skills before deployment
XOR treats skills as a supply chain category. Before any skill runs in production, it passes through the same verification pipeline used for agent-generated patches:
Scan
Dependencies checked against CVE databases and vulnerability feeds.
Sandbox
Skill executed in isolation. Permission violations trigger immediate termination.
Sign
Verified skills receive a COSE_Sign1 signature. Unsigned skills are blocked.
Monitor
Runtime behavior logged as first-class security events. Anomalies flagged.
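The four stages above compose naturally as a fail-fast chain: the first stage that rejects the skill blocks deployment. This is a minimal sketch with stub checks standing in for the real scanner, sandbox runtime, COSE verifier, and telemetry sink; the stage names follow the pipeline, but everything else is assumed.

```python
# Fail-fast sketch of the scan → sandbox → sign → monitor pipeline.
# Each stage is a stub returning (ok, detail); real stages would invoke
# a scanner, a container runtime, a COSE_Sign1 verifier, and a log sink.
def run_pipeline(skill: dict, stages) -> str:
    for name, check in stages:
        ok, detail = check(skill)
        if not ok:
            return f"blocked at {name}: {detail}"
    return "deployed"

stages = [
    ("scan",    lambda s: (not s.get("vuln_deps"), "vulnerable dependency")),
    ("sandbox", lambda s: (not s.get("undeclared_egress"), "undeclared egress")),
    ("sign",    lambda s: (s.get("signed", False), "unsigned package")),
    ("monitor", lambda s: (True, "")),  # runtime stage: attaches logging, never blocks here
]

print(run_pipeline({"signed": False}, stages))  # → blocked at sign: unsigned package
```

Ordering matters: signing is checked before anything runs with real permissions, and monitoring is the only stage that continues past deployment.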
Building skills that pass verification
If you build agent skills, these practices reduce friction with verification systems:
Declare all permissions upfront
List filesystem paths, network endpoints, and credential scopes in the skill manifest. Undeclared access is blocked by default.
Pin dependencies with lockfiles
Include a lockfile in the skill package. Floating versions introduce supply chain risk through dependency confusion.
Include content hashes
Provide SHA256 hashes for all bundled artifacts. Content-addressable verification catches tampered packages.
Sign your releases
Use COSE_Sign1 (RFC 9052) to sign skill packages. Verification systems can then trace the skill to a known author and check revocation status.
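Of these practices, content hashing is the easiest to sketch end to end: every bundled artifact's SHA256 must match the hash declared in the manifest. The manifest shape below is hypothetical; only the hashing itself is standard.

```python
# Sketch of content-addressable verification for a skill package: each
# bundled artifact's SHA256 must match the digest declared in the manifest.
# The manifest layout ({"sha256": {name: digest}}) is assumed, not a spec.
import hashlib

def verify_hashes(manifest: dict, artifacts: dict) -> list[str]:
    """Return names of artifacts whose bytes do not match their declared hash."""
    tampered = []
    for name, declared in manifest["sha256"].items():
        actual = hashlib.sha256(artifacts[name]).hexdigest()
        if actual != declared:
            tampered.append(name)
    return tampered

payload = b"echo hello"
manifest = {"sha256": {"run.sh": hashlib.sha256(payload).hexdigest()}}
assert verify_hashes(manifest, {"run.sh": payload}) == []          # intact package
assert verify_hashes(manifest, {"run.sh": b"evil"}) == ["run.sh"]  # tampered artifact
```

A signature over the manifest then extends this guarantee to the whole package: verify the signature once, then verify each artifact against its declared hash.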
Standards and signing
Three IETF standards cover the full lifecycle of skill verification:
COSE_Sign1 (RFC 9052)
The signing envelope. Each skill package and each invocation gets a cryptographic signature tied to the author's identity.
SCITT
Supply Chain Integrity, Transparency, and Trust. Provides chain-of-custody receipts so every skill has a verifiable provenance trail.
RATS
Remote Attestation Procedures. Verifies the execution environment itself, confirming the sandbox is genuine and unmodified.
Together these standards provide non-repudiation and provenance for every skill invocation. See standards compliance for how XOR integrates with each.
Secure your agent supply chain
FAQ
What makes an agent skill insecure?
Agent skills combine three attack surfaces: executable artifacts, natural language instructions, and access wiring. Snyk audited 3,984 skills and found 36.82% have at least one security flaw (source: Snyk ToxicSkills, Feb 2026).
How does XOR verify agent skills?
Four steps: scan (CVE databases), sandbox (isolated execution with permission checks), sign (COSE_Sign1 signature), monitor (runtime anomaly detection). Unsigned or out-of-policy skills are blocked.
What IETF standards apply to skill signing?
COSE_Sign1 (RFC 9052) for the signing envelope, SCITT for chain-of-custody receipts, and RATS for attestation of the execution environment.
Patch verification
XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.
Automated vulnerability patching
AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.
Benchmark Results
62.7% pass rate. $2.64 per fix. Real data from 1,664 evaluations.
Agent Cost Economics
Fix vulnerabilities for $2.64–$52 with agents. 100x cheaper than incident response. Real cost data.
Agent Configurations
13 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.
Benchmark Methodology
How CVE-Agent-Bench evaluates 13 coding agents on 128 real vulnerabilities. Deterministic, reproducible, open methodology.
Agent Environment Security
AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.
Security Economics for Agentic Patching
Security economics for agentic patching. ROI models backed by verified pass/fail data and business-impact triage.
Validation Process
25 questions we ran against our own data before publishing. Challenges assumptions, explores implications, extends findings.
Cost Analysis
10 findings on what AI patching costs and whether it is worth buying. 1,664 evaluations analyzed.
Bug Complexity
128 vulnerabilities scored by difficulty. Floor = every agent fixes it. Ceiling = no agent can.
Agent Strategies
How different agents approach the same bug. Strategy matters as much as model capability.
Execution Metrics
Per-agent session data: turns, tool calls, tokens, and timing. See what happens inside an agent run.
Pricing Transparency
Every cost number has a source. Published pricing models, measurement methods, and provider rates.
Automated Vulnerability Patching and PR Review
Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.
Continuous Learning from Verified Agent Runs
A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.
Signed Compliance Evidence for AI Agents
A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.
Compliance Evidence and Standards Alignment
How XOR signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.
See which agents produce fixes that work
128 CVEs. 13 agents. 1,664 evaluations. Agents learn from every run.