[STANDARDS]

Compliance Evidence and Standards Alignment

How XOR's signed audit trails produce evidence for SOC 2, EU AI Act, PCI DSS, NIST, and other compliance frameworks.


Standards-aligned compliance evidence

Outcome: Produce audit-ready evidence for compliance teams from every agent run.

Mechanism: Signed traces aligned with IETF, SCITT, and RATS standards. Produces evidence for SOC 2, EU AI Act, Cyber Resilience Act, PCI DSS, and FedRAMP.

Proof: Trace fields are already emitted in XOR evaluations today.

Compliance requires evidence, not promises

Regulatory frameworks require audit trails for AI systems. Current agent logs do not meet evidentiary standards. Verifiable Vibes produces cryptographically signed traces that satisfy compliance requirements across multiple frameworks.

Framework alignment

  1. IETF: Submitted as Internet-Draft with CDDL schema specification
  2. RATS: Agent claims structured as remote attestation evidence
  3. SCITT: Traces registered on transparent supply chain ledger
  4. PCI DSS: Cryptographic audit trails produce evidence for Requirement 10 logging and monitoring controls
  5. SOC 2: Signed traces produce Trust Services monitoring evidence
  6. NIST CSF: Covers Identify, Protect, and Detect functions for AI systems
  7. EU AI Act: Produces evidence for Article 13 transparency and Article 31 accountability requirements

Status

XOR is submitting an open standard for signed AI agent audit logs through the IETF (the same body that standardized HTTP and TLS). We implement it today.
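
For illustration only, the sketch below shows roughly what a signed trace record could contain. The field names are hypothetical and do not reproduce the draft's actual CDDL schema; the point is that a canonicalized record yields a stable digest that can then be signed.

```python
# Hypothetical trace record; field names are illustrative, not the draft's CDDL schema.
import hashlib
import json

trace_record = {
    "session_id": "example-session-001",      # which agent run this record covers
    "agent": "example-agent",                 # agent-model configuration used
    "actions": [
        {"type": "file_change", "path": "src/app.py", "diff_sha256": "placeholder"},
        {"type": "test_run", "result": "pass"},
    ],
}

# Canonicalize the record so the digest (and any signature over it) is reproducible.
payload = json.dumps(trace_record, sort_keys=True, separators=(",", ":")).encode()
print(hashlib.sha256(payload).hexdigest())
```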

Built on open internet standards

Supply chain integrity

Signed records that prove agent actions are authentic and unmodified. Based on IETF SCITT.

Remote verification

Independent verification of agent behavior. Based on IETF RATS.

Secure updates

Trusted update and deployment framework. Based on IETF SUIT.
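
As a rough illustration of the remote verification idea in this section: a verifier appraises the claims carried in a trace against its own policy before accepting the run. The claim names and policy fields below are hypothetical and are not the RATS vocabulary.

```python
# Toy appraisal check; claim names and policy fields are hypothetical.
def appraise(evidence: dict, policy: dict) -> bool:
    """Accept a run only if every rule in the verifier's policy holds."""
    return all([
        evidence.get("signature_valid") is True,
        evidence.get("tests_passed") is True,
        evidence.get("agent") in policy["allowed_agents"],
    ])

evidence = {"agent": "example-agent", "signature_valid": True, "tests_passed": True}
policy = {"allowed_agents": {"example-agent"}}
print(appraise(evidence, policy))  # True: the run satisfies the verifier's policy
```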

What XOR implements today

  • A complete record of every agent session and file change
  • Digital signatures so records cannot be altered
  • Test reports attached to every agent pull request

What this means for audits

Signed audit logs give compliance teams clear evidence of what an agent did, what it changed, and whether the fix passed. This reduces audit time and shortens approval cycles.
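
To make the signing step concrete, here is a minimal sketch using an Ed25519 key from the Python cryptography package. It is not XOR's actual implementation or key management; it only shows that a signature over the canonical record lets an auditor detect any later modification.

```python
# Minimal signing sketch; not XOR's actual signing code or key management.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

record = b'{"session_id":"example-session-001","result":"pass"}'  # canonicalized trace

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(record)          # produced when the agent run completes
public_key = private_key.public_key()         # shared with auditors for verification

try:
    public_key.verify(signature, record)      # raises if the record was altered
    print("record verified")
except InvalidSignature:
    print("record was tampered with")
```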

[NEXT STEPS]

Start building your compliance evidence

FAQ

Which standards does Verifiable Vibes align with?

IETF (Internet-Draft submission), RATS (Remote Attestation Procedures), SCITT (Supply Chain Integrity), PCI DSS (audit trail requirements), SOC 2 (system monitoring), NIST CSF (identify/protect/detect functions), and EU AI Act (Article 13 transparency, Article 31 accountability).

Does this help with SOC 2 compliance?

Yes. SOC 2 Trust Services Criteria require monitoring of system operations. Verifiable agent traces produce cryptographically signed evidence of what AI agents did, when, and with what outcome.

How does SCITT alignment work?

SCITT defines transparent ledgers for supply chain artifacts. Agent traces are registered as SCITT claims, providing an immutable record that can be independently verified by any party with ledger access.
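
A toy model of that registration flow, assuming a simple hash chain instead of the COSE-signed statements and Merkle-tree receipts that SCITT actually specifies, might look like this:

```python
# Toy append-only ledger; real SCITT uses COSE-signed statements and Merkle receipts.
import hashlib

class ToyLedger:
    def __init__(self):
        self.entries = []        # (claim_hash, running_head) per registered claim
        self.head = "0" * 64

    def register(self, claim: bytes) -> int:
        claim_hash = hashlib.sha256(claim).hexdigest()
        self.head = hashlib.sha256((self.head + claim_hash).encode()).hexdigest()
        self.entries.append((claim_hash, self.head))
        return len(self.entries) - 1            # entry index stands in for a receipt

    def verify(self, index: int, claim: bytes) -> bool:
        # Any party with ledger access can recompute the chain and check inclusion.
        head = "0" * 64
        for claim_hash, recorded_head in self.entries[: index + 1]:
            head = hashlib.sha256((head + claim_hash).encode()).hexdigest()
            if head != recorded_head:
                return False
        return self.entries[index][0] == hashlib.sha256(claim).hexdigest()

ledger = ToyLedger()
idx = ledger.register(b'{"trace":"example agent run"}')
print(ledger.verify(idx, b'{"trace":"example agent run"}'))  # True
```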

Is this required for production AI deployments?

The EU AI Act requires audit trails for high-risk AI systems. SOC 2 requires system monitoring evidence. Verifiable traces produce the evidence layer that compliance teams need without requiring changes to agent code.

[RELATED TOPICS]

Patch verification

XOR writes a verifier for each vulnerability, then tests agent-generated patches against it. If the fix passes, it ships. If not, the failure feeds back into the agent harness.

Automated vulnerability patching

AI agents generate fixes for known CVEs. XOR verifies each fix and feeds outcomes back into the agent harness so future patches improve.

Benchmark Results

50.7% pass rate. $4.16 per fix. Real data from 1,224 evaluations.

Agent Cost Economics

Fix vulnerabilities for $4.16–$87 with agents. 100x cheaper than incident response. Real cost data.

Agent Configurations

9 agent-model configurations evaluated on real CVEs. Compare Claude Code, Codex, Gemini CLI, Cursor, and OpenCode.

Benchmark Methodology

How CVE-Agent-Bench evaluates 9 coding agents on 136 real vulnerabilities. Deterministic, reproducible, open methodology.

Agent Environment Security

AI agents run with real permissions. XOR verifies tool configurations, sandbox boundaries, and credential exposure.

Security Economics for Agentic Patching

ROI models for agentic patching, backed by verified pass/fail data and business-impact triage.

Automated Vulnerability Patching and PR Review

Automated code review, fix generation, GitHub Actions hardening, safety checks, and learning feedback. One-click install on any GitHub repository.

Continuous Learning from Verified Agent Runs

A signed record of every agent run. See what the agent did, verify it independently, and feed the data back so agents improve.

Signed Compliance Evidence for AI Agents

A tamper-proof record of every AI agent action. Produces evidence for SOC 2, EU AI Act, PCI DSS, and more. Built on open standards so auditors verify independently.

See which agents produce fixes that work

136 CVEs. 9 agents. 1,224 evaluations. Agents learn from every run.