# FaultKey · CausalLayer

> Deterministic AI-liability attribution. Every AI incident → a signed, Bitcoin-anchored CausalCertificateV1 receipt with a vendor / deployer / user fault split. Closed-form scoring (Ed25519 + Merkle + OpenTimestamps), byte-identical reproducibility, no LLMs in the scoring path.

FaultKey/CausalLayer is the first production-ready, deterministic AI-liability attribution layer for Anthropic's Model Context Protocol (MCP). It targets AI-insurance underwriters, AI-incident response teams, and any party subject to APRA CPS 230, the EU AI Act (Article 12 logging), ISO/IEC 42001, or the NIST AI Risk Management Framework.

## Core endpoints

- Live MCP endpoint (Streamable HTTP): https://causallayer-mcp-demo.zykm9qkk7j.workers.dev/mcp
- Health check: https://causallayer-mcp-demo.zykm9qkk7j.workers.dev/healthz
- Public stats: https://causallayer-mcp-demo.zykm9qkk7j.workers.dev/stats
- Source (Apache-2.0): https://github.com/smq9sn5jck-coder/causallayer-mcp
- OpenAPI spec: https://github.com/smq9sn5jck-coder/causallayer-mcp/blob/main/openapi.yaml
- Distribution playbook: https://github.com/smq9sn5jck-coder/causallayer-mcp/blob/main/LAUNCH.md

## Tools (MCP)

- `submit_incident` — submit an AI incident; returns a signed CausalCertificateV1 with a fault split (50 credits)
- `verify_certificate` — verify the Ed25519 signature, Merkle path, and OpenTimestamps anchor (1 credit)
- `get_anchor_status` — latest Bitcoin anchor batch status (free)
- `query_issuer_registry` — public Ed25519 issuer key lookup (free)

## Why deterministic matters

Most AI-incident tools today put LLMs in the scoring path, which makes the score itself non-deterministic and therefore weak as primary evidence in audit, insurance, or court contexts. CausalLayer separates the deterministic scoring engine from any LLM helper, producing byte-identical outputs that anyone with the same inputs can reproduce.
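The tools above are invoked as standard JSON-RPC 2.0 `tools/call` requests over the Streamable HTTP endpoint. A minimal sketch using only the Python standard library; the assumption that the free `get_anchor_status` tool takes no arguments, and the exact headers a given client should send, should be checked against the OpenAPI spec linked above:

```python
import json
import urllib.request

MCP_URL = "https://causallayer-mcp-demo.zykm9qkk7j.workers.dev/mcp"


def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> bytes:
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload).encode("utf-8")


def call_tool(tool_name: str, arguments: dict) -> dict:
    """POST the request to the live endpoint (network access required)."""
    req = urllib.request.Request(
        MCP_URL,
        data=build_tool_call(tool_name, arguments),
        headers={
            "Content-Type": "application/json",
            # Streamable HTTP servers may answer with plain JSON or an SSE stream.
            "Accept": "application/json, text/event-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (free tool, assumed to take no arguments):
# call_tool("get_anchor_status", {})
```

Stateful MCP clients also perform an `initialize` handshake first; the sketch skips that and shows only the shape of a single tool call.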
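The Merkle-path check inside `verify_certificate` illustrates what byte-identical reproducibility means in practice: the same leaf, sibling path, and root always recompute to the same answer. A minimal SHA-256 sketch; the actual leaf encoding and hash-concatenation order used by CausalLayer are assumptions here, with the repository as the canonical reference:

```python
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def verify_merkle_path(leaf: bytes, path: list, root: bytes) -> bool:
    """Recompute the Merkle root from a leaf and its sibling path.

    `path` is a list of (side, sibling_hash) pairs, where `side` says
    whether the sibling sits to the "left" or "right" of the running hash.
    Deterministic: same inputs always yield the same verdict.
    """
    node = sha256(leaf)
    for side, sibling in path:
        node = sha256(sibling + node) if side == "left" else sha256(node + sibling)
    return node == root


# Two-leaf tree example: root = H(H(a) || H(b))
a, b = b"cert-a", b"cert-b"
root = sha256(sha256(a) + sha256(b))
assert verify_merkle_path(a, [("right", sha256(b))], root)
assert verify_merkle_path(b, [("left", sha256(a))], root)
```

Because the check is closed-form, an auditor can rerun it offline against the certificate bytes without trusting the server that issued them.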
## Compliance fit

- APRA CPS 230 — operational risk evidence trail
- EU AI Act, Article 12 — automatic logging requirement
- ISO/IEC 42001 — AI management system audit
- NIST AI RMF — Measure & Manage functions

## Citation

If you cite this work in research, see `CITATION.cff` in the repository root.