Six pillars, one platform: AI DAST for agents and MCP, AI SCA for supply chain, ASPM integrations for your existing dashboard, Agent Identity & Trust, Runtime Protection, and Compliance mapped to OWASP MCP / LLM / Agentic AI Top 10, AISVS, EU AI Act and SOC 2.
Register with your email to get install instructions. Registration is free.
MCP server scanning, agent DAST, OpenAPI x-agent-trust audit, supply chain checks, and runtime decisions -- all from natural language.
Full OWASP MCP Top 10 scan of any MCP server endpoint. Tests authentication, input validation, rate limiting, and more.
"Scan this MCP server"

Risk-rate all exposed tools on a server. Flags dangerous capabilities like file system access, code execution, and network calls.
"How risky is this server?"

Runtime GO/CAUTION/BLOCK decision for any MCP tool call. Evaluates the tool, arguments, and context before execution.
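To make the decision shape concrete, here is a minimal sketch of such a runtime gate. The risk tiers, tool names, and patterns below are invented for illustration; they are not Cybersecify's actual policy engine.

```python
# Illustrative GO/CAUTION/BLOCK gate for a proposed MCP tool call.
# Tool names and risk tiers are assumptions for this example only.
import re

HIGH_RISK_TOOLS = {"execute_code", "delete_file", "shell"}   # assumed names
MEDIUM_RISK_TOOLS = {"write_file", "http_request"}           # assumed names
INJECTION = re.compile(r"(;\s*rm\s+-rf|\$\(|`)", re.IGNORECASE)

def decide(tool: str, arguments: dict) -> str:
    """Return 'GO', 'CAUTION', or 'BLOCK' for a tool call before execution."""
    arg_text = " ".join(str(v) for v in arguments.values())
    if tool in HIGH_RISK_TOOLS or INJECTION.search(arg_text):
        return "BLOCK"
    if tool in MEDIUM_RISK_TOOLS:
        return "CAUTION"
    return "GO"
```

A real engine would also weigh context (caller identity, trust level, session history), but the three-way verdict is the interface the prompt exercises.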
"Should I make this call?"

Injection pattern detection across tool arguments. Catches prompt injection, SQL injection, command injection, and XSS patterns.
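A pattern screen like this can be sketched in a few lines. The regexes below are a tiny illustrative subset, not Cybersecify's ruleset; a production scanner uses far broader and more robust detection.

```python
# Sketch of regex-based injection screening across tool arguments.
# Patterns are a small illustrative sample, not an exhaustive ruleset.
import re

PATTERNS = {
    "prompt_injection":  re.compile(r"ignore (all )?previous instructions", re.I),
    "sql_injection":     re.compile(r"('|\")\s*or\s+1\s*=\s*1|union\s+select", re.I),
    "command_injection": re.compile(r"(;|&&|\|\|)\s*(rm|curl|wget|sh)\b", re.I),
    "xss":               re.compile(r"<script\b|javascript:", re.I),
}

def scan_arguments(arguments: dict) -> list[str]:
    """Return the names of injection patterns found in the argument values."""
    text = " ".join(str(v) for v in arguments.values())
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```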
"Are these arguments safe?"

Pre-install safety check for any npm or PyPI package. Checks age, maintainers, known vulnerabilities, and typosquatting signals.
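One of those signals, typosquat detection, can be approximated by comparing a candidate name against popular package names. The popular-package list here is a tiny illustrative sample and the threshold is a guess, not Cybersecify's tuning.

```python
# Sketch of a typosquatting signal: flag names that closely resemble
# (but are not) popular packages. POPULAR is a tiny illustrative sample.
from difflib import SequenceMatcher

POPULAR = ["requests", "numpy", "pandas", "litellm", "express", "lodash"]

def typosquat_suspects(name: str, threshold: float = 0.85) -> list[str]:
    """Return popular packages this name suspiciously resembles."""
    return [
        pkg for pkg in POPULAR
        if pkg != name and SequenceMatcher(None, name, pkg).ratio() >= threshold
    ]
```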
"Is litellm safe to install?"

Bulk dependency audit from package.json or requirements.txt. Checks every dependency against vulnerability databases in parallel.
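The audit pattern looks roughly like this: parse the manifest, then fan the lookups out concurrently. The in-memory `VULN_DB` and its entry stand in for a real vulnerability-database query; both are invented for the example.

```python
# Sketch of a bulk dependency audit: read package.json, check every
# dependency concurrently. VULN_DB is a placeholder for a real database.
import json
from concurrent.futures import ThreadPoolExecutor

VULN_DB = {"left-pad": ["CVE-2099-0001"]}  # invented placeholder data

def check_package(name: str) -> tuple[str, list[str]]:
    # In a real audit this would be a network call to a vulnerability database.
    return name, VULN_DB.get(name, [])

def audit_package_json(text: str) -> dict[str, list[str]]:
    manifest = json.loads(text)
    deps = {**manifest.get("dependencies", {}),
            **manifest.get("devDependencies", {})}
    with ThreadPoolExecutor(max_workers=8) as pool:
        return {name: vulns for name, vulns in pool.map(check_package, deps)}
```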
"Audit my package.json"

CVE database lookup for any package. Returns known vulnerabilities with severity ratings and fix versions.
"Any CVEs for mcp-bridge?"

GitHub repo trust scoring. Checks stars, forks, contributors, license, recent activity, and open security advisories.
"Is this repo safe?"

Query the Agent Threat Database for known incidents. Checks AI agents, MCP servers, and packages against real-world data exfiltration, credential theft, and supply chain attacks.
"Any known threats for mcp-remote?"

Scan any MCP server against the EU AI Act, OWASP Agentic AI Top 10, and OWASP MCP Top 10 in one command. Unified compliance report with remediation.
"Is my MCP server EU compliant?"

Dynamic security testing for AI agents. Not the code they write -- the agents themselves. Test identity enforcement, trust boundaries, privilege escalation, and credential handling on live agents.
Full OWASP Agentic AI Top 10 assessment of a live agent. Tests identity verification, trust boundaries, privilege controls, and data handling.
"Scan this agent for vulnerabilities"

Verify agent identity enforcement. Tests whether an agent rejects unverified peers, validates certificates, and enforces trust levels before acting.
"Does this agent verify who calls it?"

Privilege escalation testing. Can a low-trust agent trick a higher-trust agent into performing actions beyond its scope? Simulates confused-deputy attacks.
"Can this agent be tricked into escalating?"

Credential leakage detection. Tests whether an agent exposes API keys, tokens, certificates, or private keys in responses, logs, or error messages.
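Leak detection of this kind typically scans agent output against known secret formats. The patterns below cover a few well-known shapes (AWS access keys, PEM private keys, bearer tokens) and are only a sample of what a real scanner checks.

```python
# Sketch of scanning agent responses and logs for leaked credentials.
# A production scanner uses a much larger pattern set plus entropy checks.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "bearer_token":   re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}"),
}

def find_leaks(text: str) -> list[str]:
    """Return the kinds of credentials that appear in the given text."""
    return [kind for kind, pat in SECRET_PATTERNS.items() if pat.search(text)]
```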
"Does this agent leak credentials?"

Agent-to-agent trust boundary testing. Injects a rogue agent into the pipeline and tests whether the target agent blindly trusts it or verifies identity first.
"Does this agent blindly trust other agents?"

MCP tool poisoning resistance. Tests whether an agent validates tool definitions before execution or blindly runs modified tool schemas.
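One common defence this test exercises is schema pinning: record a digest of each tool definition at approval time, and refuse to execute if the definition later changes (a rug pull). A minimal sketch, with field names as assumptions:

```python
# Sketch of tool-poisoning resistance via schema pinning: any change to an
# approved tool definition breaks the pinned SHA-256 digest.
import hashlib
import json

def schema_digest(tool_def: dict) -> str:
    canonical = json.dumps(tool_def, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_tool(tool_def: dict, pinned: dict[str, str]) -> bool:
    """True only if the tool's current definition matches its pinned digest."""
    return pinned.get(tool_def.get("name", "")) == schema_digest(tool_def)
```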
"Is this agent resistant to tool poisoning?"

Full security posture report. Combines all agent DAST tests into a single trust score with OWASP Agentic AI Top 10 mapping and remediation guidance.
"Give me this agent's security score"

Runtime agent behaviour monitoring. Watches an agent's MCP calls in real time, flags anomalous patterns, and alerts on trust violations.
"Watch this agent for suspicious behaviour"

x-agent-trust compliance
The first security scanner to support the officially registered OpenAPI extension for AI agent authentication.
Audit any OpenAPI spec for x-agent-trust compliance. Flags weak algorithms (HS256), non-HTTPS JWKS endpoints, missing trust level declarations, and sensitive operations (payments, admin, delete) that do not enforce x-agent-trust-required. Reads YAML or JSON. Zero network calls -- pure static audit.
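The audit described above can be sketched as a pure static pass over the spec. This example reads JSON only (the real tool also reads YAML), and the internal layout of the extension (`alg`, `jwks_uri`, `level` fields, per-operation `x-agent-trust-required` flag) is assumed here for illustration.

```python
# Sketch of a static x-agent-trust audit over a JSON OpenAPI spec.
# Extension field names are assumptions for this example.
import json

SENSITIVE = ("payment", "admin", "delete")

def audit_spec(text: str) -> list[str]:
    spec = json.loads(text)
    findings = []
    trust = spec.get("x-agent-trust", {})
    if trust.get("alg") == "HS256":
        findings.append("weak algorithm: HS256")
    jwks = trust.get("jwks_uri", "")
    if jwks and not jwks.startswith("https://"):
        findings.append(f"non-HTTPS JWKS endpoint: {jwks}")
    if trust and "level" not in trust:
        findings.append("missing trust level declaration")
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            sensitive = any(s in path.lower() for s in SENSITIVE) or method == "delete"
            if sensitive and not op.get("x-agent-trust-required"):
                findings.append(
                    f"sensitive operation without x-agent-trust-required: {method.upper()} {path}"
                )
    return findings
```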
"Audit my openapi.yaml for x-agent-trust"

Cybersecify checks map directly to articles of the EU AI Act (Regulation (EU) 2024/1689). We do not certify conformity -- we produce evidence to support an Article 11 technical file or an Article 16 conformity assessment.
| Article | What it requires | What Cybersecify checks | Automated |
|---|---|---|---|
| Art 9 Risk management | Risk management system proportionate to the AI system's intended purpose and risk level. | Trust levels (L0-L4), per-tool sensitivity-based access control, runtime risk rating of exposed tools. | PARTIAL |
| Art 12 Record-keeping | Automatic recording of events ("logs") over the lifetime of the AI system, tamper-evident. | MCPS per-message signing, audit trail endpoints, structured log format, append-only chain. | YES |
| Art 13 Transparency | The AI system must be sufficiently transparent to enable users to interpret its output and use it appropriately. | Agent passport (declares identity and capabilities), model_id in responses, declared tool surface. | YES |
| Art 14 Human oversight | Humans can effectively oversee, intervene, override, or shut down the AI system. | Trust level gating, runtime GO/CAUTION/BLOCK decisions, kill-switch endpoints, confirmation prompts on destructive tools. | PARTIAL |
| Art 15 Cybersecurity | Appropriate level of accuracy, robustness and cybersecurity throughout the AI system's lifecycle. | Authentication, TLS, rate limiting, per-message signing, replay protection, dependency CVE checks, injection pattern detection. | YES |
| Art 16 Provider obligations | Quality management system, conformity assessment, technical documentation, registration in the EU database. | Tool integrity (signed schemas), supply-chain audit, dependency provenance, structured compliance report for the technical file. | YES |
| Art 17 Quality management | Documented quality management system covering compliance strategy, design controls, testing and validation. | Structured audit trail format, exportable compliance report, machine-readable evidence bundle. | PARTIAL |
| Art 50 AI identification | Users must be informed they are interacting with an AI system. AI-generated content must be marked. | Agent passport header, agent_type field, MCP-layer identity declaration on every tool call. | YES |
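The append-only chain in the Art 12 row refers to hash chaining: each record carries the hash of the previous record, so any edit to history breaks verification. A minimal sketch, with illustrative field names (the real MCPS format also signs each message):

```python
# Sketch of a tamper-evident, append-only audit log via hash chaining.
# Field names are illustrative, not the actual MCPS log schema.
import hashlib
import json

GENESIS = "0" * 64

def append_record(chain: list[dict], event: dict) -> list[dict]:
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)
    return chain + [{
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    }]

def verify_chain(chain: list[dict]) -> bool:
    """True only if every record links to its predecessor and is unmodified."""
    prev = GENESIS
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```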
Cybersecify produces evidence to support Article 11 technical documentation and Article 16 conformity assessment. It does not certify conformity. Always engage qualified legal and audit personnel for the formal Article 16 assessment. Mapping based on the EU AI Act final text (Regulation (EU) 2024/1689).
Test against our deliberately vulnerable MCP server.
Or install cybersecify and ask your AI to scan any server.
These are not hypothetical threats. Compromised packages, credential stealers, and typosquats are hitting AI developers every week. Cybersecify catches them before you install.
73% rise in malicious open-source packages year over year. — ReversingLabs 2026 Report
Built by contributors to the standards that define MCP security.
Section 7: Message Integrity & Replay Protection. Merged into the official OWASP Cheat Sheet Series.
Six Internet-Drafts submitted to the IETF covering MCP security, agent identity, audit trails, and ATTP -- the Agent Trust Transport Protocol.
Need runtime protection for production MCP deployments? See MCPSaaS.
Run dynamic security testing against any MCP server or AI agent. All 3 transports (Streamable HTTP, SSE, stdio). 8 testing dimensions. Active exploitation with safety gates. SARIF 2.1 output. Plug into your existing ASPM — Veracode Risk Manager is the first integration, more on the way.
Findings carry full mappings to OWASP MCP Top 10, OWASP LLM Top 10, OWASP Agentic AI Top 10, EU AI Act, SOC 2, PCI-DSS, and CWE. For regulated buyers running an existing ASPM, results land in the dashboard your security team already trusts — no new silo, no new procurement.
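SARIF 2.1 is what makes that dashboard hand-off possible: findings serialize into a standard log that ASPM tools ingest. A minimal sketch of the shape, with the rule ID and message invented for the example:

```python
# Sketch of emitting findings as a minimal SARIF 2.1 log.
# Rule IDs and messages below are invented examples.
import json

def to_sarif(findings: list[dict]) -> str:
    return json.dumps({
        "version": "2.1.0",
        "runs": [{
            "tool": {"driver": {"name": "aidast"}},
            "results": [
                {
                    "ruleId": f["rule"],
                    "level": f.get("level", "warning"),
                    "message": {"text": f["message"]},
                }
                for f in findings
            ],
        }],
    }, indent=2)
```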
aidast https://dvmcp.co.uk

Independent integration. CyberSecAI Ltd is in no way affiliated with, endorsed by, sponsored by, or in any partnership with Veracode, Inc. “Veracode”, “Veracode Risk Manager” and “Longbow” are trademarks of Veracode, Inc., used here under nominative fair use solely to identify a third-party API surface that one of our ASPM integrations targets. All other trademarks are property of their respective owners.
Everything in Community, plus fingerprinting, DVMCP benchmarking, rug-pull detection, board-ready PDF reports, taint tracking, deep SAST, and 22 MCP Security Controls.
For individual developers and researchers
For teams and enterprise security
A coordinated agent & MCP security stack: scanning, signing, identity, payments, training, and standards work.
Cybersecify is provided as-is. CyberSecAI Ltd accepts no liability for reliance on scan results. See Disclaimer for full terms.