Cybersecify

Security agent for AI developers

Scan MCP servers. Check packages before installing. Audit dependencies. Verify repos. Runtime GO/CAUTION/BLOCK decisions. 8 tools, zero dependencies, one install.

8 Security Tools · 10 OWASP Checks · 0 Dependencies · Works in Cursor + Claude
Get Started

One line install

Add to your Cursor or Claude Desktop config and start scanning.

{
  "mcpServers": {
    "security": {
      "command": "npx",
      "args": ["cybersecify"]
    }
  }
}

Then ask your AI: "Scan this MCP server."

Capabilities

8 security tools

MCP server scanning, supply chain checks, and runtime decisions -- all from natural language.

MCP Security

scan_server

Full OWASP MCP Top 10 scan of any MCP server endpoint. Tests authentication, input validation, rate limiting, and more.

Scan this MCP server

assess_risk

Risk-rate all exposed tools on a server. Flags dangerous capabilities like file system access, code execution, and network calls.

How risky is this server?

check_call

Runtime GO/CAUTION/BLOCK decision for any MCP tool call. Evaluates the tool, arguments, and context before execution.

Should I make this call?
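The actual check_call logic is internal to Cybersecify; as a rough illustration of a GO/CAUTION/BLOCK policy, here is a minimal sketch with hypothetical tool names and a made-up `riskOfToolCall` function:

```typescript
type Verdict = "GO" | "CAUTION" | "BLOCK";

interface ToolCall {
  tool: string;
  args: Record<string, string>;
}

// Toy policy: block shell execution outright, flag file-system and
// network tools for review, allow everything else. The real tool
// also weighs arguments and context.
function riskOfToolCall(call: ToolCall): Verdict {
  const blocked = ["exec_shell", "run_command"];
  const caution = ["read_file", "write_file", "http_request"];
  if (blocked.includes(call.tool)) return "BLOCK";
  if (caution.includes(call.tool)) return "CAUTION";
  return "GO";
}
```

A tiered verdict like this lets an agent auto-proceed on GO, surface CAUTION to the user, and refuse BLOCK without any round trip.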

check_args

Injection pattern detection across tool arguments. Catches prompt injection, SQL injection, command injection, and XSS patterns.

Are these arguments safe?
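To show the kind of signal check_args looks for (this is an illustrative sketch, not Cybersecify's actual detector, and the regexes are deliberately simplified):

```typescript
// One representative pattern per injection class.
const INJECTION_PATTERNS: [string, RegExp][] = [
  ["prompt injection", /ignore (all|previous) instructions/i],
  ["sql injection", /('|%27)\s*(or|and)\s+\d+\s*=\s*\d+/i],
  ["command injection", /[;&|]\s*(rm|curl|wget|sh)\b/i],
  ["xss", /<script\b/i],
];

// Scan every argument value and report which class matched where.
function findInjections(args: Record<string, string>): string[] {
  const hits: string[] = [];
  for (const [key, value] of Object.entries(args)) {
    for (const [label, pattern] of INJECTION_PATTERNS) {
      if (pattern.test(value)) hits.push(`${key}: ${label}`);
    }
  }
  return hits;
}
```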
Supply Chain

safe_to_install

Pre-install safety check for any npm or PyPI package. Checks age, maintainers, known vulnerabilities, and typosquatting signals.

Is litellm safe to install?
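One of the typosquatting signals mentioned above can be sketched as an edit-distance check against well-known package names. This is illustrative only (the `POPULAR` list and thresholds are made up for the example):

```typescript
// Levenshtein edit distance via dynamic programming.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Flag names within 1-2 edits of a popular package (exact matches pass).
const POPULAR = ["requests", "litellm", "deepseek"];
function looksLikeTyposquat(name: string): boolean {
  return POPULAR.some((p) => {
    const d = editDistance(name, p);
    return d > 0 && d <= 2;
  });
}
```

A check like this is exactly what catches `deepseeek` masquerading as `deepseek` in the incident list below.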

audit_dependencies

Bulk dependency audit from package.json or requirements.txt. Checks every dependency against vulnerability databases in parallel.

Audit my package.json

check_cves

CVE database lookup for any package. Returns known vulnerabilities with severity ratings and fix versions.

Any CVEs for mcp-bridge?

check_repo

GitHub repo trust scoring. Checks stars, forks, contributors, license, recent activity, and open security advisories.

Is this repo safe?
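As a rough sketch of how the listed signals could combine into a single trust score (the weights and caps here are invented for illustration, not Cybersecify's real formula):

```typescript
interface RepoStats {
  stars: number;
  contributors: number;
  hasLicense: boolean;
  daysSinceLastCommit: number;
  openAdvisories: number;
}

// Combine signals into a 0-100 score: popularity and bus factor are
// capped so a huge star count can't mask an open advisory.
function trustScore(r: RepoStats): number {
  let score = 0;
  score += Math.min(r.stars / 100, 40); // popularity, capped
  score += Math.min(r.contributors * 2, 20); // bus factor, capped
  score += r.hasLicense ? 10 : 0;
  score += r.daysSinceLastCommit < 90 ? 20 : 0; // recent activity
  score -= r.openAdvisories * 15; // unresolved advisories hurt hard
  return Math.max(0, Math.min(100, score));
}
```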
Try It

See it in action

Test against our deliberately vulnerable MCP server.

Scan dvmcp.co.uk -- our deliberately vulnerable MCP server

Or install cybersecify and ask your AI to scan any server.

Why This Matters

Supply chain attacks are real.

These are not hypothetical threats. Compromised packages, credential stealers, and typosquats are hitting AI developers every week. Cybersecify catches them before you install.

litellm CVE-2026-33634
TeamPCP compromised LiteLLM on PyPI (Mar 2026). Malicious versions contained a multi-stage credential stealer targeting SSH keys, cloud tokens, Kubernetes secrets, and .env files, plus a persistent systemd backdoor.
3.4M downloads/day · Used by Stripe, Netflix, Google · CVSS 9.4
Detected: safe_to_install returns DANGER (17 known CVEs)
npm worm (Shai-Hulud) CRITICAL
Self-replicating npm worm that steals tokens, then publishes trojanised versions of every package the victim maintains. SANDWORM_MODE phase injected MCP servers into AI coding tools (Claude Code, Cursor, VS Code) to manipulate agents into exfiltrating credentials.
796+ packages compromised · 132M monthly downloads affected · CISA advisory issued
Detected: check_cves flags compromised packages from vulnerability databases
ultralytics HIGH
Attackers exploited GitHub Actions script injection to push malicious versions of the popular YOLO AI library. Compromised builds deployed XMRig crypto miners on every install. Google Colab users were banned for "abusive activity".
30K+ GitHub stars · 60M+ total PyPI downloads · Malicious for 12 hours
Detected: safe_to_install flags crypto miner CVE (PYSEC-2024-154)
deepseeek / deepseekai CRITICAL
Typosquatted DeepSeek packages on PyPI (Jan 2025). Infostealer payload exfiltrated API keys, database credentials, and infrastructure access tokens. The malware itself was AI-generated.
200+ downloads before removal · Targeted ML engineers
Detected: safe_to_install flags new package age, low downloads, name similarity
Hugging Face models HIGH
Malicious ML models uploaded in PyTorch format but compressed with 7z to bypass Hugging Face's Picklescan security scanner. Exploited Python's Pickle deserialization to execute arbitrary code when models were loaded.
Bypassed primary security scanning · Prompted safetensors migration
Out of scope: model files, not packages — requires runtime sandboxing
Slopsquatting EMERGING
Attackers register package names that LLMs hallucinate. When an AI assistant recommends a non-existent package, attackers have already created it with malware. 20-35% of hallucinated names were weaponised.
Affects every AI coding assistant · Growing attack vector
Detected: safe_to_install flags zero-history packages with suspicious metadata

73% rise in malicious open-source packages year over year. — ReversingLabs 2026 Report

Credentials

Standards-backed

Built by contributors to the standards that define MCP security.

Need runtime protection for production MCP deployments? See MCPSaaS.

Important

Cybersecify checks known vulnerabilities from multiple sources. It does NOT perform source code analysis, zero-day detection, or runtime malware scanning.

CyberSecAI Ltd accepts no liability for any damage, loss, or security incident arising from reliance on scan results. Always perform independent security review before deploying to production.

Vulnerability data is sourced from third-party databases and may be incomplete or delayed. A clean scan does not guarantee the absence of security issues.

Cybersecify is provided as-is without warranty of any kind, express or implied.

Cybersecify is a product of CyberSecAI Ltd. It is not affiliated with, endorsed by, or associated with OWASP, IETF, Anthropic, or the Model Context Protocol project.