Open-source runtime security for self-hosted AI agent platforms. Protect OpenClaw, Agent Zero, CrewAI, LangChain, and any LLM-powered agent from credential leaks, dangerous commands, and prompt injection.
OnGarde works with any self-hosted platform that makes LLM API calls. It runs as an HTTP proxy, as Python middleware, or as a platform-specific integration.
Self-hosted platforms protect configuration. OnGarde protects runtime content.
| Platform | Config Security | Runtime Content Security | OnGarde Compatible |
|---|---|---|---|
| OpenClaw | ✓ Built-in | ✗ Missing | ✓ Yes |
| Agent Zero | ⚠ Minimal | ✗ Missing | ✓ Yes |
| CrewAI | ⚠ Trust-based | ✗ Missing | ✓ Yes |
| LangChain | ⚠ Guidelines | ✗ Missing | ✓ Yes |
| LangGraph | ⚠ Minimal | ✗ Missing | ✓ Yes |
| Enterprise Solutions | ✓ Managed | ✓ Partial | 💰 $50K+/year |
OnGarde fills the gap: open-source runtime content security for self-hosted agent platforms at a fraction of the enterprise cost.
Self-hosted platforms rely on configuration and the model to refuse dangerous requests. OnGarde scans and blocks threats before they execute.
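The scan-before-execute idea can be sketched as middleware that runs content through a set of scanners before any LLM call goes out. This is an illustrative sketch only; `guard`, `forward_if_safe`, and the scanner signature are hypothetical names, not OnGarde's actual API.

```python
from typing import Callable

# A scanner takes the content and returns True when it is safe.
Scanner = Callable[[str], bool]

def guard(content: str, scanners: list[Scanner]) -> bool:
    """Run every scanner; content is safe only if all of them pass."""
    return all(scanner(content) for scanner in scanners)

def forward_if_safe(prompt: str, scanners: list[Scanner], llm_call):
    """Forward the prompt to the model only when every scanner passes."""
    if not guard(prompt, scanners):
        return "BLOCKED"  # sentinel for illustration; a real system would log and raise
    return llm_call(prompt)
```

The point of the design is that scanning happens at the boundary, before the request leaves your infrastructure, rather than relying on the model to refuse after the fact.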
Detects and blocks API keys, passwords, .env files, and secrets before they reach the AI.
User: "Here's my .env file..."
❌ BLOCKED
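Secret detection of this kind is typically pattern-based. The patterns below are assumed for illustration and are not OnGarde's actual rule set; production scanners combine many more signals, including entropy checks.

```python
import re

# Illustrative secret patterns (assumed, not OnGarde's rule set).
SECRET_PATTERNS = [
    # key/secret/token assignments with a value of 8+ characters
    re.compile(r"(?:api[_-]?key|secret|token)\s*[:=]\s*\S{8,}", re.I),
    # PEM private key headers
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    # .env-style KEY=value lines
    re.compile(r"^[A-Z_]+=\S+$", re.M),
]

def contains_secret(text: str) -> bool:
    """Return True if any secret-like pattern appears in the text."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```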
Blocks dangerous shell commands such as sudo, rm -rf, fork bombs, and other system-modifying operations.
AI: "Running sudo rm -rf /"
❌ BLOCKED
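Dangerous-command blocking can be sketched as a deny-list of shell patterns. The list below is illustrative only, not OnGarde's actual rules; a real deny-list is far longer and handles obfuscation.

```python
import re

# Illustrative deny-list of dangerous shell patterns (assumed).
DANGEROUS = [
    re.compile(r"\brm\s+-rf\s+/"),       # recursive delete from root
    re.compile(r"\bsudo\b"),             # privilege escalation
    re.compile(r":\(\)\s*\{.*\};\s*:"),  # classic fork bomb
    re.compile(r"\bmkfs\.|\bdd\s+if="),  # disk-destroying commands
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any deny-list pattern."""
    return any(p.search(command) for p in DANGEROUS)
```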
Identifies and redacts Social Security numbers, credit card numbers, and other personal information.
User: "My SSN is 123-45-6789"
🔒 REDACTED
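Redaction differs from blocking: the request still goes through, but with the sensitive span replaced. A minimal sketch with assumed rules (not OnGarde's actual patterns):

```python
import re

# Illustrative PII redaction rules (assumed, not OnGarde's rule set).
PII_RULES = [
    # US Social Security number: 123-45-6789
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN REDACTED]"),
    # 13-16 digit card numbers, optionally separated by spaces or dashes
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD REDACTED]"),
]

def redact(text: str) -> str:
    """Replace each PII match with its placeholder, rule by rule."""
    for pattern, replacement in PII_RULES:
        text = pattern.sub(replacement, text)
    return text
```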
Hard blocks on suspicious patterns, not just soft model guidance.
"Ignore previous instructions..."
❌ BLOCKED
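A hard block means the request is rejected on a pattern match, regardless of what the model would have done. The heuristics below are assumed for illustration; real injection detection combines many signals, not just phrase matching.

```python
import re

# Illustrative injection phrases (assumed, not OnGarde's detector).
INJECTION_PATTERNS = [
    re.compile(r"ignore (?:all )?previous instructions", re.I),
    re.compile(r"disregard (?:your|the) system prompt", re.I),
    re.compile(r"you are now (?:in )?developer mode", re.I),
]

def looks_like_injection(text: str) -> bool:
    """Return True if the text matches a known injection phrase."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```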
< 50ms overhead per request. Your AI stays fast and responsive.
Scanning: 23ms
✓ Within limits
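A latency budget like this is easy to verify yourself by timing the scan step around each request. The sketch below uses a single assumed pattern and hypothetical function names, purely to show how the overhead would be measured:

```python
import re
import time

# One illustrative pattern; a real scan runs many rules.
PATTERN = re.compile(r"ignore previous instructions", re.I)

def timed_scan(text: str):
    """Return (is_safe, elapsed_ms) for one illustrative scan pass."""
    start = time.perf_counter()
    safe = PATTERN.search(text) is None
    elapsed_ms = (time.perf_counter() - start) * 1000
    return safe, elapsed_ms
```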
Every scan result logged. Know exactly what was blocked and why.
Dashboard: 47 threats blocked
✓ Protected
```bash
npm install -g @ongarde/openclaw
ongarde init
```

Automatically configures your agent platform and starts protection.
All LLM API calls are now protected. Zero manual configuration needed.
Start free. Upgrade when you need more.
```bash
# For OpenClaw (one-command setup)
npm install -g @ongarde/openclaw
ongarde init

# For other platforms (universal proxy)
pip install ongarde
ongarde start

# That's it! All LLM API calls are protected.
```
Want early access? Star us on GitHub and we'll notify you at launch.