Firewall for Autonomy

Open-source runtime security for self-hosted AI agent platforms. Protect OpenClaw, Agent Zero, CrewAI, LangChain, and any LLM-powered agent from credential leaks, dangerous commands, and prompt injection.

<50ms Overhead
100% Block Rate
1 Command Installation

Open-Source Agent Platforms Protect Configuration.
OnGarde Protects Conversations.

What Agent Platforms Have

  • Configuration security audit
  • Access control policies
  • Gateway authentication
  • Tool allow/deny lists
  • Sandbox isolation

What OnGarde Adds

  • Runtime content scanning
  • Credential leak detection
  • Dangerous command blocking
  • PII detection & redaction
  • Prompt injection protection

Built for the Open-Source Agent Ecosystem

OnGarde works with any self-hosted platform that makes LLM API calls

🤖 Agent Zero
👥 CrewAI
🦜 LangChain
📊 LangGraph
🔧 AutoGen
🦙 LlamaIndex
⚙️ Semantic Kernel
Any HTTP API

Works as an HTTP proxy, Python middleware, or platform-specific integration

The Open-Source Security Gap

Self-hosted platforms protect configuration. OnGarde protects runtime content.

Platform Config Security Runtime Content Security OnGarde Compatible
OpenClaw ✓ Built-in ✗ Missing ✓ Yes
Agent Zero ⚠ Minimal ✗ Missing ✓ Yes
CrewAI ⚠ Trust-based ✗ Missing ✓ Yes
LangChain ⚠ Guidelines ✗ Missing ✓ Yes
LangGraph ⚠ Minimal ✗ Missing ✓ Yes
Enterprise Solutions ✓ Managed ✓ Partial 💰 $50K+/year

OnGarde fills the gap: Open-source runtime content security for self-hosted agent platforms at a fraction of enterprise cost.

Runtime Content Security Layer

Self-hosted platforms rely on configuration and the model to refuse dangerous requests. OnGarde scans and blocks threats before they execute.

Credential Leak Prevention

Detects and blocks API keys, passwords, .env files, and secrets before they reach the AI.

User: "Here's my .env file..." ❌ BLOCKED

Dangerous Command Detection

Blocks shell commands like sudo, rm -rf, fork bombs, and system modifications.

AI: "Running sudo rm -rf /" ❌ BLOCKED

PII Detection

Identifies and redacts Social Security Numbers, credit cards, and personal information.

User: "My SSN is 123-45-6789" 🔒 REDACTED

Prompt Injection Protection

Hard blocks on suspicious patterns, not just soft model guidance.

"Ignore previous instructions..." ❌ BLOCKED

Zero Performance Impact

< 50ms overhead per request. Your AI stays fast and responsive.

Scanning: 23ms ✓ Within limits

Complete Audit Trail

Every scan result logged. Know exactly what was blocked and why.

Dashboard: 47 threats blocked ✓ Protected

How It Works

Your Agent Platform
↓ LLM API Call
OnGarde Proxy
Scans for threats in <50ms
❌ Blocked
Error Response
✓ Safe
OpenAI/Anthropic/Claude

Install in 60 Seconds

1

Install OnGarde

npm install -g @ongarde/openclaw
2

Run Setup

ongarde init

Automatically configures your agent platform and starts protection

3

That's It!

All LLM API calls are now protected. Zero manual configuration needed.

Trusted by Open-Source Agent Developers

10+
Compatible Platforms
< 50ms
Guaranteed Overhead
Open Source
MIT Licensed
100%
Block Rate

Simple, Transparent Pricing

Start free. Upgrade when you need more.

Free

$0 /month
  • ✓ Local security scanning
  • ✓ Basic threat detection
  • ✓ Command blocking
  • ✓ Credential scanning
  • ✓ Community support
  • ✗ Cloud audit logging
  • ✗ Advanced PII detection
  • ✗ Custom rules
Star on GitHub

Team

$99 /month
  • ✓ Everything in Pro
  • ✓ Multi-user dashboard
  • ✓ Centralized rules
  • ✓ Team audit logs
  • ✓ SSO integration
  • ✓ SLA support
  • ✓ 365-day history
  • ✓ Webhook notifications
Contact Us

Get Started in 60 Seconds

Coming Soon - In Active Development
# For OpenClaw (one-command setup)
npm install -g @ongarde/openclaw
ongarde init

# For other platforms (universal proxy)
pip install ongarde
ongarde start

# That's it! All LLM API calls protected.
✓ OnGarde proxy started on localhost:8000
✓ Your agent platform configured to use OnGarde
✓ Dashboard: http://localhost:8000/dashboard
✓ All API calls now protected!

What Will Happen During Setup:

  • Backs up your agent platform configuration
  • Starts OnGarde proxy on localhost:8000
  • Configures your platform to route LLM calls through OnGarde
  • Restarts services if needed
  • Validates everything works

Want early access? Star us on GitHub and we'll notify you at launch.