BoundaryAI — Deterministic Action Enforcement for AI Agents
BoundaryAI is the action enforcement layer that sits outside the LLM. It intercepts agent actions — tool calls, outbound requests, process spawns, file writes — at the network and syscall boundaries, applying deterministic rules before execution. No model weights, no prompt inspection, no semantic guessing.
What we block
- Indirect prompt injection through retrieved documents and tool outputs
- SSRF and outbound requests to internal/cloud-metadata addresses
- QUIC and covert-channel tunneling from agent processes
- Agent spawn denial-of-service (uncontrolled subprocess fan-out)
- AWS / GCP / Azure instance-metadata exfiltration
- MCP tool collision and tool-name shadowing attacks
- PDF and document-borne instruction injection
- LangGraph / agent checkpoint tampering and state poisoning
How it works
- Network layer — every outbound request from an agent process is evaluated against deterministic egress policy before the socket connects.
- Tool layer — every tool / MCP / function call is matched against an allow-list of typed schemas with per-arg constraints.
- Syscall layer — file writes, process spawns, and exec calls are intercepted at the OS boundary via signed action tokens.
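The network-layer check above can be sketched as a deterministic lookup against a deny-list of destination networks. This is an illustrative sketch, not BoundaryAI's actual policy format: the `DENIED_NETWORKS` list and function names are assumptions made for the example.

```python
import ipaddress

# Illustrative deny-list: cloud metadata endpoint plus RFC 1918 and
# loopback ranges. A real egress policy would be loaded from config.
DENIED_NETWORKS = [
    ipaddress.ip_network("169.254.169.254/32"),  # cloud instance metadata
    ipaddress.ip_network("10.0.0.0/8"),          # RFC 1918 private
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),         # loopback
]

def egress_allowed(dest_ip: str) -> bool:
    """Return False if the destination falls inside any denied network."""
    addr = ipaddress.ip_address(dest_ip)
    return not any(addr in net for net in DENIED_NETWORKS)
```

Because the decision is a pure address-range lookup, it is fully deterministic: the same destination always yields the same verdict, regardless of what the model emitted.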
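The tool-layer allow-list can be sketched as a mapping from tool name to per-argument constraints. The schema shape, tool names, and constraints here are hypothetical examples, not BoundaryAI's API.

```python
# Hypothetical allow-list: each tool maps argument names to a predicate
# that the argument value must satisfy.
TOOL_SCHEMAS = {
    "read_file": {
        "path": lambda v: isinstance(v, str) and v.startswith("/workspace/"),
    },
    "http_get": {
        "url": lambda v: isinstance(v, str) and v.startswith("https://"),
    },
}

def call_allowed(tool: str, args: dict) -> bool:
    """Allow only known tools whose every argument passes its constraint."""
    schema = TOOL_SCHEMAS.get(tool)
    if schema is None:
        return False  # tool not on the allow-list
    if set(args) != set(schema):
        return False  # unexpected or missing arguments
    return all(check(args[name]) for name, check in schema.items())
```

A call to an unknown tool, or a known tool with an out-of-bounds argument, is rejected before execution rather than flagged after the fact.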
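One way to realize the signed action tokens from the syscall layer is an HMAC over a canonical serialization of the approved action: the policy engine signs, the interceptor verifies before allowing the write, spawn, or exec. This is a minimal sketch under assumed key handling; the token format and key management are illustrative, not BoundaryAI's implementation.

```python
import hashlib
import hmac
import json

SECRET = b"policy-engine-key"  # illustrative; real deployments would use per-session key material

def sign_action(action: dict) -> str:
    """Policy engine side: sign a canonical JSON encoding of the action."""
    payload = json.dumps(action, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_action(action: dict, token: str) -> bool:
    """Interceptor side: constant-time check before letting the syscall proceed."""
    return hmac.compare_digest(sign_action(action), token)
```

Any tampering with the action after approval (for example, swapping the target path) invalidates the token, so the interceptor can reject it without re-consulting the policy engine.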
Performance
Engine evaluation: 4.6 ms p50. HTTP end-to-end: 98 ms p50. Measured on v0.6.6 production benchmarks.