The authorization layer for AI agents
Unix got file permissions. The web got SSL. Mobile got permission dialogs. Autonomous vehicles got safety-critical overrides. Every computing platform eventually separates can from may. AI agents don't have that layer yet. We're building it.
Why we exist
We built VulnZap first — a security layer for LLM-generated code. It worked, and it taught us something we didn't expect. The risk with AI agents isn't what they produce. It's the authority they exercise without accountability.
An agent that generates bad code is a nuisance. An agent that wires money, sends emails, and deletes data without authorization is a liability. We watched it happen — a coding agent deleting a production database after being told eleven times to stop. Every month brings a new incident: agents issuing refunds without limits, modifying infrastructure without approval, sending emails without review. Every one of those agents was authenticated. None were authorized. That distinction is why Veto exists.
What Veto is
A runtime authorization kernel that sits between the LLM and the tool. Every tool call is intercepted before execution and evaluated against explicit policy. The agent is unaware it's being governed. Your tools are unchanged.
Policies are declarative YAML, version-controlled alongside your code. They evaluate tool arguments directly — block transfers over a threshold, restrict file paths, require human approval for external emails. Static rules handle hard boundaries. Optional LLM-backed semantic checks handle ambiguous cases. Both in the same runtime.
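The rule examples above might look something like this in a policy file. This is a hypothetical sketch of the concept, not Veto's actual schema; field names like `rules`, `when`, and `action` are illustrative assumptions.

```yaml
# Illustrative policy sketch — field names are hypothetical, not Veto's schema.
rules:
  - tool: transfer
    when: args.amount > 10000            # hard boundary on transfer size
    action: block

  - tool: write_file
    when: not args.path.startswith("/workspace/")
    action: block                        # restrict writes to the workspace

  - tool: send_email
    when: args.to not in team_domains
    action: require_approval             # human reviews external email

  - tool: send_email
    check: "Does this message disclose customer data?"
    action: semantic                     # optional LLM-backed judgment
```

The first three rules are deterministic checks on tool arguments; the last sketches the kind of ambiguous case a semantic check would handle.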
The SDK runs locally with no network dependency. Decisions are made in-process. The managed cloud adds a dashboard, team approvals, audit retention, and an MCP gateway that works with Claude, Cursor, Windsurf, and any MCP client out of the box.
Integration is one import and one function call. See the quick start.
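The interception pattern itself is simple to sketch. The following is a minimal, self-contained TypeScript illustration of authorization at the tool-call boundary — names like `guard` and `evaluate` are hypothetical and do not reflect the Veto SDK's actual API:

```typescript
// Sketch of runtime authorization at the tool-call boundary.
// All names here are illustrative, not the Veto SDK API.

type ToolCall = { tool: string; args: Record<string, unknown> };
type Decision = { action: "allow" | "block" | "approve"; reason?: string };
type Rule = (call: ToolCall) => Decision | null;

// Rules evaluate tool arguments directly, mirroring the policy examples above.
const rules: Rule[] = [
  (c) =>
    c.tool === "transfer" && (c.args.amount as number) > 10_000
      ? { action: "block", reason: "transfer over threshold" }
      : null,
  (c) =>
    c.tool === "write_file" &&
    !(c.args.path as string).startsWith("/workspace/")
      ? { action: "block", reason: "path outside workspace" }
      : null,
  (c) =>
    c.tool === "send_email" && !(c.args.to as string).endsWith("@example.com")
      ? { action: "approve", reason: "external recipient needs approval" }
      : null,
];

// First matching rule wins; no match means the call is allowed.
function evaluate(call: ToolCall): Decision {
  for (const rule of rules) {
    const d = rule(call);
    if (d) return d;
  }
  return { action: "allow" };
}

// Wrap a tool so every call is evaluated before execution.
// The agent sees only the wrapped function and is unaware of the check.
function guard<T>(
  tool: string,
  fn: (args: Record<string, unknown>) => T
): (args: Record<string, unknown>) => T {
  return (args) => {
    const d = evaluate({ tool, args });
    if (d.action !== "allow") throw new Error(`${d.action}: ${d.reason}`);
    return fn(args);
  };
}

const transfer = guard("transfer", (a) => `sent ${a.amount}`);
transfer({ amount: 500 }); // allowed: below the threshold
```

The tool function is unchanged; only the reference the agent holds is wrapped, which is what makes the layer invisible to the model and explicit to the operator.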
Open source
Authorization is infrastructure. It needs to be inspectable, portable, and not locked to a vendor. The SDK, policy engine, and CLI are Apache-2.0 with 3,000+ monthly installs across npm and PyPI. TypeScript and Python SDKs. No cloud required.
You own your policies. You own your data. The core authorization logic is identical whether you self-host or use the cloud. We earn revenue by making the operational layer — dashboard, team approvals, audit retention, compliance reporting, MCP gateway — worth paying for.
Team
Founder / CEO
First company at 14, scaled to 30 people. Left school at 16 to build. Designed the product and built the policy engine.
Cofounder / CTO
Built the runtime, API, and infrastructure from scratch. Systems, security, and everything that runs in production. 34 releases shipped.
COO / Founding GTM
Go-to-market and compliance. Built RiskLoggr for operational risk classification.
What we believe
Authorization is not identity
Knowing who the agent is doesn't tell you what it should be allowed to do. Identity is a prerequisite. Authorization at the tool-call boundary is the actual enforcement.
Deterministic enforcement, adaptive judgment
Hard boundaries where certainty exists. LLM-backed semantic checks where it doesn't. You don't build trust on probability alone, and you don't handle ambiguity with static rules alone.
The agent should be unaware
Governance wraps tools transparently. No prompt modifications. No behavior changes for the AI. The authorization layer is invisible to the model and explicit to the operator.
Can is not may
Whether an agent can perform an action tells you nothing about whether it should be allowed to. Systems that conflate capability with authority admit privilege escalation by default. Veto enforces the distinction.
Every platform gets its trust layer. This one is ours.