Security

The Authorization Gap in AI Agents

AI agents can authenticate, but can they authorize? Understanding the critical security gap between authentication and authorization in autonomous AI systems.

Veto Team · March 15, 2026 · 8 min read

AI agents are becoming increasingly autonomous. They can browse the web, execute code, send emails, and modify databases. But there's a critical gap in how we secure them.

The Authentication-Authorization Gap

Most AI agent frameworks handle authentication well. Your agent has an API key, it connects to services, and it can take actions on your behalf. But authentication only answers one question: "Who is this agent?"

Authorization answers a different question: "What is this agent allowed to do?"

When a coding agent deletes a production database after being told to stop, that's an authorization failure. When a financial agent transfers funds without approval, that's an authorization failure. When a browser agent extracts sensitive data and sends it externally, that's an authorization failure.

Why This Gap Exists

The gap exists because we've been treating agents like users. We give them credentials and assume they'll behave. But agents aren't users—they don't have judgment, they don't understand context, and they can be tricked.

Traditional authorization (RBAC, ABAC) assumes the requester can reason about permissions. An agent might have access to a tool, but it doesn't understand the implications of using that tool in a specific context.
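To make that concrete, here's a minimal sketch (illustrative names, not any real framework) of why a static role check can't tell a safe call from a dangerous one:

```python
# Illustrative sketch: classic RBAC grants tool access by role alone.
ROLE_PERMISSIONS = {
    "analyst": {"read_ledger", "transfer_funds"},
    "viewer": {"read_ledger"},
}

def rbac_allows(role: str, tool: str) -> bool:
    """Static check: the role either has the tool or it doesn't."""
    return tool in ROLE_PERMISSIONS.get(role, set())

# Both calls pass the same check, even though the second might be a
# prompt-injected transfer the user never asked for. RBAC can't see the
# difference, because it never looks at the arguments or the context.
print(rbac_allows("analyst", "transfer_funds"))  # a $50 vendor payment
print(rbac_allows("analyst", "transfer_funds"))  # a $90,000 exfiltration
```

The check is identical in both cases; the risk is not. That gap is what the three layers below are designed to close.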

The Three Layers of Agent Authorization

A complete agent authorization system operates at three distinct layers:

  • Tool Discovery — Which tools can the agent see?
  • Tool Execution — When can the agent use each tool?
  • Argument Validation — What parameters are allowed?

Most frameworks only handle the first layer. Veto addresses all three.

Tool Discovery Filtering

Before the LLM even sees available tools, you can filter based on runtime context. This prevents the model from attempting actions it shouldn't know about:

tool_filtering.py
from langchain.agents.middleware import wrap_model_call

@wrap_model_call
def filter_tools_by_role(request, handler):
    """Filter tools before the LLM sees them."""
    user_role = request.runtime.context.get("role", "viewer")

    role_tools = {
        "admin": request.tools,  # All tools
        "analyst": [t for t in request.tools if not t.name.startswith("delete_")],
        "viewer": [t for t in request.tools if t.name.startswith("read_")]
    }

    filtered = role_tools.get(user_role, [])
    return handler(request.override(tools=filtered))
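The filtering logic itself is plain Python, so you can unit-test it without an agent or an LLM. A standalone version using stand-in tool objects (real LangChain tools expose a `.name` attribute the same way):

```python
from types import SimpleNamespace

# Stand-in tools; real LangChain tools expose a .name attribute the same way.
TOOLS = [SimpleNamespace(name=n) for n in
         ("read_ledger", "write_report", "delete_record")]

def tools_for_role(role, tools):
    """Role-to-tools mapping, factored out so it can be tested directly."""
    role_tools = {
        "admin": tools,
        "analyst": [t for t in tools if not t.name.startswith("delete_")],
        "viewer": [t for t in tools if t.name.startswith("read_")],
    }
    # Unknown roles see no tools at all (deny by default).
    return role_tools.get(role, [])

print([t.name for t in tools_for_role("analyst", TOOLS)])
# ['read_ledger', 'write_report']
```

Deny-by-default for unrecognized roles is the important design choice here: a misconfigured context should shrink the agent's surface area, never expand it.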

Tool Execution Guardrails

When the agent attempts to use a tool, intercept the call and enforce policies:

execution_guardrails.py

from veto import Veto, Policy, Decision
from langchain.agents.middleware import wrap_tool_call
from langchain_core.messages import ToolMessage

veto = Veto(api_key="veto_live_xxx")

@wrap_tool_call
def authorize_tool_call(request, handler):
    """Authorization middleware for tool execution."""
    tool_name = request.tool_call["name"]
    args = request.tool_call["args"]

    # Check policy with Veto
    decision = veto.validate(
        tool=tool_name,
        arguments=args,
        context=request.config.get("configurable", {})
    )

    if decision == Decision.DENY:
        return ToolMessage(
            content=f"Blocked: {tool_name} not authorized for this operation",
            tool_call_id=request.tool_call["id"]
        )

    if decision == Decision.APPROVAL_REQUIRED:
        # Route to human approval queue
        return request.interrupt_for_approval()

    return handler(request)
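If you want to prototype the deny/escalate/allow flow before wiring up a hosted policy service, the decision logic can be sketched as a small in-process function (illustrative thresholds and rules, not Veto's actual engine):

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    APPROVAL_REQUIRED = "approval_required"

def evaluate(tool: str, arguments: dict, context: dict) -> Decision:
    """Toy policy engine: deny destructive tools for non-admins,
    escalate large transfers to a human, allow the rest."""
    role = context.get("role", "viewer")
    if tool.startswith("delete_") and role != "admin":
        return Decision.DENY
    if tool == "transfer_funds" and arguments.get("amount", 0) > 5000:
        return Decision.APPROVAL_REQUIRED
    return Decision.ALLOW

print(evaluate("delete_table", {}, {"role": "analyst"}))   # Decision.DENY
print(evaluate("transfer_funds", {"amount": 9000}, {}))    # Decision.APPROVAL_REQUIRED
print(evaluate("read_ledger", {}, {"role": "viewer"}))     # Decision.ALLOW
```

Even a toy version like this makes the three outcomes explicit, which is the key property: the agent never gets a silent yes.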

Argument-Level Validation

Even authorized tools can be misused. Validate arguments against schemas and constraints:

argument_validation.py

from veto import Veto, Constraint
from pydantic import BaseModel, Field, field_validator

class TransferRequest(BaseModel):
    """Validated arguments for fund transfer."""
    amount: float = Field(gt=0, le=100000)
    destination: str
    reason: str

    @field_validator("destination")
    @classmethod
    def validate_destination(cls, v):
        if v.endswith("@competitor.com"):
            raise ValueError("Transfers to competitor domains blocked")
        return v

# Register with Veto
veto.register_tool(
    name="transfer_funds",
    schema=TransferRequest,
    constraints=[
        Constraint.max_amount(10000, per="transaction"),
        Constraint.require_approval_if(amount_gt=5000),
        Constraint.rate_limit(5, per="hour")
    ]
)
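Schema checks like these are easy to exercise directly. A plain-Python equivalent of the same rules (no pydantic dependency, purely for illustration) shows how malformed arguments get rejected before the tool ever runs:

```python
def validate_transfer(args: dict) -> list:
    """Mirror of the TransferRequest rules above, returning all violations."""
    errors = []
    amount = args.get("amount", 0)
    if not (0 < amount <= 100_000):
        errors.append("amount out of range")
    if args.get("destination", "").endswith("@competitor.com"):
        errors.append("destination domain blocked")
    if not args.get("reason"):
        errors.append("reason required")
    return errors

bad = {"amount": 250_000, "destination": "pay@competitor.com", "reason": ""}
print(validate_transfer(bad))
# ['amount out of range', 'destination domain blocked', 'reason required']
```

Collecting every violation (rather than failing on the first) matters in practice: the audit log should show everything that was wrong with the attempted call, not just the first rule that tripped.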

Real-World Architecture

Here's how a complete agent authorization stack looks in production:

production_auth_stack.py

from veto import Veto, Policy, Decision
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langgraph.checkpoint.postgres import PostgresSaver
import psycopg

# 1. Initialize Veto with your policies
veto = Veto.init(
    api_key="veto_live_xxx",
    project="financial-agent",
    environment="production"
)

# 2. Configure state persistence for audit trails
# (PostgresSaver wraps a live psycopg connection, not a raw DSN string)
checkpointer = PostgresSaver(psycopg.connect(connection_string))

# 3. Create agent with all authorization layers
agent = create_agent(
    model=ChatOpenAI(model="gpt-4o"),
    tools=[transfer_funds, read_ledger, send_report],
    middleware=[
        filter_tools_by_role,      # Layer 1: Discovery
        authorize_tool_call,       # Layer 2: Execution
        validate_arguments,        # Layer 3: Arguments
    ],
    checkpointer=checkpointer     # Audit persistence
)

# 4. Invoke with user context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Transfer $5000 to vendor"}]},
    config={
        "configurable": {
            "user_id": "user-123",
            "role": "analyst",
            "department": "finance"
        }
    }
)

The Path Forward

As AI agents become more capable, the authorization gap becomes more dangerous. We need infrastructure that treats agents as what they are: powerful tools that need guardrails, not trusted users that need permissions.

The good news? You can start closing this gap today with just a few lines of code. Check out our Python SDK.
