LangChain Agent Guardrails with Veto

Runtime authorization for LangChain agents and LangGraph workflows. Wrap tools with guardrails that block dangerous actions, enforce policies, and require human approval for sensitive operations.

LangChain authorization and security

LangChain authorization controls what actions your agents can perform at runtime. Unlike prompt-based instructions, Veto intercepts tool calls through middleware and validates them against your policies before execution. Works with LangChain agents, LangGraph ToolNodes, and custom tool implementations.

Why LangChain agents need authorization

LangChain's power comes from its tool abstractions. Agents can invoke any tool you give them. That flexibility becomes a liability when tools have side effects like sending emails, processing payments, or modifying databases.

Tool abstractions

LangChain's @tool decorator and StructuredTool make it easy to expose any Python function to agents. Each tool is a potential attack surface if not properly constrained.

Multiple agent types

ReAct agents, Plan-and-Execute agents, and LangGraph state machines each have different execution patterns. Authorization must work across all of them.

Unpredictable behavior

LLMs can hallucinate tool arguments, chain unexpected tool sequences, or retry blocked operations. Runtime guardrails catch what prompts miss.

Production requirements

SOC2, HIPAA, and financial regulations require audit trails and access controls. LangChain provides neither out of the box.

Quick start with LangChain

Wrap your LangChain tools with Veto in two lines. The middleware intercepts every tool call, validates it against your policies, and either allows, denies, or routes to human approval.

langchain_agent.py
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.tools import tool

from veto import Veto, VetoOptions
from veto.integrations.langchain import VetoMiddleware

# Define your LangChain tools
@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    # Your email sending logic
    return f"Email sent to {to}"

@tool
def process_payment(amount: float, recipient: str) -> str:
    """Process a payment transaction."""
    # Your payment processing logic
    return f"Payment of ${amount} processed"

# Initialize Veto (await must run inside an async function)
veto = await Veto.init(VetoOptions(
    api_key="veto_live_xxx",  # Or set the VETO_API_KEY env var
))

# Create the middleware
middleware = VetoMiddleware(
    veto,
    on_deny=lambda name, args, reason: print(f"Blocked {name}: {reason}"),
)

# Create agent with guardrails
llm = ChatOpenAI(model="gpt-4o")
tools = [send_email, process_payment]

agent = create_react_agent(llm, tools)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    middleware=[middleware],  # Add Veto middleware
)

# Run the agent - tool calls are now validated
result = await agent_executor.ainvoke({
    "input": "Send a $5000 payment to vendor@example.com"
})

LangGraph ToolNode integration

For LangGraph workflows, wrap your ToolNode with Veto to guard tool execution at the graph level. This works with state machines, multi-agent graphs, and any LangGraph topology.

langgraph_agent.py
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

from veto import Veto, VetoOptions
from veto.integrations.langchain import create_veto_tool_node

# Initialize Veto
veto = await Veto.init(VetoOptions(api_key="veto_live_xxx"))

# Create your tools and ToolNode (send_email and process_payment from the
# quick start, plus a query_database tool defined elsewhere)
tools = [send_email, process_payment, query_database]
tool_node = ToolNode(tools)

# Wrap with Veto for authorization
veto_tool_node = create_veto_tool_node(veto, tool_node)

# Build your LangGraph
workflow = StateGraph(MessagesState)
workflow.add_node("tools", veto_tool_node)
workflow.add_node("agent", agent_node)  # your model-calling node, defined elsewhere
# ... rest of your graph definition

Tool call authorization middleware

LangChain's @wrap_tool_call decorator intercepts tool execution before it reaches your function. This enables authorization checks that block unauthorized access before any side effects occur.

The middleware receives the tool call request with name, arguments, and runtime config. Return a ToolMessage to deny, or call the handler to allow.

authorization_middleware.py
from langchain.tools import tool
from langchain.agents.middleware import wrap_tool_call
from langchain.tools.tool_node import ToolCallRequest
from langchain.messages import ToolMessage
from typing import Callable

# Permission store (in production, use a database)
PERMISSIONS = {
    "admin": ["delete_data", "write_data", "read_data"],
    "editor": ["write_data", "read_data"],
    "viewer": ["read_data"]
}

@wrap_tool_call
def authorize_tool_call(
    request: ToolCallRequest,
    handler: Callable[[ToolCallRequest], ToolMessage],
) -> ToolMessage:
    """Authorization middleware that checks permissions before tool execution."""
    tool_name = request.tool_call["name"]
    user_permissions = request.config.get("configurable", {}).get("permissions", [])

    if tool_name not in user_permissions:
        return ToolMessage(
            content=f"Access denied: You don't have permission to use '{tool_name}'",
            name=tool_name,
            tool_call_id=request.tool_call["id"]
        )

    print(f"[Auth] Authorized tool call: {tool_name}")
    return handler(request)

# Using with an agent
from langchain.agents import create_agent

agent = create_agent(
    model="gpt-4",
    tools=[read_data, write_data, delete_data],
    middleware=[authorize_tool_call],
)

The middleware pattern works for any LangChain agent including ReAct agents, function-calling agents, and custom implementations. Authorization runs before the tool function executes, preventing unauthorized side effects entirely.
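Stripped of the LangChain types, the pattern reduces to a check-then-call wrapper: the permission test runs first, and the tool body only executes on success. The sketch below is plain Python for illustration (the PERMISSIONS map and tool names are assumptions, not part of Veto or LangChain):

```python
from typing import Callable

# Illustrative permission map; in production this would come from a database
PERMISSIONS = {"editor": {"write_data", "read_data"}, "viewer": {"read_data"}}

def with_authorization(tool_fn: Callable[..., str], tool_name: str, role: str) -> Callable[..., str]:
    """Wrap a tool function so the permission check runs before any side effect."""
    def guarded(*args, **kwargs) -> str:
        if tool_name not in PERMISSIONS.get(role, set()):
            # Deny before the tool body runs, so no side effect occurs
            return f"Access denied: '{role}' cannot use '{tool_name}'"
        return tool_fn(*args, **kwargs)
    return guarded

def write_data(record: str) -> str:
    return f"Wrote {record}"

guarded_write = with_authorization(write_data, "write_data", "viewer")
print(guarded_write("row-1"))  # Denied: a viewer lacks the write_data permission
```

The denial is returned as a value rather than raised, mirroring how the middleware returns a ToolMessage so the agent can recover and try another approach.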

Runtime context-based tool filtering

Filter available tools before the LLM sees them using the @wrap_model_call decorator. This is more secure than post-hoc authorization because the model never attempts unauthorized actions.

role_based_filtering.py
from dataclasses import dataclass
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from typing import Callable

@dataclass
class UserContext:
    user_id: str
    role: str
    permissions: list[str]

@wrap_model_call
def role_based_tool_filter(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Filter tools based on user role from runtime context."""
    context: UserContext = request.runtime.context
    role = context.role

    # Define role-based tool access
    role_tools = {
        "admin": request.tools,  # All tools
        "editor": [t for t in request.tools if t.name != "delete_data"],
        "viewer": [t for t in request.tools if t.name.startswith("read_")]
    }

    filtered_tools = role_tools.get(role, [])
    request = request.override(tools=filtered_tools)

    print(f"[RBAC] User {context.user_id} with role '{role}' has access to tools: {[t.name for t in filtered_tools]}")

    return handler(request)

# Create agent with context schema
agent = create_agent(
    model="gpt-4",
    tools=[read_data, write_data, delete_data],
    middleware=[role_based_tool_filter],
    context_schema=UserContext
)

# Invoke with user context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Delete all records"}]},
    context=UserContext(user_id="user-123", role="editor", permissions=["read", "write"])
)

Runtime context is injected server-side and cannot be manipulated by prompt injection or user input. This provides a strong security boundary for role-based access control (RBAC) in multi-tenant deployments.

Human-in-the-loop with interrupt()

LangGraph's interrupt() function pauses agent execution for human approval. The agent state persists across the pause/resume cycle using a checkpointer.

hitl_approval.py
from langchain.tools import tool
from langgraph.types import interrupt, Command
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import InMemorySaver

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient. Requires approval."""
    # Pause for human approval
    response = interrupt({
        "action": "send_email",
        "to": to,
        "subject": subject,
        "body": body,
        "message": "Approve sending this email?"
    })

    if response.get("action") == "approve":
        # Resume with potentially modified values
        final_to = response.get("to", to)
        final_subject = response.get("subject", subject)
        final_body = response.get("body", body)
        return f"Email sent to {final_to} with subject '{final_subject}'"
    return "Email cancelled by user"

@tool
def transfer_funds(from_account: str, to_account: str, amount: float) -> str:
    """Transfer funds between accounts. Requires approval."""
    response = interrupt({
        "action": "transfer_funds",
        "from": from_account,
        "to": to_account,
        "amount": amount,
        "risk_level": "high" if amount > 10000 else "medium"
    })

    if response.get("approved"):
        return f"Transferred ${response.get('amount', amount)} from {from_account} to {to_account}"
    return "Transfer cancelled"

# Create agent with checkpointer for state persistence
checkpointer = InMemorySaver()
agent = create_react_agent(
    model="openai:gpt-4",
    tools=[send_email, transfer_funds],
    checkpointer=checkpointer
)

# Invoke and handle interrupt
config = {"configurable": {"thread_id": "session-123"}}
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Send email to john@example.com about the meeting"}]},
    config
)

# Check if there's an interrupt
if result.get("__interrupt__"):
    print(f"Approval needed: {result['__interrupt__']}")
    # Resume with approval
    result = agent.invoke(
        Command(resume={"action": "approve"}),
        config
    )

The interrupt pattern is essential for high-risk operations like financial transactions, external communications, or infrastructure changes. Approvers can accept, edit values, or cancel the operation entirely.
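A small helper can compute the risk_level value included in the interrupt payload, so approvers see consistent labels across tools. This is a hypothetical sketch, not part of the Veto SDK, and the thresholds are illustrative:

```python
def classify_transfer_risk(amount: float, new_recipient: bool = False) -> str:
    """Assign a risk label for an approval request (thresholds are illustrative)."""
    if amount > 10_000 or new_recipient:
        return "high"
    if amount > 1_000:
        return "medium"
    return "low"

# The label slots into the interrupt payload shown in the transfer_funds tool
payload = {
    "action": "transfer_funds",
    "amount": 15_000,
    "risk_level": classify_transfer_risk(15_000),  # "high"
}
```

Centralizing the classification keeps the approval UI consistent even as individual tools evolve.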

Custom ToolNode with authorization wrapper

Build a custom ToolNode that wraps tool execution with authorization logic. This enables fine-grained permission checks per tool call with custom denial messages and audit logging.

authorized_tool_node.py
from langchain_core.tools import tool
from langchain_core.messages import ToolMessage, AIMessage
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, MessagesState, START, END
from typing import Literal

@tool
def read_database(query: str) -> str:
    """Execute a read-only database query."""
    return f"Query result: {query}"

@tool
def write_database(table: str, data: dict) -> str:
    """Write data to database table."""
    return f"Wrote to {table}: {data}"

@tool
def delete_records(table: str, condition: str) -> str:
    """Delete records from table."""
    return f"Deleted from {table} where {condition}"

class AuthorizedToolNode:
    """ToolNode wrapper with built-in authorization."""

    def __init__(self, tools, permission_checker=None):
        self.tool_node = ToolNode(tools)
        self.tools_by_name = {t.name: t for t in tools}
        self.permission_checker = permission_checker or self.default_permission_check

    def default_permission_check(self, tool_name: str, user_context: dict) -> bool:
        """Default RBAC permission check."""
        role_permissions = {
            "admin": {"read_database", "write_database", "delete_records"},
            "editor": {"read_database", "write_database"},
            "viewer": {"read_database"}
        }
        role = user_context.get("role", "viewer")
        return tool_name in role_permissions.get(role, set())

    def __call__(self, state: MessagesState, config: dict | None = None) -> dict:
        """Execute tools with authorization check."""
        config = config or {}  # Guard against being invoked without a config
        messages = state["messages"]
        last_message = messages[-1]

        if not isinstance(last_message, AIMessage) or not last_message.tool_calls:
            return {"messages": []}

        user_context = config.get("configurable", {}).get("user_context", {})
        tool_messages = []

        for tool_call in last_message.tool_calls:
            tool_name = tool_call["name"]

            # Authorization check
            if not self.permission_checker(tool_name, user_context):
                tool_messages.append(ToolMessage(
                    content=f"Authorization denied for tool '{tool_name}'. Contact administrator.",
                    tool_call_id=tool_call["id"],
                    name=tool_name
                ))
                continue

            # Execute authorized tool
            result = self.tool_node.invoke({"messages": [AIMessage(content="", tool_calls=[tool_call])]})
            tool_messages.extend(result.get("messages", []))

        return {"messages": tool_messages}

# Use in a graph
tools = [read_database, write_database, delete_records]
authorized_tool_node = AuthorizedToolNode(tools)

builder = StateGraph(MessagesState)
builder.add_node("tools", authorized_tool_node)
builder.add_edge(START, "tools")
builder.add_edge("tools", END)

graph = builder.compile()

# Invoke with user context
result = graph.invoke(
    {"messages": [AIMessage(content="", tool_calls=[
        {"id": "1", "name": "delete_records", "args": {"table": "users", "condition": "id=1"}}
    ])]},
    config={"configurable": {"user_context": {"role": "editor", "user_id": "user-123"}}}
)

This pattern allows reusing the same tool set across different user roles while enforcing access control. The permission checker can be customized to integrate with your existing identity provider or permission management system.
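For instance, a checker that reads scopes from decoded identity-provider claims can be passed to AuthorizedToolNode via the permission_checker argument. This is a sketch; the claim shape and the "tool:" scope prefix are assumptions, not a real provider's schema:

```python
def scope_based_checker(tool_name: str, user_context: dict) -> bool:
    """Allow a tool only if the user's token scopes include it.

    Expects user_context to carry decoded claims, e.g.
    {"scopes": ["tool:read_database", "tool:write_database"]}.
    """
    scopes = set(user_context.get("scopes", []))
    return f"tool:{tool_name}" in scopes

# Usage with the wrapper defined above:
# authorized_tool_node = AuthorizedToolNode(tools, permission_checker=scope_based_checker)
```

Because the checker is just a callable taking a tool name and user context, the same AuthorizedToolNode works unchanged whether permissions come from roles, scopes, or an external policy service.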

Common LangChain guardrails

Policies tailored to common LangChain agent patterns. Define these in your Veto dashboard and they apply across all your agents.

Email tool authorization

Block emails to external domains. Require approval for bulk sends. Validate attachments. Audit all outgoing messages.

Policy: send_email blocked if to contains "@competitor.com"

Payment transaction limits

Enforce daily and per-transaction limits. Require human approval above thresholds. Block payments to new recipients.

Policy: process_payment requires approval if amount > 1000

Database query filtering

Block DELETE and DROP statements. Redact PII from results. Limit query complexity and execution time.

Policy: query_database blocked if query contains "DELETE" or "DROP"

API rate limiting

Per-tool rate limits. Daily quotas. Burst protection. Essential for agents calling external APIs with costs.

Policy: Max 10 calls to openai_api per minute
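The dashboard policies above correspond roughly to local checks like the following. This is an illustrative sketch of the decision logic only (the domain, thresholds, and keyword list are assumptions; real Veto policies are defined and evaluated server-side):

```python
import time
from collections import deque

def check_email(args: dict) -> str:
    # Block sends to a disallowed domain
    return "deny" if args["to"].endswith("@competitor.com") else "allow"

def check_payment(args: dict) -> str:
    # Route large payments to human approval
    return "require_approval" if args["amount"] > 1000 else "allow"

def check_query(args: dict) -> str:
    # Reject destructive SQL keywords
    banned = ("DELETE", "DROP")
    return "deny" if any(kw in args["query"].upper() for kw in banned) else "allow"

class RateLimiter:
    """Sliding-window limit: at most max_calls per window_seconds."""

    def __init__(self, max_calls: int = 10, window_seconds: float = 60.0):
        self.max_calls, self.window = max_calls, window_seconds
        self.calls: deque[float] = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```

Each check returns one of the three decisions the middleware acts on: allow, deny, or require_approval.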

Getting started

1. Install the Veto SDK

pip install veto

2. Create a project and get your API key

Sign up at veto.so, create a project, and copy your API key from the dashboard.

3. Define your policies

Create policies for your tools in the dashboard. Use constraints on arguments, require approval for sensitive operations, or deny certain patterns.

4. Add VetoMiddleware to your agent

Import VetoMiddleware and add it to your LangChain agent's middleware list. All tool calls now go through Veto for authorization.

Features for LangChain agents

Zero code changes

Middleware integration means your agent code doesn't change. Add Veto to the middleware list and all tools are protected.

Human-in-the-loop

Route sensitive tool calls to human approval queues. Approvers get Slack or email notifications with one-click allow/deny.

Team policies

Policies are managed centrally in the dashboard. Update rules without redeploying agents. Changes apply immediately.

Full audit trail

Every tool call logged with arguments, decision, and timestamp. Export for compliance. Queryable via API or dashboard.
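A single audit entry might be serialized like this (field names are illustrative, not Veto's actual export schema):

```python
import json
from datetime import datetime, timezone

def audit_record(tool: str, args: dict, decision: str) -> str:
    """Serialize one tool-call decision as a JSON log line."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "arguments": args,
        "decision": decision,  # "allow", "deny", or "require_approval"
    })

line = audit_record("process_payment", {"amount": 5000}, "require_approval")
```

One JSON line per decision keeps the trail easy to ship to any log aggregator for compliance queries.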

Frequently asked questions

How does Veto integrate with LangChain agents?
Veto provides a VetoMiddleware class that integrates with LangChain's middleware system. Add it to your AgentExecutor's middleware parameter. Every tool call is intercepted and validated against your policies before execution. Works with ReAct agents, Plan-and-Execute agents, and custom agent implementations.
Does Veto work with LangGraph workflows?
Yes. Use create_veto_tool_node to wrap your LangGraph ToolNode. This adds authorization at the graph level, working with any LangGraph topology including state machines, multi-agent graphs, and hierarchical workflows. Tool calls are validated before the node executes them.
What happens when a tool call is denied?
By default, Veto returns a ToolMessage to the agent explaining the denial. The agent can then try a different approach. You can also configure throw_on_deny to raise an exception instead, or route to human approval for manual review. The denial is always logged with full context.
Can I use Veto with custom LangChain tools?
Yes. Any tool decorated with @tool, StructuredTool instances, or custom tool classes work with Veto. The middleware extracts the tool name and arguments from the tool call request. Define policies in your dashboard by matching tool names and constraining argument values.

Guardrails for your LangChain agents in minutes.