AutoGen Agent Guardrails with Veto
Runtime authorization for Microsoft AutoGen multi-agent systems. Control tool access per agent, secure GroupChat communication, and enforce policies on inter-agent tool delegation.
AutoGen multi-agent authorization with Veto
Microsoft AutoGen is a framework for building multi-agent AI systems where agents collaborate through conversation. In a GroupChat, agents can request actions from each other, creating chains of tool calls that are difficult to audit. Veto intercepts tool calls at the execution boundary, ensuring that no agent -- regardless of who asked it -- can execute unauthorized actions.
Why AutoGen agents need authorization
AutoGen's multi-agent architecture introduces risks that single-agent systems don't have. In a GroupChat, Agent A can ask Agent B to perform an action. Agent B can delegate to Agent C. The chain of delegation makes it hard to trace who authorized what. Without runtime guardrails, a compromised or manipulated agent can convince others to execute dangerous operations.
Inter-agent manipulation
Attackers can inject instructions into agent communication channels, convincing one agent to request dangerous actions from another. The receiving agent has no way to verify the legitimacy of the request.
Tool scope creep
Agents with broad tool access can be convinced to use tools outside their intended purpose. A "researcher" agent with database access might be manipulated into running mutation queries.
Delegation chains
In GroupChat, tool calls can be delegated through multiple agents. Each hop in the chain is an opportunity for privilege escalation if tools are not individually guarded.
Session runaway
Multi-agent loops can generate hundreds of tool calls in a single session. Without rate limits, a stuck agent loop can exhaust API quotas, modify large datasets, or send thousands of emails.
Quick start
Wrap your tools with Veto before passing them to AutoGen's AssistantAgent. The agent calls tools normally. Authorization happens at the tool execution boundary.
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
from veto import Veto
veto = await Veto.init()
# Define tools as normal functions
async def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    return f"Email sent to {to}"

async def query_database(sql: str) -> str:
    """Run a read-only SQL query."""
    return "Query returned 42 rows"

async def delete_records(table: str, where: str) -> str:
    """Delete records from a database table."""
    return f"Deleted matching records from {table}"
# Wrap tools with Veto before passing to agents
safe_tools = veto.wrap([send_email, query_database, delete_records])
model = OpenAIChatCompletionClient(model="gpt-5.4")
agent = AssistantAgent(
    name="data_assistant",
    model_client=model,
    tools=safe_tools,  # Guarded tools
    system_message="You help users query data and send reports.",
)

Scoped tools in GroupChat
In multi-agent GroupChat, each agent should have access to only the tools it needs. Use Veto's scope parameter to assign role-based policies to each agent's tool set. A researcher cannot send emails. A writer cannot delete data. Policies are enforced regardless of what other agents request.
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient
from veto import Veto
veto = await Veto.init()
model = OpenAIChatCompletionClient(model="gpt-5.4")
# Each agent gets its own scoped tool set
researcher_tools = veto.wrap(
    [search_web, read_document],
    scope="researcher",  # Policy scope limits what this agent can access
)
writer_tools = veto.wrap(
    [write_file, send_email],
    scope="writer",
)
reviewer_tools = veto.wrap(
    [approve_document, reject_document],
    scope="reviewer",
)
researcher = AssistantAgent(
    name="researcher",
    model_client=model,
    tools=researcher_tools,
    system_message="Research topics and gather information.",
)
writer = AssistantAgent(
    name="writer",
    model_client=model,
    tools=writer_tools,
    system_message="Write reports based on research findings.",
)
reviewer = AssistantAgent(
    name="reviewer",
    model_client=model,
    tools=reviewer_tools,
    system_message="Review and approve or reject documents.",
)
termination = TextMentionTermination("APPROVED")
team = RoundRobinGroupChat(
    [researcher, writer, reviewer],
    termination_condition=termination,
)
result = await team.run(task="Research Q3 sales and write a report")

Securing tool delegation
The core risk in multi-agent systems is delegation: Agent A asks Agent B to perform an action. If Agent B blindly executes, you have a privilege escalation path. Veto solves this by enforcing policies at the tool level. It does not matter which agent requests the tool call or which agent executes it. The tool itself is guarded.
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from veto import Veto
veto = await Veto.init()
# The delegation problem: Agent A asks Agent B to call a tool
# that Agent A itself is not authorized to use.
# Veto enforces policies at the tool boundary, not the agent boundary.
# Even if Agent A convinces Agent B to call delete_production_db,
# the tool itself checks authorization.
async def delete_production_db(table: str) -> str:
    """Delete a table from the production database."""
    return f"Deleted {table}"
# Wrap with strict policy: only "admin" scope can use this tool
admin_tools = veto.wrap(
    [delete_production_db],
    scope="admin",
)
# Even if a non-admin agent delegates to an admin agent,
# the tool call is validated against the calling context
worker = AssistantAgent(
    name="worker",
    model_client=OpenAIChatCompletionClient(model="gpt-5.4"),
    tools=veto.wrap([search_web, read_file], scope="worker"),
    system_message="You handle research tasks. Never modify production data.",
)
admin = AssistantAgent(
    name="admin",
    model_client=OpenAIChatCompletionClient(model="gpt-5.4"),
    tools=admin_tools,
    system_message="You handle admin tasks when explicitly approved.",
)

FunctionTool pattern
AutoGen 0.4+ uses FunctionTool for explicit tool definitions with descriptions. Wrap the underlying function with Veto, then create the FunctionTool from the wrapped handler.
from autogen_core.tools import FunctionTool
from veto import Veto
veto = await Veto.init()
# AutoGen 0.4+ FunctionTool pattern
async def calculate_risk(portfolio_id: str, scenario: str) -> str:
    """Calculate risk metrics for a portfolio under a scenario."""
    return f"VaR: $1.2M, CVaR: $1.8M for {portfolio_id}"
# Wrap the raw function, then create FunctionTool
safe_calculate = veto.wrap([calculate_risk])[0]
risk_tool = FunctionTool(
    safe_calculate.handler,
    description="Calculate portfolio risk metrics",
    name="calculate_risk",
)

Standalone guard
Use veto.guard() for pre-flight validation with custom context. Pass the calling agent's name and session ID to enable per-agent and per-session policies.
from veto import Veto
veto = await Veto.init()
# Pre-flight check without wrapping
async def custom_tool_handler(agent_name: str, tool_name: str, args: dict):
    decision = await veto.guard(
        tool_name,
        args,
        context={
            "agent": agent_name,
            "session_id": current_session,
        },
    )
    if decision.decision == "deny":
        return f"Action blocked: {decision.reason}"
    if decision.decision == "require_approval":
        return f"Pending human approval (id: {decision.approval_id})"
    return await execute_tool(tool_name, args)

Policy rules
Define per-agent and global policies in YAML. Scope-based rules restrict tools by agent role. Global rules apply across all agents in the session.
version: "1.0"
name: AutoGen multi-agent policies
rules:
  - id: researcher-read-only
    action: block
    scope: researcher
    tools: [write_file, send_email, delete_records]
    message: "Researcher agent is read-only"
  - id: writer-no-delete
    action: block
    scope: writer
    tools: [delete_records, delete_production_db]
    message: "Writer cannot delete data"
  - id: admin-approval-required
    action: require_approval
    scope: admin
    tools: [delete_production_db]
    message: "Production deletions require human approval"
  - id: block-external-emails
    action: block
    tools: [send_email]
    conditions:
      - field: arguments.to
        operator: not_matches
        value: '@company\.com$'
    message: "External emails blocked in multi-agent workflows"
  - id: rate-limit-all-agents
    action: block
    tools: ["*"]
    conditions:
      - field: context.session_tool_count
        operator: greater_than
        value: 50
    message: "Session tool call limit exceeded"

How Veto protects AutoGen agents
Wrap tools per agent
Each agent gets its own set of wrapped tools with a scope identifier. The scope maps to policy rules that define what that agent role can do.
GroupChat runs normally
Agents communicate, delegate tasks, and decide which tools to call. AutoGen's GroupChatManager handles speaker selection as usual.
Tool calls intercepted
When any agent calls a tool, Veto evaluates the call against the agent's scope policies and global rules. The agent's identity and arguments are both checked.
Enforcement and audit
Allowed calls execute. Blocked calls return an error message to the agent. All decisions are logged with agent name, tool, arguments, and outcome.
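The four steps above can be sketched as a plain Python wrapper. The names here (`ALLOWED_TOOLS`, `AUDIT_LOG`, `guard_tool`) are hypothetical stand-ins for illustration only; Veto's real policy engine and audit log are richer, but the shape is the same: wrap, evaluate, enforce, audit.

```python
import functools

# Hypothetical scope-to-tools policy table and in-memory audit log.
ALLOWED_TOOLS = {"researcher": {"search_web", "read_document"}}
AUDIT_LOG: list = []

def guard_tool(func, *, scope: str, agent: str):
    """Wrap a tool so every call is checked and logged before it runs."""
    @functools.wraps(func)
    def wrapper(**kwargs):
        allowed = func.__name__ in ALLOWED_TOOLS.get(scope, set())
        AUDIT_LOG.append({
            "agent": agent,
            "tool": func.__name__,
            "args": kwargs,
            "outcome": "allow" if allowed else "deny",
        })
        if not allowed:
            # Blocked calls return an error message to the agent
            return f"Action blocked: {func.__name__} not permitted for scope '{scope}'"
        return func(**kwargs)
    return wrapper

def send_email(to: str) -> str:
    return f"Email sent to {to}"

# A researcher-scoped agent cannot send email, no matter who asked it to.
guarded = guard_tool(send_email, scope="researcher", agent="researcher")
result = guarded(to="user@company.com")
print(result)  # Action blocked: send_email not permitted for scope 'researcher'
```

Because the check lives in the wrapper rather than in any agent's prompt, a delegating agent gains nothing by routing the call through a different teammate.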
Frequently asked questions
Does Veto work with AutoGen 0.4 (AgentChat)?
How does Veto handle GroupChat speaker selection?
Can one agent bypass another agent's tool restrictions?
How do I prevent runaway multi-agent loops?
Does Veto add latency to AutoGen conversations?
Related integrations
CrewAI multi-agent authorization
LangChain agent guardrails
Python SDK: Native Python SDK for agent authorization
Secure your AutoGen multi-agent systems in minutes.