
Vercel AI SDK Guardrails with Veto

Runtime authorization for streaming agents. Intercept tool calls, enforce policies, and control your Vercel AI SDK agents without modifying their behavior.

What are Vercel AI SDK guardrails?

Vercel AI SDK guardrails are runtime controls that intercept tool calls made by AI agents built with the Vercel AI SDK. They provide streaming authorization, allowing you to evaluate and enforce policies on tool calls in real-time as agents execute workflows with streaming responses.

Streaming agents with Vercel AI SDK

The Vercel AI SDK provides a powerful abstraction for building AI agents with streaming responses, tool calling, and multi-step workflows. It supports OpenAI, Anthropic, Google, and other providers through a unified API. Agents can call tools, chain operations, and stream results back to users in real-time.

With streaming agents, authorization needs to happen fast. Every tool call during a streaming response must be evaluated before execution. Veto's in-process SDK evaluates policies in under 10ms, keeping your agents responsive while maintaining control.

Quick start

Wrap your tools with Veto's authorization layer. The agent's code doesn't change.

index.ts
TypeScript
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { promises as fs } from 'node:fs';
import { z } from 'zod';
import { Veto } from 'veto-sdk';

// Initialize Veto with your API key
const veto = new Veto({
  apiKey: process.env.VETO_API_KEY,
  projectId: 'proj_abc123',
});

// Define your tools with Veto authorization
const deleteFileTool = tool({
  description: 'Delete a file from the filesystem',
  inputSchema: z.object({
    path: z.string().describe('Path to the file to delete'),
  }),
  execute: async ({ path }) => {
    // Authorize the tool call before execution
    const decision = await veto.authorize({
      tool: 'delete_file',
      arguments: { path },
    });

    if (decision.status === 'denied') {
      // Return the denial as data so it streams to the client and the model can adapt
      return { error: 'denied', reason: decision.reason };
    }

    if (decision.status === 'approval_required') {
      // Return immediately, let approval workflow handle it
      return { status: 'pending_approval', approvalId: decision.approvalId };
    }

    // Authorized - execute the actual operation
    await fs.unlink(path);
    return { status: 'deleted', path };
  },
});

// Use with Vercel AI SDK
const result = await generateText({
  model: openai('gpt-4'),
  tools: { delete_file: deleteFileTool },
  prompt: 'Delete the old log files in /var/logs',
});

Streaming authorization patterns

When agents stream responses with multiple tool calls, authorization must keep pace. Veto provides patterns optimized for streaming workflows.

Real-time evaluation

Policy evaluation happens in-process. No network latency for local mode. Decisions in under 10ms per tool call.

Streaming responses

Denied tool calls return error messages that stream to the client. The agent can recover or retry with modified arguments.
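As a sketch of this deny-and-recover pattern, the tool body below returns the denial as an ordinary result rather than throwing, so the model sees the reason and can retry. The `authorize` function is a hypothetical local stand-in for `veto.authorize`; only the shape of the decision object follows the example above.

```typescript
// Deny-and-recover sketch. `authorize` is a local stand-in for veto.authorize
// (assumption: the real client returns a decision with { status, reason }).
type Decision = { status: 'allowed' } | { status: 'denied'; reason: string };

// Hypothetical local rule: deletes are only allowed under /tmp
async function authorize(tool: string, args: { path: string }): Promise<Decision> {
  if (tool === 'delete_file' && !args.path.startsWith('/tmp/')) {
    return { status: 'denied', reason: 'deletes are restricted to /tmp' };
  }
  return { status: 'allowed' };
}

// Tool body that returns the denial as data instead of throwing, so the
// denial streams to the client and the model can retry with new arguments.
async function executeDelete(args: { path: string }) {
  const decision = await authorize('delete_file', args);
  if (decision.status === 'denied') {
    return { error: 'denied', reason: decision.reason };
  }
  return { status: 'deleted', path: args.path }; // allowed: perform the delete
}
```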

Approval routing

Route sensitive operations to human approval during streaming. The response pauses until approved or denied.
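One way to implement the pause is a bounded poll on the approval's status. The in-memory `approvals` map below is a stand-in for Veto's approval queue (an assumption; the real flow would look up the `approvalId` from the decision), but the pause-with-timeout shape carries over:

```typescript
// In-memory stand-in for an approval queue keyed by approvalId (assumption).
const approvals = new Map<string, 'approved' | 'denied'>();

// Poll until a human decides, or fail closed on timeout.
async function waitForApproval(
  approvalId: string,
  { intervalMs = 20, timeoutMs = 500 } = {},
): Promise<'approved' | 'denied' | 'timeout'> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const verdict = approvals.get(approvalId);
    if (verdict) return verdict;                         // decision arrived
    await new Promise((r) => setTimeout(r, intervalMs)); // keep the step paused
  }
  return 'timeout'; // no decision in time: treat as denied
}
```

Treating a timeout as a denial keeps the agent fail-closed: a stalled approval never silently authorizes the operation.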

Context-aware policies

Policies can reference conversation context, user identity, and session state for dynamic authorization decisions.
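For illustration, here is a hypothetical context-aware rule written as a plain function (not Veto's policy language): members may only email their own domain, while admins may email anyone.

```typescript
// Session/user context a policy can reference (hypothetical shape).
interface AuthContext {
  userId: string;
  role: 'admin' | 'member';
  domain: string;
}

type Verdict = { status: 'allowed' } | { status: 'denied'; reason: string };

// Hypothetical rule: non-admins may only send within their own domain.
function evaluateSendEmail(args: { to: string }, ctx: AuthContext): Verdict {
  if (ctx.role === 'admin') return { status: 'allowed' };
  if (args.to.endsWith(`@${ctx.domain}`)) return { status: 'allowed' };
  return { status: 'denied', reason: 'external email requires the admin role' };
}
```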

Multi-step agent authorization

For agents that chain tool calls across multiple steps (for example with a `stopWhen: stepCountIs(n)` stopping condition), Veto authorizes each step independently.

agent.ts
TypeScript
import { generateText, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { Veto } from 'veto-sdk';
import { createVetoTools } from 'veto-sdk/vercel-ai';

const veto = new Veto({ apiKey: process.env.VETO_API_KEY });

// Wrap all tools with authorization
const authorizedTools = createVetoTools(veto, {
  search_web: searchWebTool,
  read_file: readFileTool,
  write_file: writeFileTool,
  send_email: sendEmailTool,
}, {
  // Policy configuration
  policyId: 'pol_production',
  onDenied: (tool, args, decision) => {
    console.log(`Denied ${tool}: ${decision.reason}`);
    return { error: 'Operation not permitted', reason: decision.reason };
  },
  onApprovalRequired: async (tool, args, decision) => {
    // Notify user and wait for approval
    await notifyApprovalQueue(decision.approvalId);
    return { pending: true, approvalId: decision.approvalId };
  },
});

// Multi-step agent with authorization at each step
const result = await generateText({
  model: openai('gpt-4'),
  tools: authorizedTools,
  stopWhen: stepCountIs(10),
  prompt: 'Research the latest AI news and send a summary email to the team',
});


Frequently asked questions

How do Vercel AI SDK guardrails work?
Veto wraps your tool definitions with an authorization layer. When the agent calls a tool, Veto intercepts the call, evaluates it against your policies, and either allows, denies, or routes to approval. The agent's reasoning process is unchanged—it just receives success or error responses from tools.
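Conceptually, the wrapper is a higher-order function around each tool's execute. The sketch below is not createVetoTools' actual source, just the shape of the interception:

```typescript
type Execute<A, R> = (args: A) => Promise<R>;
type Authorize = (tool: string, args: unknown) => Promise<{ status: string; reason?: string }>;

// Wrap an execute function so every call is authorized first. Denials come
// back as ordinary tool results, so the model's reasoning loop is unchanged.
function withAuthorization<A, R>(
  name: string,
  execute: Execute<A, R>,
  authorize: Authorize,
): Execute<A, R | { error: string; reason?: string }> {
  return async (args: A) => {
    const decision = await authorize(name, args);
    if (decision.status !== 'allowed') {
      return { error: 'not permitted', reason: decision.reason };
    }
    return execute(args); // authorized: forward to the original tool
  };
}
```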
Does authorization slow down streaming responses?
No. Veto's in-process SDK evaluates policies in under 10ms. For most agents, this is negligible compared to LLM inference time. Cloud mode adds minimal latency for features like team approvals and audit retention, but doesn't block the critical path.
Can I use guardrails with Vercel AI SDK's useChat and useCompletion hooks?
Yes. Guardrails operate at the tool level, which works with all Vercel AI SDK patterns. Whether you use generateText, streamText, useChat, or custom hooks, tool calls are intercepted and authorized the same way.
What happens when a tool call is denied during streaming?
The tool returns an error response that streams to the client. The agent can see the denial reason and either retry with different arguments, try an alternative approach, or inform the user. All denials are logged with full context for audit trails.

Ship Vercel AI SDK agents with confidence.