Insurance AI Agent Guardrails

Runtime authorization for insurance AI agents. Control claims processing, enforce underwriting limits, prevent fraud, and maintain regulatory compliance without modifying your agent's code.

Insurance AI agent guardrails are runtime controls that intercept and evaluate tool calls made by autonomous AI agents in insurance workflows. Unlike prompt-based instructions, Veto guardrails enforce authorization policies independently of the agent's reasoning, providing deterministic control over claims processing, underwriting decisions, and policy issuance.
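
To make the interception model concrete, here is a minimal sketch of the pattern in TypeScript. The filename and every identifier (withGuardrail, evaluatePolicy, waitForHumanApproval) are illustrative, not Veto's actual SDK API: a guardrail wraps each tool so every call is evaluated against policy before the underlying implementation runs.

guardrail_interception.ts
type Decision =
  | { action: 'allow' }
  | { action: 'deny'; message: string }
  | { action: 'require_approval'; approvers: string[] };

type Tool<A, R> = (args: A) => Promise<R>;

// Hypothetical stand-ins for the policy engine and approval workflow.
declare function evaluatePolicy<A>(toolName: string, args: A): Promise<Decision>;
declare function waitForHumanApproval<A>(
  toolName: string,
  args: A,
  approvers: string[]
): Promise<void>;

// Wrap a tool so every call is checked against policy before it executes.
function withGuardrail<A, R>(toolName: string, tool: Tool<A, R>): Tool<A, R> {
  return async (args: A) => {
    const decision = await evaluatePolicy(toolName, args);
    if (decision.action === 'deny') {
      throw new Error(`Blocked by guardrail: ${decision.message}`);
    }
    if (decision.action === 'require_approval') {
      // The call pauses here until a designated approver responds.
      await waitForHumanApproval(toolName, args, decision.approvers);
    }
    return tool(args); // reached only once policy allows the call
  };
}

// The agent sees the same tool signature; enforcement is independent of its reasoning.
const approveClaim = withGuardrail(
  'approve_claim',
  async (args: { claimId: string; amount: number }) => {
    /* existing claims system call */
    return { status: 'approved' as const };
  }
);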

The insurance AI risk landscape

Insurance companies are rapidly deploying AI agents to process claims, assess risk, handle underwriting, and interact with policyholders. These agents have access to sensitive PII, financial systems, and decision-making authority that can result in significant financial exposure and regulatory liability.

A claims processing agent that approves fraudulent claims costs money. An underwriting agent that offers rates below guidelines creates actuarial risk. A customer service agent that exposes PII violates compliance requirements. The insurance industry faces unique challenges where AI decisions carry both financial and regulatory weight.

Claims fraud

AI agents may approve fraudulent claims or miss red flags that human reviewers would catch.

PII exposure

Customer data flowing through AI agents without proper access controls creates compliance risk.

Regulatory violations

NAIC model regulations, state insurance laws, and federal requirements demand audit trails and approval workflows.

Real-world insurance scenarios

Insurance AI agents handle high-stakes decisions across claims, underwriting, and customer service. Each scenario requires specific guardrails to prevent errors, fraud, and compliance violations.

Claims processing guardrails

Claims AI agents review submissions, assess damages, and approve payments. Guardrails enforce approval thresholds, route high-value claims to human review, and block payouts that exceed policy limits or violate coverage terms.

Payment limits · Coverage validation · Fraud scoring · Human escalation

Underwriting AI authorization

Underwriting agents assess risk and quote premiums. Guardrails prevent quotes below minimum thresholds, enforce risk class restrictions, and require human approval for non-standard risks or large policy values (see the sketch following these scenarios).

Premium floors · Risk class limits · Coverage caps · Approval routing

Fraud detection limits

Fraud detection agents flag suspicious claims but cannot block payments unilaterally. Guardrails enforce review requirements, prevent automatic denials without human confirmation, and ensure due process for policyholders.

Mandatory review · Escalation rules · Appeal routing · Audit logging

Policy issuance controls

Policy issuance agents generate contracts and bind coverage. Guardrails prevent issuing policies outside authority limits, enforce document review requirements, and validate that all required disclosures are included.

Authority limits · Disclosure checks · Document validation · Signature workflow
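
Returning to the underwriting scenario above: a premium-floor check reduces to a deterministic comparison before the quote tool runs. This sketch uses hypothetical thresholds, not actual underwriting guidelines.

premium_floor_check.ts
interface QuoteArgs {
  premium: number;
  riskClass: 'preferred' | 'standard' | 'substandard';
  coverageAmount: number;
}

// Hypothetical minimum annual premiums per risk class; real floors
// come from the carrier's underwriting guidelines.
const PREMIUM_FLOORS: Record<QuoteArgs['riskClass'], number> = {
  preferred: 1200,
  standard: 1800,
  substandard: 3000,
};

// Hypothetical cap on coverage an agent may quote without human sign-off.
const MAX_AUTOMATED_COVERAGE = 2_000_000;

type QuoteCheck = { allowed: true } | { allowed: false; reason: string };

function checkQuote(args: QuoteArgs): QuoteCheck {
  if (args.premium < PREMIUM_FLOORS[args.riskClass]) {
    return { allowed: false, reason: `Premium below ${args.riskClass} floor` };
  }
  if (args.coverageAmount > MAX_AUTOMATED_COVERAGE) {
    return {
      allowed: false,
      reason: 'Coverage exceeds automated authority; route to a human underwriter',
    };
  }
  return { allowed: true };
}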

Claims approval policies

Define authorization rules for claims processing with simple YAML policies. Guardrails execute before the agent's tool calls, ensuring consistent enforcement regardless of the AI model's reasoning.

veto/policies.yaml
rules:
  - name: claims_approval_threshold
    description: Require human approval for claims over $25,000
    tool: approve_claim
    when: args.amount > 25000
    action: require_approval
    message: "Claims over $25,000 require supervisor review"
    approvers: ["claims-supervisor@insurance.com"]

  - name: block_payment_limit_exceeded
    description: Block payments that exceed policy coverage
    tool: process_payment
    when: args.amount > args.policy_coverage_limit
    action: deny
    message: "Payment amount exceeds policy coverage limit"

  - name: fraud_flag_review
    description: Route flagged claims to SIU review
    tool: approve_claim
    when: args.fraud_score > 0.7
    action: require_approval
    message: "High fraud score requires SIU review"
    approvers: ["siu-team@insurance.com"]

  - name: pii_access_logging
    description: Log all PII access for compliance
    tool: get_customer_data
    action: allow
    log: true
    log_fields: ["args.customer_id", "context.agent_id"]

  - name: prevent_automatic_denial
    description: Prevent automatic claim denials
    tool: deny_claim
    when: args.automated == true
    action: require_approval
    message: "Automated denials require human review"

  - name: policy_issuance_authority
    description: Enforce underwriting authority limits
    tool: issue_policy
    when: args.premium > 100000
    action: require_approval
    message: "Policies over $100k require senior underwriter approval"
    approvers: ["senior-underwriter@insurance.com"]

Policies are version-controlled with your code, reviewed in pull requests, and deployed through your existing CI/CD pipeline. No database of rules to manage, no UI to configure.
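
Because policies live in the repository, they can also be linted in CI before deployment. The sketch below assumes the rule schema shown above and the js-yaml package; the allowed-action list is inferred from this page's examples, not Veto's published schema, and the script is an illustration of policy-as-code validation rather than a Veto-provided tool.

lint_policies.ts
import { readFileSync } from 'node:fs';
import yaml from 'js-yaml'; // assumed available as a dev dependency

interface PolicyRule {
  name?: string;
  tool?: string;
  when?: string;
  action?: string;
  approvers?: string[];
}

// Actions inferred from the examples above.
const ALLOWED_ACTIONS = new Set(['allow', 'deny', 'require_approval']);

// Validate each rule's required fields before the pipeline deploys it.
function lintPolicies(path: string): string[] {
  const doc = yaml.load(readFileSync(path, 'utf8')) as { rules?: PolicyRule[] };
  const errors: string[] = [];
  (doc.rules ?? []).forEach((rule, i) => {
    const label = rule.name ?? `rule #${i}`;
    if (!rule.name) errors.push(`rule #${i}: missing name`);
    if (!rule.tool) errors.push(`${label}: missing tool`);
    if (!rule.action || !ALLOWED_ACTIONS.has(rule.action)) {
      errors.push(`${label}: action must be one of ${[...ALLOWED_ACTIONS].join(', ')}`);
    }
  });
  return errors;
}

const problems = lintPolicies('veto/policies.yaml');
if (problems.length > 0) {
  console.error(problems.join('\n'));
  process.exit(1); // fail the CI job
}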

Insurance regulatory compliance

Insurance AI agents operate under strict regulatory oversight. Veto guardrails help enforce compliance requirements from state insurance departments, NAIC model regulations, and federal standards.

NAIC requirements

The National Association of Insurance Commissioners has issued AI governance principles requiring transparency, accountability, and oversight for AI in insurance decisions.

  • Algorithmic accountability documentation
  • Human oversight for high-impact decisions
  • Audit trails for regulatory review

State regulations

States like Colorado, New York, and California have enacted specific requirements for AI in insurance underwriting and claims.

  • Unfair discrimination testing
  • Consumer notice requirements
  • Adverse action explanations

Data privacy

Insurance agents handle sensitive PII including health information, financial data, and demographic details subject to multiple privacy frameworks.

  • Access logging for all PII operations
  • Data minimization enforcement
  • Cross-border data transfer controls

Fair claims handling

Unfair claims practices acts require prompt investigation, fair settlement, and proper documentation of claim decisions.

  • Timeline enforcement for responses
  • Mandatory documentation of decisions
  • Appeal routing for denials

Disparate impact analysis for NAIC compliance

The NAIC Model Bulletin (adopted by 24+ states) and Colorado SB21-169 require insurers to prove their AI systems do not produce unfairly discriminatory outcomes. The four-fifths (80%) rule is the standard test: if a protected group's selection rate falls below 80% of the reference group's rate, potential disparate impact exists and requires investigation. For example, a 63% approval rate measured against a 90% reference rate yields a ratio of 0.70, below the 0.8 threshold and grounds for further review.

disparate_impact_analysis.py
import pandas as pd
from scipy import stats

class InsuranceAICompliance:
    """Compliance testing framework for insurance AI systems."""

    def __init__(self, decision_threshold=0.8):
        self.decision_threshold = decision_threshold  # 80% rule
        self.protected_characteristics = [
            'race', 'national_origin', 'sex', 'religion',
            'sexual_orientation', 'disability', 'gender_identity'
        ]

    def disparate_impact_analysis(self, df, outcome_col, protected_col, reference_group):
        """
        Calculate disparate impact ratio using the four-fifths rule.

        Required by Colorado SB21-169 and NAIC Model Bulletin.
        """
        results = {}
        groups = df[protected_col].unique()

        # Calculate selection rate for reference group
        ref_mask = df[protected_col] == reference_group
        ref_selection_rate = df.loc[ref_mask, outcome_col].mean()

        for group in groups:
            if group == reference_group:
                continue

            group_mask = df[protected_col] == group
            group_selection_rate = df.loc[group_mask, outcome_col].mean()

            # Disparate impact ratio
            impact_ratio = group_selection_rate / ref_selection_rate if ref_selection_rate > 0 else 0

            # Statistical significance test (chi-square)
            contingency = pd.crosstab(
                df[protected_col] == group,
                df[outcome_col]
            )
            chi2, p_value, dof, expected = stats.chi2_contingency(contingency)

            results[group] = {
                'selection_rate': group_selection_rate,
                'impact_ratio': impact_ratio,
                'compliant': impact_ratio >= self.decision_threshold,
                'statistical_significance': p_value < 0.05,
                'p_value': p_value
            }

        return results

    def bias_audit_report(self, df, outcome_cols):
        """Generate comprehensive bias audit report for regulatory filing."""
        report = {
            'audit_date': pd.Timestamp.now().isoformat(),
            'total_decisions': len(df),
            'findings': [],
            'recommendations': []
        }

        for outcome in outcome_cols:
            for protected in self.protected_characteristics:
                if protected not in df.columns:
                    continue

                results = self.disparate_impact_analysis(
                    df, outcome, protected,
                    df[protected].mode()[0]
                )

                for group, metrics in results.items():
                    if not metrics['compliant']:
                        report['findings'].append({
                            'protected_class': protected,
                            'group': group,
                            'outcome': outcome,
                            'impact_ratio': metrics['impact_ratio'],
                            'finding': f'Potential disparate impact (ratio: {metrics["impact_ratio"]:.2f})'
                        })

        return report

Run this analysis quarterly and document all findings for regulatory review. Colorado requires annual CRO attestations confirming AI systems do not unfairly discriminate.

NAIC AI governance documentation

NY DFS Circular Letter No. 7 (2024) requires insurers to maintain internal documentation explaining AI model functionality, input data, assumptions, and how outputs influence decisions. The NAIC Model Bulletin mandates a formal AI Systems (AIS) Program with documented policies, oversight committees, and audit schedules.

ai_governance_framework.ts
// Minimal stubs for the referenced record types; shapes are illustrative.
type GovernancePolicy = { name: string; version: string };
type CommitteeMember = { name: string; role: string };
type AuditSchedule = { frequency: 'quarterly' | 'annual'; nextAuditDate: Date };
type VendorAssessment = { vendor: string; assessmentDate: Date; status: string };
type ValidationReport = { modelId: string; passed: boolean; notes: string };
type Feature = { name: string; source: string };
interface ValidationResult { passed: boolean; notes: string }
interface BiasTestResult { overallCompliance: boolean; findings: string[] }

interface AISystemProgram {
  // Core governance documentation
  governanceFramework: {
    policies: GovernancePolicy[];
    oversightCommittee: CommitteeMember[];
    riskManagementProcess: string;
    internalAuditSchedule: AuditSchedule;
  };

  // AI model registry
  modelRegistry: AIModelEntry[];

  // Third-party vendor management
  thirdPartyVendors: VendorAssessment[];

  // Testing and validation records
  biasTestingResults: BiasTestResult[];
  validationReports: ValidationReport[];
}

interface AIModelEntry {
  modelId: string;
  modelName: string;
  version: string;
  purpose: 'underwriting' | 'pricing' | 'claims' | 'fraud_detection' | 'marketing';
  riskLevel: 'high' | 'medium' | 'low';

  // Data inputs (ECDIS tracking required)
  dataSources: DataSource[];
  features: Feature[];

  // Validation status
  validationStatus: 'validated' | 'pending' | 'failed';
  lastValidationDate: Date;
  validationResult: ValidationResult;

  // Bias testing per Colorado SB21-169
  lastBiasTestDate: Date;
  biasTestResults: BiasTestResult;

  // Explainability method
  explanationMethod: 'SHAP' | 'LIME' | 'counterfactual' | 'rule-based';
  adverseActionReasons: string[];

  // Ownership and approval
  owner: string;
  approvedBy: string;
  approvalDate: Date;
}

interface DataSource {
  name: string;
  type: 'traditional' | 'ECDIS'; // External Consumer Data and Information Sources
  vendor?: string;
  dataQualityScore: number;
  complianceStatus: 'compliant' | 'under_review' | 'non_compliant';
}

// Example: High-risk underwriting model registry entry
const underwritingModel: AIModelEntry = {
  modelId: 'UW-2024-001',
  modelName: 'Risk Assessment Classifier',
  version: '3.2.1',
  purpose: 'underwriting',
  riskLevel: 'high',
  dataSources: [
    { name: 'MIB Database', type: 'traditional', dataQualityScore: 0.95, complianceStatus: 'compliant' },
    { name: 'Telematics Feed', type: 'ECDIS', vendor: 'DriveScore Inc.', dataQualityScore: 0.87, complianceStatus: 'compliant' }
  ],
  features: [],
  validationStatus: 'validated',
  lastValidationDate: new Date('2025-01-15'),
  validationResult: { passed: true, notes: 'Independent validation by actuarial team' },
  lastBiasTestDate: new Date('2025-02-01'),
  biasTestResults: { overallCompliance: true, findings: [] },
  explanationMethod: 'SHAP',
  adverseActionReasons: [
    'Driving history indicates elevated risk',
    'Vehicle type not eligible for preferred rates'
  ],
  owner: 'Underwriting Analytics Team',
  approvedBy: 'Chief Actuary, FCAS MAAA',
  approvalDate: new Date('2024-12-01')
};

Human-in-the-loop for adverse decisions

California SB 1120, Florida HB 527, and Arizona HB 2175 now legally prohibit AI from being the sole decision-maker for claim denials. These states require licensed professional review and certification for adverse determinations. Your guardrails must enforce human review workflows with documented certification statements.

adverse_decision_workflow.ts
// Minimal stub types so the sketch type-checks; shapes are illustrative.
type ClaimData = { policyState: string; [key: string]: unknown };

interface AIAssessment {
  recommendation: 'deny' | 'approve' | 'investigate';
  timestamp: Date;
}

interface HumanDecision {
  decision: 'deny' | 'approve' | 'investigate';
  timestamp: Date;
}

interface DecisionResult {
  finalDecision: string;
  humanReviewer: string;
  reviewerLicense: string;
  certificationStatement: string;
  reviewTimestamp: Date;
  aiRecommendationRecorded: string;
  humanOverride: boolean;
  auditTrail: {
    aiAssessmentDate: Date;
    humanReviewDate: Date;
    totalProcessingTime: number;
  };
}

interface AdverseDecisionRequest {
  claimId: string;
  policyId: string;
  aiRecommendation: 'deny' | 'approve' | 'investigate';
  aiConfidence: number;
  supportingData: ClaimData;
  denialReasons: string[];
}

interface HumanReviewer {
  id: string;
  name: string;
  licenseNumber: string;
  licenseState: string;
  qualifications: string[];
  // Handle into the human review workflow (queue, UI, notifications); stubbed here.
  review(input: {
    aiRecommendation: AIAssessment;
    claimData: ClaimData;
    denialReasons: string[];
  }): Promise<HumanDecision>;
}

class AdverseDecisionWorkflow {
  // Collaborators are injected; a real system would wire these to its own services.
  constructor(
    private aiModel: { evaluate(data: ClaimData): Promise<AIAssessment> },
    private reviewerRegistry: { findByState(state: string): Promise<HumanReviewer[]> },
    private loadBalancer: { assign(reviewers: HumanReviewer[]): HumanReviewer }
  ) {}

  /**
   * Process adverse decisions with mandatory human review.
   * Required by CA SB 1120, FL HB 527, AZ HB 2175.
   */
  async processAdverseDecision(request: AdverseDecisionRequest): Promise<DecisionResult> {
    // AI provides recommendation only
    const aiAssessment = await this.aiModel.evaluate(request.supportingData);

    // Assign qualified human reviewer (licensed in applicable state)
    const reviewer = await this.assignQualifiedReviewer(
      request.supportingData.policyState,
      request.aiRecommendation
    );

    // Human reviewer makes final determination
    const humanDecision = await reviewer.review({
      aiRecommendation: aiAssessment,
      claimData: request.supportingData,
      denialReasons: request.denialReasons
    });

    // Generate compliance documentation
    return {
      finalDecision: humanDecision.decision,
      humanReviewer: reviewer.id,
      reviewerLicense: reviewer.licenseNumber,
      certificationStatement: 'AI was not the sole decision-maker in this adverse determination',
      reviewTimestamp: new Date(),
      aiRecommendationRecorded: aiAssessment.recommendation,
      humanOverride: humanDecision.decision !== aiAssessment.recommendation,
      auditTrail: {
        aiAssessmentDate: aiAssessment.timestamp,
        humanReviewDate: humanDecision.timestamp,
        totalProcessingTime: this.calculateDuration(aiAssessment.timestamp, humanDecision.timestamp)
      }
    };
  }

  private async assignQualifiedReviewer(state: string, decisionType: string): Promise<HumanReviewer> {
    // Look up licensed professionals for the policy state
    const reviewers = await this.reviewerRegistry.findByState(state);

    // Filter by qualification requirements
    const qualified = reviewers.filter(r =>
      decisionType === 'deny' && state === 'CA'
        ? r.qualifications.includes('licensed_clinician')
        : r.qualifications.includes('claims_adjuster')
    );

    return this.loadBalancer.assign(qualified);
  }

  private calculateDuration(startDate: Date, endDate: Date): number {
    return endDate.getTime() - startDate.getTime(); // elapsed milliseconds
  }
}

The certification statement "AI was not the sole decision-maker" is now legally required in multiple states. Maintain audit trails with both AI recommendations and human final decisions for regulatory review and litigation defense.
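
One lightweight way to persist that trail, sketched here under the assumption of file-based storage rather than any Veto feature, is an append-only record per adverse determination, mirroring the DecisionResult fields above:

adverse_decision_audit.ts
import { appendFileSync } from 'node:fs';

// Shape mirrors the DecisionResult returned by the workflow above.
interface AdverseDecisionAuditRecord {
  claimId: string;
  aiRecommendation: string;
  finalDecision: string;
  reviewerLicense: string;
  certificationStatement: string;
  recordedAt: string; // ISO 8601
}

// Append-only JSONL file; a production system would use WORM storage
// or a database with immutability guarantees instead.
function recordAdverseDecision(record: AdverseDecisionAuditRecord): void {
  appendFileSync('adverse_decisions.jsonl', JSON.stringify(record) + '\n');
}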

Frequently asked questions

How do guardrails help with claims AI compliance?
Guardrails enforce approval thresholds, create audit trails for every decision, and route high-value or flagged claims to human reviewers. This provides the documentation and oversight that regulators expect while maintaining the efficiency gains from AI automation.

Can guardrails prevent discriminatory underwriting practices?
Guardrails can enforce that underwriting decisions include required factors, exclude prohibited factors, and route non-standard cases for human review. While guardrails cannot detect all forms of discrimination, they provide deterministic controls that complement fairness testing and bias monitoring systems.

What about PII and sensitive customer data?
Guardrails log all PII access with full context, enforce data minimization by blocking unnecessary field retrieval, and can redact sensitive information from agent outputs. Every data access is recorded with timestamp, agent ID, and purpose, creating the audit trail required for compliance reviews.

How do approval workflows work for claims?
When a guardrail requires approval, the tool call is paused and routed to designated approvers via email, Slack, or the Veto dashboard. The agent waits for a response before continuing. Approvers can see full context including the tool arguments, claim details, and the rule that triggered the review.

Do guardrails work with existing claims management systems?
Yes. The Veto SDK wraps your existing tool implementations. Your claims system API calls, database queries, and document operations stay the same. Guardrails intercept at the tool boundary, requiring no changes to your underlying systems or data models.

Insurance AI that operates within bounds.