HIPAA Compliance for AI Agents

An AI agent with read access to an EHR and write access to an email API has everything it needs to cause a reportable HIPAA breach. Runtime authorization prevents that scenario from ever executing.

Last updated: April 2026

HIPAA and AI agents

The Health Insurance Portability and Accountability Act (HIPAA) requires covered entities and their business associates to implement administrative, physical, and technical safeguards to protect the confidentiality, integrity, and availability of protected health information (PHI). When an AI agent accesses, processes, transmits, or stores PHI, every tool call that touches patient data must comply with the HIPAA Security Rule (45 CFR Part 164, Subpart C) and Privacy Rule (45 CFR Part 164, Subpart E). An unauthorized disclosure by an AI agent is a breach—the same as if a human employee emailed patient records to the wrong person.

Why AI agents are a HIPAA risk

Healthcare organizations are deploying AI agents for clinical documentation, patient communication, claims processing, prior authorization, and administrative tasks. Each of these workflows involves PHI. The risk isn't that agents will deliberately violate HIPAA—it's that their non-deterministic behavior creates unpredictable data flows.

PHI in agent outputs

An agent summarizing patient records might include PHI in a response sent to an unauthorized system. Without output redaction, patient names, diagnoses, and treatment details can leak through API calls, logs, or generated documents.

Excessive EHR access

An agent with broad EHR access might query records beyond the minimum necessary for its task. HIPAA's minimum necessary standard (45 CFR 164.502(b)) requires limiting PHI access to what's needed for the specific function.

Cross-system data flows

Agents that integrate multiple systems (EHR, billing, communication) can move PHI between systems in ways that weren't anticipated during the privacy impact assessment. Each data flow requires authorization.

Prompt injection and PHI

A prompt injection attack could cause an agent to exfiltrate PHI by embedding it in outbound API calls. Without tool-call authorization, the agent follows the injected instruction and creates a reportable breach.

HIPAA rule mapping

The HIPAA Security Rule specifies technical safeguards in 45 CFR 164.312, and the Privacy Rule adds administrative requirements, including documentation retention, in 45 CFR 164.530. Here's how each applies to AI agents and how Veto's runtime authorization satisfies the requirement.

164.312(a)(1)

Access control

Implement technical policies and procedures for electronic information systems that maintain electronic protected health information to allow access only to those persons or software programs that have been granted access rights.

Veto implementation

  • Per-agent access policies: Each agent has a YAML policy defining exactly which EHR endpoints, patient record fields, and operations it can access. Access denied by default.
  • Minimum necessary enforcement: Policies restrict queries to specific record types, date ranges, and patient contexts. An agent processing a billing claim only accesses billing-relevant fields.
  • Unique agent identification: 164.312(a)(2)(i) requires unique user identification. Each agent has a unique API key and identity, enabling per-agent access tracking.
  • Automatic logoff equivalent: 164.312(a)(2)(iii) requires automatic logoff. Agent sessions are stateless — each tool call is independently authorized. No persistent sessions to expire.
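The access-control bullets above might translate into a policy like the following sketch. The schema and field names are illustrative assumptions, not Veto's actual policy format.

```yaml
# Illustrative per-agent policy: default-deny, minimum-necessary scoping
agent: billing-claims-agent
default: deny                        # nothing is allowed unless a rule permits it
rules:
  - tool: ehr.get_patient_record
    effect: allow
    constraints:
      fields: [billing_codes, dates_of_service, payer_id]  # billing-relevant only
      date_range: last_90_days
  - tool: ehr.update_record
    effect: deny                     # this agent has no write access
```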

164.312(b)

Audit controls

Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.

Veto implementation

  • Comprehensive decision logging: Every tool call is logged: agent identity, tool name, arguments (with PHI redacted from logs), policy matched, outcome, and timestamp.
  • Tamper-evident logs: Decision logs are append-only. Historical entries cannot be modified or deleted by agents or operators.
  • Configurable retention: Log retention policies can be set to meet HIPAA's six-year documentation requirement (45 CFR 164.530(j)).
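A decision log entry for a blocked call might look like the sketch below. The exact field names are assumptions, but the key point holds: only argument keys are recorded, never PHI-bearing values.

```yaml
# Illustrative decision log entry (field names are assumptions, not Veto's schema)
timestamp: 2026-04-02T14:31:07Z
agent_id: clinical-docs-agent
tool: email.send
argument_keys: [to, subject, body]   # keys only; values are redacted before logging
policy_matched: block-external-phi
outcome: denied
```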

164.312(c)(1)

Integrity controls

Implement policies and procedures to protect electronic protected health information from improper alteration or destruction.

Veto implementation

  • Write operation controls: Policies can allow read-only access to PHI while blocking all write, update, and delete operations. An agent can query records but cannot modify them.
  • Argument validation: When write access is permitted, policies validate the specific fields and values being written. Prevent agents from modifying diagnosis codes, medication lists, or other critical fields.
  • Human approval for modifications: Sensitive record modifications can be routed to human approval before execution, ensuring clinical oversight of AI-initiated changes.
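As a sketch of the integrity controls above, a policy might permit reads while routing writes through human approval and forbidding changes to critical fields. The schema is hypothetical.

```yaml
# Illustrative integrity policy: reads allowed, writes approval-gated
rules:
  - tool: ehr.get_record
    effect: allow                    # read-only access to PHI
  - tool: ehr.update_record
    effect: require_approval         # a human must approve before execution
    constraints:
      forbidden_fields: [diagnosis_codes, medication_list]  # never writable
```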

164.312(d)

Person or entity authentication

Implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.

Veto implementation

  • API key authentication: Each agent authenticates with a unique API key scoped to a specific project and organization. Identity is verified on every request.
  • Agent identity in authorization: Authorization policies are evaluated per-agent. The same tool call may be allowed for a clinical documentation agent but denied for a billing agent.
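Per-agent evaluation means the same tool call can resolve differently depending on the authenticated identity. A hypothetical sketch (schema illustrative):

```yaml
# Same tool, different agents: the decision depends on the verified identity
policies:
  clinical-docs-agent:
    - tool: ehr.get_clinical_notes
      effect: allow
  billing-agent:
    - tool: ehr.get_clinical_notes
      effect: deny                   # billing work never needs clinical notes
```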

164.312(e)(1)

Transmission security

Implement technical security measures to guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network.

Veto implementation

  • Output redaction: Before any outbound tool call, the authorization layer can strip PHI from arguments. An agent can reference a patient encounter in its reasoning but the outbound API call contains only de-identified data.
  • Destination whitelisting: Policies restrict which external systems the agent can transmit data to. PHI can only flow to authorized endpoints — no sending patient data to third-party APIs.
  • In-process evaluation: The Veto SDK runs locally — PHI never leaves your infrastructure for policy evaluation. No data is transmitted to Veto's servers for authorization decisions.
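A transmission-security policy combining destination whitelisting with outbound redaction might look like this sketch. The endpoint URL, field names, and schema are illustrative assumptions.

```yaml
# Illustrative outbound policy: whitelisted destination plus PHI stripping
rules:
  - tool: http.post
    effect: allow
    constraints:
      allowed_destinations:
        - https://claims.example-clearinghouse.com   # authorized endpoint only
    transforms:
      redact_phi: [name, mrn, dob, ssn]              # strip identifiers pre-send
```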

164.530(j)

Documentation and retention

A covered entity must maintain the policies and procedures required by HIPAA in written or electronic form, and retain documentation for six years from the date of creation or the date when it was last in effect.

Veto implementation

  • Policy-as-code: YAML policies stored in version control serve as the written documentation of access controls. Every version is retained in git history indefinitely.
  • Decision log retention: Authorization decision logs are retained per your configured retention policy. Set to six years or longer to meet HIPAA requirements.
  • Change documentation: Git commit history documents every policy change with author, date, and reason — satisfying the requirement to document policy changes.

PHI protection with output redaction

Output redaction is the most critical HIPAA control for AI agents. Even when an agent needs to read PHI to perform its task, the authorization layer ensures PHI is stripped from any outbound action.

How output redaction works

1. Agent reads a patient record from the EHR to answer a clinical question.
2. Agent formulates a response and initiates an outbound tool call (email, API, message).
3. Veto intercepts the tool call and inspects arguments for PHI patterns (names, MRNs, dates of birth, diagnoses, SSNs).
4. PHI is redacted or the tool call is blocked entirely, depending on policy.
5. The decision is logged (without PHI) for the audit trail; the agent receives a sanitized or blocked response.
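The flow above can be sketched in a few lines of Python. Everything here is a standalone illustration, not the Veto SDK: the patterns cover only three identifier types, and a production deployment would detect all 18 HIPAA identifiers and evaluate real policies.

```python
import re

# Minimal sketch of redaction at the authorization boundary (steps 3-4 above).
# Patterns are illustrative; real deployments cover all 18 HIPAA identifiers.
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[-\s]?\d{6,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def authorize_outbound(args: dict, mode: str = "redact") -> dict:
    """Inspect tool-call arguments for PHI; redact or block per policy."""
    sanitized, phi_found = {}, False
    for key, value in args.items():
        text = str(value)
        for label, pattern in PHI_PATTERNS.items():
            if pattern.search(text):
                phi_found = True
                text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        sanitized[key] = text
    if phi_found and mode == "block":
        return {"outcome": "denied", "args": None}      # block the call entirely
    return {"outcome": "allowed", "args": sanitized}    # PHI stripped from args

decision = authorize_outbound(
    {"to": "referrals@clinic.example", "body": "Patient MRN-0012345, SSN 123-45-6789"}
)
```

In `redact` mode the call proceeds with sanitized arguments; in `block` mode any PHI hit denies the call outright, matching step 4's "depending on policy."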

Business Associate Agreements

Under HIPAA, any entity that creates, receives, maintains, or transmits PHI on behalf of a covered entity is a business associate and must execute a BAA (45 CFR 164.502(e), 164.504(e)).

BAA implications for AI agent authorization

Veto SDK (local mode)

The SDK runs entirely within your infrastructure. PHI never leaves your environment for policy evaluation. In local-only mode, Veto does not access, store, or transmit PHI and does not require a BAA.

Veto Cloud (dashboard, approvals)

If you use cloud features and decision logs contain PHI-adjacent metadata (patient encounter IDs, timestamps of PHI access), a BAA may be required. Contact us for BAA execution. Decision logs can be configured to redact all PHI-identifiable fields before transmission to the cloud.

LLM provider BAAs

Your BAA with the LLM provider (OpenAI, Anthropic, etc.) covers the model's processing of PHI. Veto's authorization layer is independent — it controls what the agent does with PHI, not what the model processes. Both layers are required.

Breach notification and AI agents

Under the HIPAA Breach Notification Rule (45 CFR 164.400-414), unauthorized acquisition, access, use, or disclosure of PHI is presumed to be a breach unless the covered entity demonstrates a low probability that PHI was compromised. An AI agent that exfiltrates PHI via an unauthorized tool call triggers the same notification obligations as any other breach.

Without runtime authorization

Agent exfiltrates PHI. You discover it in application logs days later. You must notify affected individuals without unreasonable delay (and no later than 60 days), notify HHS (within 60 days for breaches affecting 500+ individuals, otherwise in an annual log), and notify the media if more than 500 residents of a state are affected. Average cost: $4.88M per breach (IBM, 2024).

With runtime authorization

Agent attempts to transmit PHI to unauthorized destination. Tool call is blocked at the authorization boundary. PHI never leaves the authorized perimeter. Decision logged for investigation. No breach occurred. No notification required.

Frequently asked questions

Do AI agents need to be covered under a BAA?
The AI agent itself isn't a business associate — it's a software component. However, the services that power the agent may be business associates. Your LLM provider (if processing PHI), your cloud infrastructure provider, and any SaaS tools the agent uses that handle PHI all need BAAs. Veto's SDK runs locally and doesn't require a BAA in local-only mode. Cloud features may require a BAA if decision logs contain PHI-adjacent data.
How does Veto enforce the minimum necessary standard?
The minimum necessary standard (45 CFR 164.502(b)) requires limiting PHI access to the minimum needed for the intended purpose. Veto policies enforce this by restricting which EHR endpoints, record types, and fields each agent can access. A billing agent can access billing codes and dates of service but not clinical notes. A documentation agent can access notes for the current encounter but not historical records.
What happens if an agent accidentally accesses PHI it shouldn't?
Veto prevents this at the tool-call level. If the agent's tool call includes a query for records outside its authorized scope, the authorization layer blocks the call before it executes. The decision log records the attempted access for investigation. If the agent already has PHI in its context (from a previous authorized read), output redaction prevents it from transmitting that PHI in subsequent tool calls.
Does Veto store PHI in its decision logs?
By default, decision logs record tool names, argument keys, policy matched, and outcome — not the full argument values. For healthcare deployments, you can configure additional redaction to strip any PHI-identifiable fields from logs. In local-only SDK mode, logs stay within your infrastructure entirely.
How does output redaction detect PHI?
Veto's output redaction inspects tool-call arguments for PHI patterns: names, medical record numbers, dates of birth, Social Security numbers, diagnosis codes, and other HIPAA identifiers. Pattern matching is configurable — you define which PHI categories to detect and whether to redact or block the entire tool call. This operates at the authorization boundary, before the tool call executes.
What's the penalty for a HIPAA breach caused by an AI agent?
The same as any other HIPAA breach. Penalties range from $141 to $2,134,831 per violation depending on the level of negligence, with an annual maximum of $2,134,831 per violation category. Criminal penalties can include fines up to $250,000 and imprisonment. The OCR does not distinguish between human-caused and AI-caused breaches — unauthorized disclosure is unauthorized disclosure.

A breach from an AI agent is still a breach. Prevent it at the source.