Is Your AI Tool GDPR or HIPAA Compliant? How to Check
A practical checklist for non-legal professionals who need to figure out whether the AI tools their team uses can actually handle regulated data.
Published March 14, 2026
30-Second Briefing
If your team uses AI tools that touch personal data, you may already have compliance obligations under GDPR, HIPAA, or both. Most AI vendors claim compliance without providing the documentation to prove it. This guide walks through how to check whether your AI tools actually meet the requirements.
Who This Guide Is For
This applies to anyone who works at a company that handles health data, financial data, or data belonging to people in the European Union. That covers most mid-to-large companies, and a surprising number of small ones.
If your team uses ChatGPT, Claude, Gemini, Copilot, or any other AI tool and someone on the team has ever pasted in customer information, patient data, employee records, or anything personally identifiable, the compliance question is not theoretical. It is already relevant.
This guide is not legal advice. It is a practical framework for understanding what GDPR and HIPAA compliance actually mean when applied to AI tools, and how to check whether the tools your company uses meet those requirements.
What GDPR Compliance Means for an AI Tool
The General Data Protection Regulation (GDPR) governs how the personal data of people in the European Union is collected, processed, and stored. It applies to any organization that handles that data, regardless of where the organization is based.
When your team uses an AI tool with data that includes names, email addresses, IP addresses, or any other identifier tied to someone in the EU, the tool's vendor typically acts as a data processor and your organization as the data controller. That division of roles triggers specific GDPR requirements.
The Key GDPR Requirements for AI Tools
- Data Processing Agreement (DPA). The vendor must offer a signed DPA that specifies how they process personal data, what they do with it, how long they retain it, and what happens if there is a breach. Without a DPA, using the tool with EU personal data is non-compliant. Period.
- Lawful basis for processing. Your organization must have a valid legal basis (consent, legitimate interest, contractual necessity, etc.) for sending personal data to the AI tool. "We wanted to summarize some emails" is not a legal basis.
- Data residency or adequate transfer safeguards. If data is transferred outside the EU, the vendor must provide adequate safeguards. This typically means EU data residency options or reliance on mechanisms like Standard Contractual Clauses (SCCs) or the EU-U.S. Data Privacy Framework, which remains under legal scrutiny.
- Training data opt-out. If the vendor uses customer inputs to train or improve their AI models, that constitutes additional data processing. GDPR requires transparency about this. The tool must offer a clear opt-out mechanism, and for enterprise use, customer data should be excluded from training by default.
- Data subject rights support. The vendor must support your ability to fulfill data subject requests: access, deletion, rectification, and portability. If someone requests deletion of their data and it was processed through an AI tool, the vendor needs to be able to comply.
- Data Protection Impact Assessment (DPIA). GDPR requires a DPIA before processing that is likely to result in high risk to individuals' rights and freedoms. This applies to certain AI use cases involving personal data, particularly those involving systematic profiling, sensitive data, or large-scale processing. The vendor should provide enough documentation about their system for your organization to assess whether a DPIA is required and to complete one if so.
GDPR Compliance Checklist for AI Tools
- The vendor offers a signed Data Processing Agreement (DPA).
- EU data residency is available, or adequate transfer safeguards (SCCs, DPF) are documented.
- Customer data is excluded from model training by default, or never used for training at all.
- Data subject rights (access, deletion, portability) are supported.
- Data retention periods are documented and reasonable.
- The vendor provides sufficient documentation to complete a DPIA.
- Sub-processors are listed and their compliance is documented.
- Breach notification processes and timelines are specified in the DPA.
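If you are screening more than one vendor, it can help to carry this checklist as structured data rather than prose. Below is a minimal sketch in Python; the class, field names, and sample vendor are all illustrative, not drawn from any real assessment.

```python
from dataclasses import dataclass

# Illustrative GDPR review record for an AI vendor. The fields and
# the sample answers below are hypothetical, not a real assessment.
@dataclass
class GdprReview:
    vendor: str
    tier: str                               # compliance is tier-specific
    dpa_signed: bool = False
    eu_residency_or_safeguards: bool = False
    training_excluded_by_default: bool = False
    data_subject_rights_supported: bool = False
    retention_documented: bool = False
    dpia_docs_available: bool = False
    subprocessors_listed: bool = False
    breach_terms_in_dpa: bool = False

    def gaps(self) -> list[str]:
        """Names of every checklist item still unmet."""
        return [name for name, value in vars(self).items()
                if value is False]

# Hypothetical vendor partway through review: two items confirmed.
review = GdprReview(vendor="ExampleAI", tier="Enterprise",
                    dpa_signed=True, eu_residency_or_safeguards=True)
print(review.gaps())  # any remaining gap blocks use with EU personal data
```

A spreadsheet does the same job. The point is that each checklist row becomes an explicit yes or no that someone has to answer before the tool touches regulated data.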
What HIPAA Compliance Means for an AI Tool
The Health Insurance Portability and Accountability Act (HIPAA) governs how Protected Health Information (PHI) is handled in the United States. It applies to covered entities (healthcare providers, health plans, healthcare clearinghouses) and their business associates (vendors that handle PHI on their behalf).
When a covered entity uses an AI tool that touches electronic Protected Health Information (ePHI), the AI vendor becomes a business associate. That requires a specific legal agreement and specific technical safeguards.
The critical rule: if an AI vendor will not sign a Business Associate Agreement (BAA), a covered entity or business associate cannot legally use that vendor's tool to process patient data or ePHI. Without a BAA, every piece of ePHI sent to the tool is a violation.
The Key HIPAA Requirements for AI Tools
- Business Associate Agreement (BAA). The vendor must sign a BAA with your organization. This legally binds them to HIPAA standards for data handling, breach notification, and security.
- Encryption at rest and in transit. All ePHI must be encrypted using industry-standard methods: typically AES-256 for data at rest and TLS 1.2 or higher for data in transit. (A quick way to spot-check transport encryption follows this list.)
- Access controls. Role-based access control (RBAC), multi-factor authentication (MFA), and session timeout features must be enforced.
- Audit logging. The vendor must log and monitor all access and activity involving ePHI, and those logs must be available for compliance auditing.
- No training on PHI. The vendor must not use patient data to train or improve their AI models. If inputs are used for training, that is an unauthorized disclosure of PHI.
- Breach notification. The vendor must notify the covered entity of any breach of unsecured PHI within the timeframes specified in the HIPAA Breach Notification Rule.
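Encryption in transit is one of the few requirements above you can spot-check yourself. This minimal sketch connects to a vendor endpoint and reports the negotiated TLS version; the hostname is a placeholder, and a passing result is necessary but nowhere near sufficient for HIPAA.

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Open a TLS connection and return the negotiated protocol version."""
    context = ssl.create_default_context()  # verifies the certificate chain
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()             # e.g. "TLSv1.3"

# Placeholder hostname; substitute the vendor's actual API endpoint.
version = negotiated_tls_version("api.example-ai-vendor.com")
print(version)
# Anything below TLS 1.2 is a red flag for ePHI in transit.
assert version in ("TLSv1.2", "TLSv1.3"), "weak transport encryption"
```

Encryption at rest cannot be verified from the outside. For that you are relying on the vendor's audit reports, which is why the certification item in the checklist below matters.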
HIPAA Compliance Checklist for AI Tools
- The vendor signs a Business Associate Agreement (BAA).
- ePHI is encrypted at rest and in transit (AES-256 at rest, TLS 1.2+ in transit, or equivalent).
- Role-based access controls and MFA are enforced.
- Audit logs track all access to ePHI and are available for review (a sketch of a minimal audit record follows this checklist).
- The vendor does not use ePHI for model training.
- Breach notification timelines and processes are documented in the BAA.
- The vendor holds relevant security certifications (SOC 2 Type II, HITRUST, etc.).
- Data retention and disposal policies for ePHI are specified.
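To make the audit-logging item concrete, here is a sketch of the minimum fields a useful ePHI access record carries: who acted, what they did, which record was touched, and when. The schema is illustrative, not any vendor's actual log format.

```python
import json
from datetime import datetime, timezone

# Hypothetical minimum fields for one ePHI access event. Real vendors
# use their own schemas; the question is whether their logs let you
# reconstruct who saw which record, from where, and with what result.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "actor": "jdoe@example-clinic.org",  # an authenticated user, never a shared account
    "action": "prompt_submitted",
    "resource": "patient_record:48213",  # the specific ePHI involved
    "source_ip": "203.0.113.7",
    "tool": "example-ai-assistant",
    "outcome": "allowed",                # or "denied"
}
print(json.dumps(event, indent=2))
```

If a vendor cannot produce records at roughly this granularity, audit logging is a feature name, not a safeguard.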
Where the Major AI Tools Stand
Compliance status changes. Check the vendor's trust center or compliance page directly before making decisions. The following reflects publicly available information as of early 2026.
| Tool | GDPR DPA | EU Data Residency | HIPAA BAA | Training Opt-Out |
|---|---|---|---|---|
| ChatGPT (Free/Plus) | Limited | No | No | Manual opt-out available |
| ChatGPT Enterprise/Team | Yes | Available | Yes (Enterprise tier) | Data excluded from training by default |
| Claude (Consumer) | DPA via Anthropic | No (API offers residency via AWS/Azure/GCP) | No (consumer product) | Opt-out available |
| Claude (API/Enterprise) | Yes | Yes (via cloud partners) | Yes (via AWS Bedrock, Azure, GCP with BAA) | Not used for training |
| Google Gemini (Consumer) | Google Privacy terms | No | No | Activity controls available |
| Google Workspace with Gemini | Yes (Workspace DPA) | Yes (data regions) | Yes (Workspace BAA) | Not used for training in Workspace |
| Microsoft Copilot (M365) | Yes (Microsoft DPA) | Yes (EU Data Boundary) | Yes (via M365 BAA) | Not used for training |
| AWS Bedrock | Yes (AWS DPA) | Yes (EU regions) | Yes (AWS BAA) | Not used for training |
The pattern is clear: consumer-tier products rarely meet GDPR or HIPAA requirements. Enterprise tiers typically do. The gap between free and paid is not just features. It is legal compliance.
How to Check a Tool's Compliance Page
Every major AI vendor publishes compliance documentation. Here is where to find it and what to look for:
- Find the Trust Center or Security page. Search for "[vendor name] trust center" or "[vendor name] security compliance." For example: trust.anthropic.com, security.openai.com, cloud.google.com/security/compliance.
- Look for the DPA and BAA. These are typically available for download or request. If the DPA is only available on the enterprise tier, that tells you the consumer product is not sufficient for regulated data.
- Check the data handling documentation. Look for specifics: Where is data stored? Is it encrypted? Is it used for training? What are the retention periods? Vague language like "we take privacy seriously" is not compliance documentation.
- Review certifications. SOC 2 Type II, ISO 27001, HITRUST, and FedRAMP are strong indicators. The absence of any certifications is a warning sign.
- Check sub-processor lists. GDPR requires vendors to disclose their sub-processors. If the AI vendor routes data through third parties, those third parties also need to be compliant.
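None of this requires tooling, but if you are triaging several vendors, a rough first pass can be scripted. The sketch below fetches a compliance page and scans for the terms that matter. It is a heuristic, not a compliance check: many trust centers render content with JavaScript, so a plain fetch may under-report, and a keyword hit proves nothing on its own. The URL is one of the trust centers named above.

```python
import urllib.request

# Terms whose absence from a compliance page is worth a follow-up question.
SIGNALS = ["DPA", "Business Associate", "SOC 2", "ISO 27001",
           "sub-processor", "data residency", "encryption"]

def scan_trust_page(url: str) -> dict[str, bool]:
    """Fetch a page and report which signal terms appear in its HTML.

    A hit proves nothing; a miss is a prompt to ask the vendor directly.
    """
    request = urllib.request.Request(
        url, headers={"User-Agent": "compliance-triage/0.1"})
    with urllib.request.urlopen(request, timeout=15) as response:
        text = response.read().decode("utf-8", errors="replace").lower()
    return {term: term.lower() in text for term in SIGNALS}

# One of the trust centers mentioned above.
for term, found in scan_trust_page("https://trust.anthropic.com").items():
    print(f"{'found  ' if found else 'MISSING'} {term}")
```

Treat misses as questions for the vendor, not conclusions about the product.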
Questions to Ask Your Vendor
If the compliance page does not answer your questions, here are the specific questions to ask the vendor's sales or security team:
- Do you offer a signed DPA? Is it GDPR-compliant?
- Do you sign BAAs with HIPAA-covered entities?
- Where is our data processed and stored? Can we restrict it to the EU or a specific region?
- Is our data used to train, fine-tune, or improve your models? If so, can we opt out?
- What encryption standards do you use for data at rest and in transit?
- What audit logging capabilities are available?
- What is your breach notification timeline?
- What security certifications do you hold? (SOC 2 Type II, ISO 27001, HITRUST, FedRAMP)
- Who are your sub-processors, and are they compliant with the same standards?
- What happens to our data when we terminate our account?
If the vendor cannot answer these questions clearly, that is the answer. The tool is not ready for regulated data.
The Shadow AI Problem
The biggest compliance risk is usually not the tools your company formally adopted. It is the tools employees use on their own: free-tier ChatGPT, personal Claude accounts, browser-based AI writing tools. These shadow AI tools process data with no DPA, no BAA, no audit trail, and often without IT or legal even knowing they exist.
HHS updated its HIPAA guidelines in 2025 specifically to address the risk of "shadow AI tools" in healthcare settings, where employees use unsanctioned consumer AI products to process patient information.
The fix is not banning AI. The fix is providing approved alternatives and making the rules clear. Employees use unapproved tools because the approved ones are unavailable, too slow, or too restricted. Governance that ignores this reality will fail.
Special Considerations
The Re-identification Risk
Even when data is "anonymized" before being entered into an AI tool, research has demonstrated that modern AI systems can often re-identify individuals from datasets that appear anonymized. HIPAA's de-identification standards were written before these capabilities existed. Relying solely on anonymization as a compliance strategy carries more risk than many organizations realize. Consult legal counsel before treating de-identified data as fully outside HIPAA scope.
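To see why "anonymized" carries so much weight in that paragraph, consider the classic linkage problem: quasi-identifiers such as ZIP code, birth date, and sex can pin down an individual once two datasets are joined. A toy sketch with entirely fabricated rows:

```python
# Toy linkage attack with fabricated rows: a "de-identified" health
# dataset joined to a public-style dataset on quasi-identifiers alone.
health_rows = [
    {"zip": "02139", "dob": "1959-07-21", "sex": "F",
     "diagnosis": "example-condition"},
]
public_rows = [
    {"zip": "02139", "dob": "1959-07-21", "sex": "F",
     "name": "Jane Example"},
]

QUASI_IDS = ("zip", "dob", "sex")

def key(row: dict) -> tuple:
    return tuple(row[k] for k in QUASI_IDS)

# Index the public dataset by quasi-identifiers, then look up each
# "anonymous" health row. A unique match re-identifies the person.
index = {key(r): r for r in public_rows}
for r in health_rows:
    match = index.get(key(r))
    if match:
        print(f"re-identified: {match['name']} -> {r['diagnosis']}")
```

Modern models extend this far beyond three columns, which is the research finding the paragraph above refers to.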
The EU AI Act Overlap
The EU AI Act, with full high-risk enforcement beginning August 2, 2026, adds another layer. High-risk AI systems that also process personal data will often need to address both GDPR and the AI Act requirements. The two frameworks overlap in some areas but are not identical. Organizations should review both sets of obligations with legal counsel rather than assuming GDPR compliance covers AI Act requirements.
Consumer vs. Enterprise Tiers
This cannot be overstated: the free version of an AI tool and the enterprise version are often entirely different products from a compliance standpoint. ChatGPT Free may use inputs for training; ChatGPT Enterprise does not. Claude's consumer product does not offer EU data residency; the API deployed through AWS Bedrock does. Always verify which tier your organization is paying for and which tier employees are actually using.
The bottom line: compliance is not a property of the AI tool. It is a property of the tier, the configuration, and the agreements in place. Check the DPA. Check the BAA. Check the tier. If any piece is missing, the tool is not compliant for regulated data, no matter what the marketing page says.
Stay ahead of what your company's policy might miss.
Subscribe free. One email per week. Under 60 seconds.