Your Coworker Just Got Fired for Using AI — What Actually Happened
It wasn't for using AI. It was for how they used it. Here are the patterns behind AI-related terminations — and how to stay on the right side of the line.
Published March 28, 2026
Nobody Gets Fired for "Using AI"
Let's get this out of the way: your company is not going to fire you for using ChatGPT. Not in 2026. The market has moved past that.
What gets people fired is specific behavior that happens to involve AI tools. The AI is the mechanism, not the offense. The offenses are the same ones that have always gotten people fired: leaking confidential data, lying about your work, violating company policy, and cutting corners that put the company at risk.
But AI makes all of those things easier to do accidentally — and that's the trap. Here are the five patterns we see over and over.
The 5 Patterns
Pattern 1: "I Just Needed a Quick Summary"
What happened: An employee at a consulting firm uploaded a client's unredacted financial model to ChatGPT to generate a summary for an internal presentation. The client found out during a routine vendor security audit — the firm's network logs showed the file was sent to OpenAI's servers. The client escalated. The employee was terminated for violating the client's NDA and the firm's data handling policy.
Why it's common: Summarizing documents is the single most popular AI use case at work. It's also the one most likely to involve sensitive data. The convenience is irresistible — and that's exactly why it's dangerous.
The line: Summarizing generic content is fine. Uploading files with real client data, names, or financials is a policy violation at most companies. Sanitize first.
Pattern 2: "I Wrote It Myself" (They Didn't)
What happened: A marketing manager submitted a 15-page competitive analysis that was almost entirely AI-generated. They presented it as original work in a strategy meeting. A colleague noticed the writing style was suspiciously consistent and flagged it. When asked directly, the manager doubled down. IT pulled the ChatGPT Enterprise logs — 90% of the document was generated in a single session with minimal editing.
Why it's common: The pressure to produce is real. AI makes it possible to create polished output in minutes. The temptation to skip the "I used AI for this" disclosure is strong, especially when the culture around AI is still ambiguous.
The line: Using AI to draft, outline, or accelerate your work is fine at most companies. Claiming AI-generated work as entirely your own — especially when asked directly — is a credibility issue that can end careers. The firing isn't for using AI. It's for lying.
Pattern 3: "I Didn't Know the Code Was Proprietary"
What happened: A software developer pasted a large block of proprietary source code into a consumer AI tool to debug an error. The code included API keys and internal service endpoints. The company's DLP system flagged the outbound data transfer. The developer was placed on a performance improvement plan; a second incident three weeks later led to termination.
Why it's common: Developers copy-paste code into AI tools constantly. It's become a reflex. But consumer AI accounts often retain your inputs and may use them for model training by default, while enterprise tiers typically carry contractual data protections. Most developers don't check which account type they're signed into before pasting.
The line: Using AI for coding help is standard practice. Pasting proprietary code with embedded credentials into a consumer tool is a security incident. Use your company's approved AI tools for work code, and always strip credentials before pasting anything anywhere. (A rough pre-paste check is sketched below.)
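To make that concrete, here's a minimal pre-paste check in Python. The four regex patterns and the `check_before_paste` helper are illustrative assumptions, not a catalog: real secret scanners and DLP tools cover far more formats, and a clean result here doesn't mean a snippet is safe to share.

```python
import re
import sys

# Illustrative patterns only -- a real secret scanner or DLP tool
# covers far more formats than these four.
SECRET_PATTERNS = [
    (r"AKIA[0-9A-Z]{16}", "possible AWS access key ID"),
    (r"(?i)(?:api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]",
     "hard-coded credential assignment"),
    (r"(?i)bearer\s+[A-Za-z0-9\-._~+/]{20,}", "bearer token"),
    (r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----", "private key block"),
]

def check_before_paste(text: str) -> list[str]:
    """Return a warning for every substring that looks like a secret."""
    warnings = []
    for pattern, label in SECRET_PATTERNS:
        for match in re.finditer(pattern, text):
            warnings.append(f"{label}: {match.group(0)[:40]}")
    return warnings

if __name__ == "__main__":
    problems = check_before_paste(sys.stdin.read())
    if problems:
        print("Do NOT paste this yet. Found:")
        for problem in problems:
            print("  -", problem)
    else:
        print("No obvious secrets found. Review manually anyway.")
```

Saved as, say, `check_paste.py`, you can pipe a snippet through it before it goes anywhere near a chat box (on macOS: `pbpaste | python check_paste.py`). Treat a clean pass as necessary, not sufficient.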
Pattern 4: "The AI Hallucinated — But My Name's on It"
What happened: A compliance analyst used AI to draft a regulatory filing. The AI cited three regulations that sounded plausible but didn't exist. The analyst submitted the filing without verifying the citations. The error was caught by the regulator. The company faced a formal inquiry, and the analyst was terminated for submitting false information in a regulated document.
Why it's common: AI is remarkably good at generating content that sounds authoritative. Citations, case numbers, regulation references — AI will fabricate all of them with absolute confidence. In regulated industries, one unchecked hallucination can trigger an investigation.
The line: Using AI to draft regulated or legal documents is risky but not forbidden. Submitting AI output without human verification — especially in compliance, legal, or financial contexts — is negligence. If your name is on it, you verified it. Period.
Pattern 5: "I Didn't Know We Had a Policy"
What happened: A company rolled out an AI acceptable use policy in January. An employee in a regional office never read the email, never completed the acknowledgment, and continued using a banned AI tool for client work. When the usage was flagged during a quarterly audit, the employee's defense was "I didn't know." The company had documented proof that the policy was sent, training was offered, and the acknowledgment was never completed. The employee was terminated for policy non-compliance.
Why it's common: AI policies are new. They're often sent in the same email stream as every other HR update. Many employees genuinely don't know they exist. But "I didn't know" is not a defense when the company can prove they told you.
The line: If your company sends you an AI policy or training, complete it. Read it. Acknowledge it. Ignorance of the policy doesn't protect you once the company has documentation that you were notified.
The Common Thread
Nearly every one of these patterns shares the same root cause: someone treated an AI tool like a private notepad when it's actually a third-party service.
ChatGPT is not your notes app. Claude is not your internal wiki. When you type something into an AI tool, you're sending it to a company. That company has servers, employees, policies, and — in some cases — legal obligations to retain or review your data.
The people who get in trouble are the ones who forget that distinction.
The survival rule: Before you paste, upload, or type anything into an AI tool at work, ask yourself: "Would I be comfortable if this showed up in an HR investigation?" If the answer is no, stop.
How to Protect Yourself
- Read your company's AI policy. If it exists, know what it says. If it doesn't exist, assume restrictive defaults until someone tells you otherwise.
- Use the approved tools. If your company provides ChatGPT Enterprise or Microsoft Copilot, use those, not your personal account. Enterprise tools carry stronger contractual data protections, and using them is expected rather than merely tolerated.
- Disclose when asked. If someone asks whether you used AI, tell the truth. The cover-up is always worse than the usage. Most companies are fine with AI-assisted work — they're not fine with dishonesty.
- Verify everything. If AI generated a fact, a number, a citation, or a name — check it before it leaves your desk. Your name is on the output, not ChatGPT's.
- Sanitize sensitive data. Strip names, numbers, and confidential details before pasting anything into an AI tool (a minimal redaction sketch follows this list). Five minutes of prep prevents five months of HR proceedings.
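As a concrete example of what "sanitize" can mean, here's a minimal redaction pass in Python. Everything in it is an illustrative assumption: the `sanitize` helper, the three regex patterns, and the sample sentence. A few regexes catch the obvious identifiers, a name list you maintain catches the rest, and a manual read-through is still required before anything leaves your machine.

```python
import re

# Illustrative redaction pass -- regexes for obvious identifiers plus
# a name list you maintain yourself. A pre-paste aid, not a guarantee.
PATTERNS = {
    r"[\w.+-]+@[\w-]+\.[\w.-]+": "[EMAIL]",
    r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b": "[PHONE]",
    r"\$[\d,]+(?:\.\d{2})?": "[AMOUNT]",
}

def sanitize(text: str, known_names: list[str]) -> str:
    """Replace emails, phone numbers, dollar figures, and known names."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

print(sanitize(
    "Call Jane Roe at 555-867-5309 about the $1,250,000 model (jane@client.com).",
    known_names=["Jane Roe"],
))
# -> Call [NAME] at [PHONE] about the [AMOUNT] model ([EMAIL]).
```

Purpose-built PII scrubbers catch far more than three regexes ever will; treat a pass like this as the floor, not the ceiling.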
AI didn't get your coworker fired. A lack of boundaries did. The tools are powerful. The line between smart and reckless is thinner than you think.
Your One Action This Week
Go read your company's AI acceptable use policy. The whole thing. If you can't find it, email HR and ask for a copy. If they don't have one, send your manager this guide and start the conversation. The 10 minutes it takes to read that policy could be the difference between being the cautionary tale and being the person who saw it coming.