
Is This AI Tool Approved? A Decision Tree for Employees

Not sure if that AI tool is safe to use at work? Walk through this decision tree. 60 seconds, clear answer.

Published March 28, 2026

The Decision Tree

Before you sign up for a new AI tool, paste work content into one, or connect one to your systems, run through these five questions. Start at the top. Follow the path.

Question 1
Does your company have an AI acceptable use policy?
Check your company intranet, employee handbook, or ask HR. It may be called an "AI policy," "generative AI guidelines," "acceptable use policy," or similar.
YES → Go to Question 2
NO or DON'T KNOW → See below

If NO policy exists: Treat all AI tools as restricted by default. Use only for generic, non-sensitive tasks (writing help, public research, templates). Do not upload any company or client data. Ask your manager to clarify before expanding usage.

Question 2
Is this specific tool on the approved list?
Your AI policy should name specific approved tools (e.g., "ChatGPT Enterprise," "Microsoft Copilot," "Claude for Work"). If it lists approved tools and yours isn't on the list, it's not approved — even if it's a well-known tool.
YES, it's on the list → Go to Question 3
NO, it's not listed → See below

If NOT on the approved list: Don't use it for work. If you believe it should be added, submit a request through your IT or procurement process. Don't use it first and ask permission later.

Question 3
Are you using the company-provided account (not a personal one)?
There's a big difference between your personal ChatGPT account and the one your company pays for. Enterprise accounts have different data policies, admin controls, and legal protections. Your personal account does not.
YES, company account → Go to Question 4
NO, personal account → See below

If PERSONAL account: Switch to your company-provided account. If your company hasn't provided one but the tool is approved, ask IT for access. Do not use a personal account for work data — the data protections are weaker and your employer has no visibility or control.

Question 4
Does the data you're about to input contain anything sensitive?
Sensitive means: client names or data, employee records, financial figures, proprietary code, legal documents, trade secrets, personally identifiable information (PII), anything marked "Confidential" or "Internal Only." If you're not sure, assume yes.
YES, sensitive data → Go to Question 5
NO, generic content → See below

If NO sensitive data: You're clear. Use the approved tool on your company account for generic tasks — writing help, brainstorming, summarizing public information, creating templates. Proceed with normal judgment.

Question 5
Does your AI policy explicitly allow this type of sensitive data in this tool?
Some policies draw distinctions. For example: "You may use ChatGPT Enterprise for internal documents but NOT for client data" or "Financial data requires VP approval before AI processing." Read the specific rules for the data type you're working with.
YES, policy allows it → See below
NO or UNCLEAR → See below

If YES: Proceed. Follow any additional rules (e.g., approval required, logging, disclosure). Document what you did in case anyone asks later.

If NO or UNCLEAR: Stop. Sanitize the data first (strip names, numbers, identifiers) or ask your manager for explicit permission before proceeding.

The Cheat Sheet Version

If you don't have time for the full tree, here's the 10-second version you can tape to your monitor:

1. Is the tool approved?
2. Am I on the company account?
3. Is the data clean?

If any answer is "no" or "I don't know," stop and check before proceeding.

Common Scenarios, Quick Answers

  • "I want to use ChatGPT to draft an email to a client." Fine — as long as you're on an approved account and the email doesn't contain sensitive data you're pasting in for context. Writing "help me draft a follow-up email to a client about their Q2 deliverables" is safe. Pasting the client's entire contract in for reference is not.
  • "I want to upload a spreadsheet to get a summary." Depends on what's in the spreadsheet. Generic data or your own notes? Fine. Client financials, employee records, or anything with real names and numbers? Sanitize it first or don't upload it.
  • "My colleague shared their Copilot login so I could try it." No. Shared credentials violate almost every IT policy. Get your own access through the proper channel.
  • "I found a new AI tool that's better than what we're using." Great — submit it for review through IT or procurement. Don't start using it for work data before it's approved. Even if it's objectively better, using an unapproved tool with company data is a policy violation.
  • "I'm using AI on my personal phone during lunch." If you're doing personal tasks on your personal device and personal account, that's your business. The moment you start working on company or client material — even on your own phone — you're back in policy territory.

When "I Didn't Know" Stops Working

There's a window of grace that's closing fast. In 2024, most companies gave employees a pass for AI missteps because the rules were new and unclear. By mid-2026, that window will have closed at most organizations. Policies exist. Training has been offered. Acknowledgments have been sent.

"I didn't know we had a policy" is not a defense when your company can show they emailed it to you, offered training, and sent three reminders. Read the policy. Complete the training. Sign the acknowledgment. It takes 15 minutes and it protects you.

The decision tree isn't about slowing you down. It's about making sure speed doesn't cost you your job.

Your One Action This Week

Screenshot this decision tree and share it with your team in Slack or Teams. Not as a lecture — as a resource. Say: "Found this quick reference for figuring out what AI tools we can use at work. Thought it might be useful." That one share could prevent someone on your team from making a mistake they don't see coming.
