AI Guides
Plain-English explanations for busy professionals. No PhD required.
Showing all 31 guides
- The AI Briefing for Non-Technical Leaders (8 min read): A plain-English field guide covering AI risks, a 30-day action plan, and a ready-to-use decision matrix.
- The Corporate AI Survival Guide (12 min read): 10 workflows for people with actual jobs. Meetings, inbox, content, vendor demos. Copy-paste prompts included.
- How to Actually Prompt AI (5 min read)
- When NOT to Use AI (4 min read)
- The 5-Minute Guide to NotebookLM (5 min read)
- WTF is an AI Agent? (4 min read)
- AI Jargon Decoder (5 min read)
- What Your Competitors Are Actually Doing with AI (5 min read)
- Claude vs. ChatGPT vs. Gemini: The Honest Comparison (10 min read)
- The AI Tools Your Coworkers Are Using Behind Your Back (4 min read)
- Your Company Just Mandated AI. Now What? (5 min read)
- How to Evaluate AI Vendors (5 min read)
- The AI Meeting Survival Guide (5 min read)
- What Not to Put Into ChatGPT at Work (5 min read)
- Microsoft Copilot vs. ChatGPT Enterprise (7 min read)
- What Is RAG? Why Your Company's AI Keeps Hallucinating (5 min read)
- How to Use AI to Prep for Any Meeting in 5 Minutes (11 min read)
- Best Free AI Tools in 2026: Actually Free, Not Freemium Traps (13 min read)
- AI for Performance Reviews: What to Use and What to Avoid (11 min read)
- Perplexity vs. ChatGPT for Work: When to Use Which (11 min read)
- What Is an MCP? The New AI Standard Explained (10 min read)
- What AI Can and Can't Do in 2026: A Realistic Checklist (11 min read)
- AI Tools by Job Role: A Practical Guide (12 min read)
- Shadow AI: What It Is and How to Handle It (12 min read)
- How to Use AI to Summarize a 50-Page Report (11 min read)
- Your Company's AI Policy Explained (13 min read)
- What Is AI Governance? What Your Company Actually Needs (12 min read)
- Is Your AI Tool GDPR or HIPAA Compliant? How to Check (12 min read)
- The AI Tools Fortune 500 Companies Are Actually Deploying in 2026 (11 min read)
- Google Gemini at Work: What It Actually Does vs. What Google Claims (11 min read)
- Vibe Coding: What It Is and Why Your Team Is Talking About It (12 min read)
How to Actually Prompt AI
Most people type a vague question into ChatGPT, get a vague answer, and conclude AI is useless. The problem is almost never the AI. The problem is the instruction.
Prompting is not a technical skill. It is the skill of being specific about what you want. If you can write a clear email to a colleague, you can prompt AI effectively.
The Golden Rule
The Intern Test: If a smart but brand-new intern couldn't do the task from your instructions alone, the AI won't either. Be that specific.
The 4 Parts of a Good Prompt
Every effective prompt has four ingredients. You don't always need all four, but the more you include, the better the output.
- Role: Tell the AI who it is. "You are a senior project manager" gives better output than just asking a question into the void.
- Context: Give it the background. What's the situation? Who is the audience? What have you already tried?
- Task: Be specific about the output. "Write a summary" is vague. "Write a 3-sentence summary for my VP who has 30 seconds to read it" is useful.
- Format: Tell it how to structure the answer. Bullet points? Table? Email draft? One paragraph? If you don't specify, you get whatever the AI feels like.
Example: Bad vs. Good
Weak Prompt
"Summarize this article about AI agents."
Strong Prompt
"You are a technology advisor for non-technical executives. Summarize this article in 3 bullet points: what it is, why it matters for our industry, and whether we should act now or wait. Our industry is insurance."
5 Prompts You Can Steal Right Now
- The Jargon Stripper: "Rewrite this so a smart person outside our industry would understand it. No jargon, no acronyms: [paste text]"
- The Devil's Advocate: "Here's my plan: [paste plan]. Argue against it. What's the strongest case that this is a mistake?"
- The Email Fixer: "Rewrite this email to be half as long and twice as clear. Keep it professional but warm: [paste draft]"
- The Meeting Prep: "Here are the agenda and pre-read: [paste]. Give me a one-page brief: key points, open questions, and three smart questions to ask."
- The ELI5 (Explain Like I'm 5): "Explain [topic] in plain English, assuming I know nothing about it. Use one everyday analogy."
Common Mistakes
- Being too polite. "Could you maybe possibly help me with..." Just state what you need.
- Accepting the first output. The first answer is a rough draft. Say "Make this shorter" or "That's too generic, give me specifics for [industry]."
- Not giving examples. If you want a specific style or format, paste an example of what good looks like.
- Pasting nothing. AI works best when you give it material to work with. Paste the email, the document, the data. Don't make it guess.
The bottom line: AI is only as useful as your instructions. Spend 30 extra seconds on your prompt and the output improves dramatically.
When NOT to Use AI
Every AI newsletter tells you when to use AI. Almost none of them tell you when to stop. That's a problem, because using AI in the wrong situation doesn't just waste time. It creates risk.
Here's the honest guide to when AI makes things worse.
Never Use AI For These
- Final numbers. AI hallucinates statistics with total confidence. If a number matters (revenue, headcount, contract terms), verify it yourself. Every time.
- Legal or compliance decisions. AI doesn't know your jurisdiction, your contracts, or your regulatory environment. Use it to understand concepts, never to make the call.
- Sensitive HR situations. Writing a performance review? Drafting a termination plan? AI doesn't understand the human dynamics. Use it for structure, not judgment.
- Anything you can't verify. If you don't have the expertise to fact-check the output, you're trusting a confident guesser with your reputation.
The Reputation Test: If this output turns out to be wrong, whose name is on it? If the answer is yours, verify everything.
Use AI With Caution For These
- Research. AI is a great starting point, not a finishing point. It will give you a plausible-sounding overview that may contain errors. Use it to know what to Google, not to replace Google.
- Customer-facing content. AI-generated text is increasingly detectable and often reads as generic. Use it for drafts and structure, then rewrite in your own voice.
- Strategy and creative work. AI is a great brainstorming partner and a terrible strategist. It gives you the average of everything it's seen. Strategy requires judgment, context, and conviction that AI doesn't have.
- Confidential information. Before pasting internal data into any AI tool, check your company's policy. Many enterprise AI tools have data protections. Free consumer tools may not.
Where AI Actually Shines
The sweet spot is boring, repetitive, low-stakes work where a mistake is easy to spot:
- Summarizing long documents (then verifying key points)
- Drafting emails and messages (then editing for tone)
- Reformatting data (then checking the output)
- Brainstorming options (then applying judgment)
- Explaining technical concepts in plain language
- Creating first drafts of templates, SOPs, and checklists
The Decision Framework
Before using AI for any task, ask three questions:
- Can I verify the output? If no, don't use AI.
- What happens if it's wrong? If the consequences are serious, use AI for the draft only and verify everything.
- Am I saving time or creating work? If fixing the AI's output takes longer than doing it yourself, skip AI entirely.
The bottom line: AI is a power tool, not a brain replacement. It's great at first drafts and terrible at final answers. Use it for speed, not for judgment.
The 5-Minute Guide to NotebookLM
NotebookLM is Google's research tool, and it solves one problem better than anything else on the market: making sense of long documents without reading them cover to cover.
If you've ever received a 40-page PDF before a meeting and thought "there's no way I'm reading all of this," NotebookLM is for you.
What It Actually Does
You upload documents (PDFs, Google Docs, web links, even YouTube videos). NotebookLM reads them and lets you ask questions about the content. It only answers from your documents, which means fewer hallucinations than regular AI chat tools.
The standout feature is Audio Overviews. It generates a podcast-style conversation about your documents. Two AI voices discuss the key points in a way that's surprisingly easy to follow. Listen on your commute instead of reading at your desk.
How to Start (Literally 5 Minutes)
- Go to notebooklm.google.com (free with a Google account)
- Click "New Notebook"
- Upload 1-3 documents (PDFs, Docs, or paste URLs)
- Ask a question: "What are the 3 most important points in this document?"
- Optional: Click "Audio Overview" to generate the podcast version
3 Use Cases That Actually Save Time
1. Meeting prep in 5 minutes. Upload the pre-read materials. Ask: "Summarize the key decisions that need to be made" and "What are the risks mentioned in this document?" Walk in prepared without reading 40 pages.
2. Vendor evaluation. Upload 3 competing vendor proposals. Ask: "Compare the pricing models across these documents" or "Which vendor has the strongest security guarantees?" Get a side-by-side comparison in seconds.
3. Catch up on a project you missed. Upload the last 5 meeting notes or status updates. Ask: "What changed since [date]?" and "What are the open action items?" Get current without asking your colleagues to repeat themselves.
What It's Not Good At
- Real-time information. It only knows what you upload. It won't search the web.
- Spreadsheet analysis. It can read basic tables but struggles with complex data. Use a dedicated tool for that.
- Creative work. It summarizes and analyzes. It doesn't generate new ideas well.
Pro tip: The more specific your question, the better the answer. "Tell me about this document" gives you a generic summary. "What does this document say about the Q3 budget shortfall?" gives you exactly what you need.
Cost
Free. Completely free with a Google account. There are usage limits, but for typical professional use (a few notebooks, a few audio overviews per day), you won't hit them.
The bottom line: NotebookLM is the fastest way to go from "I haven't read this" to "I have three smart questions to ask about it." Free, no setup, works in 5 minutes.
WTF is an AI Agent?
You've probably heard the term "AI agent" thrown around a lot lately. Meta just paid $2 billion for an AI agent startup. Every AI company is launching "agents." But what actually is an AI agent, and should you care?
The Simple Explanation
An AI agent is software that can take actions on your behalf, not just answer questions. Think of the difference between:
- ChatGPT: "Here's how you could book a flight to Tokyo"
- AI Agent: Actually books the flight to Tokyo
Regular AI assistants give you information. Agents do things with that information.
What Agents Can Do Today
The honest answer: less than the hype suggests. Current AI agents are good at simple, well-defined tasks (scheduling, data entry), tasks with clear success criteria, and workflows where mistakes are easy to catch.
They struggle with complex multi-step workflows, tasks requiring judgment calls, and anything where context changes frequently.
The Stranger Test: If a competent stranger couldn't complete the task with written instructions alone, an AI agent probably can't either.
Should You Care Right Now?
For most professionals: not yet. The technology is real, but it's early. Most "AI agents" in the wild are either demos or require significant hand-holding. Focus on getting good at prompting regular AI assistants. That skill will transfer when agents mature.
The Bottom Line
AI agents are AI that can take actions, not just give advice. The concept is powerful, but the current implementations are overhyped. Safe to ignore the breathless announcements for now.
AI Jargon Decoder
AI conversations are full of acronyms and technical terms that sound intimidating but aren't that complicated. Here's what they actually mean.
LLM (Large Language Model)
The technology behind ChatGPT, Claude, Gemini, etc. Think of it as a very sophisticated autocomplete that's read most of the internet. It predicts the next word in a sequence, which turns out to be surprisingly useful.
MCP (Model Context Protocol)
A standard way for AI tools to connect to your existing software. Think of it as USB-C for AI. Before MCP, every AI tool needed its own custom integration. Now there's one standard plug.
RAG (Retrieval-Augmented Generation)
When an AI looks up specific documents before answering your question, instead of relying only on its training. It's like the difference between answering from memory vs. checking your notes first.
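For the technically curious, the "check your notes first" loop can be sketched in a few lines. This is a toy illustration only: it uses simple word overlap where real systems use embedding search, and the final LLM call is replaced by printing the prompt that would be sent. The sample notes are made up for the example.

```python
# Toy RAG loop: retrieve the most relevant note, then answer from it.
# Real systems use vector search plus an LLM call; this shows the shape only.
notes = [
    "Employees accrue 1.5 vacation days per month.",
    "Remote work requires manager approval.",
]

def retrieve(question: str) -> str:
    # Pick the note sharing the most words with the question (stand-in
    # for a real similarity search over a document index).
    q_words = set(question.lower().split())
    return max(notes, key=lambda n: len(q_words & set(n.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)
    # In a real system this assembled prompt goes to an LLM.
    return f"Answer using only this context: {context}\nQuestion: {question}"

print(answer("How many vacation days do employees get per month?"))
```

The key point the sketch captures: the model is told to answer from the retrieved text, not from memory, which is why RAG reduces (but does not eliminate) hallucinations.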
Hallucination
When AI confidently states something that's completely false. Not a bug that will be fixed. It's a fundamental feature of how these systems work. Always verify important claims.
Fine-tuning
Training an AI model on your specific data so it performs better for your use case. Like teaching a general contractor the specific building codes for your city.
Token
How AI measures text. Roughly 1 token equals 0.75 words. When people talk about "context windows" or "token limits," they mean how much text the AI can process at once. Bigger is better.
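If you want a back-of-the-envelope number, the 0.75 words-per-token rule above turns into a one-line estimator. This is a rough sketch, not a real tokenizer; actual token counts vary by model.

```python
# Rough token estimate from the ~0.75 words-per-token rule of thumb.
# Real tokenizers (which split on subwords, not words) will differ.
def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return round(words / 0.75)

print(estimate_tokens("a " * 1500))  # a 1,500-word document -> prints 2000
```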
Context Window
The AI's working memory. How much text it can "see" at once during a conversation. Think of it as the size of the desk. A bigger desk means more documents open at the same time.
Prompt Engineering
The art of writing instructions that get AI to do what you want. Sounds fancy, but it's mostly just being clear and specific. See our prompting guide for the practical version.
Vibe Coding
Using AI to write software by describing what you want in plain English instead of writing code. Real, but early. Non-developers can now build simple prototypes. Complex software still needs real engineers.
Agentic AI
AI that can take multi-step actions autonomously instead of just answering one question at a time. The buzzword of 2026. Real technology, overhyped timelines. See our AI agents guide for the full breakdown.
Rule of thumb: If someone uses a term you don't know, ask "What does that mean for my work this week?" If they can't answer in one sentence, it probably doesn't matter yet.
What Your Competitors Are Actually Doing with AI
Every LinkedIn post makes it sound like your competitors have fully automated their operations with AI while you're still figuring out ChatGPT. The reality is much less dramatic, but also much more interesting.
Here's what's actually happening behind the hype, and what's worth paying attention to.
The Real Adoption Numbers
The honest stats: About 75% of knowledge workers have tried AI tools. Roughly 30% use them weekly. Less than 10% have integrated AI into a daily workflow that actually saves measurable time. The gap between "tried it once" and "changed how we work" is enormous.
Most companies are in the experimentation phase. They've bought licenses, sent around some "AI best practices" emails, and maybe formed a committee. Very few have shipped anything that moves the needle on revenue or efficiency.
This is actually good news if you feel behind. The window to catch up is wide open.
What's Actually Working
Across industries, four use cases keep showing up as genuine time-savers:
- Content drafting. Marketing teams using AI for first drafts of blog posts, social media, and email campaigns. Not publishing AI output directly, but cutting first-draft time by 50-70%.
- Data summarization. Analysts using AI to summarize reports, extract key findings from long documents, and turn raw data into narrative insights for stakeholders.
- Meeting prep and follow-up. Managers using AI to generate agendas from past notes, summarize meeting transcripts, and draft follow-up emails with action items.
- Customer support triage. Support teams using AI to categorize incoming tickets, draft initial responses, and surface relevant knowledge base articles.
Notice a pattern? These are all about making existing work faster, not replacing entire jobs or inventing new processes.
What's Still Theater
Not everything labeled "AI initiative" is real progress. Watch out for:
- AI strategy decks that never ship. Impressive presentations about "AI transformation" that result in zero workflow changes. If there's no pilot with real users within 60 days, it's theater.
- Chatbot demos that impress the board. A custom chatbot trained on company data sounds amazing in a demo. In practice, most of these have low adoption because employees don't trust them or find them slower than just asking a colleague.
- "We're building an AI team." Hiring data scientists without clear business problems to solve. The best AI teams start with a specific pain point, not a headcount goal.
- Vendor-driven pilots. When an AI vendor runs your pilot for you and reports the results, the numbers will always look good. Real adoption means your own people use it without hand-holding.
The 3 Things Worth Stealing
These are the specific workflows that companies ahead of the curve are quietly using:
1. The pre-meeting brief. Before any important meeting, feed the relevant documents to an AI tool and ask for a one-page brief: key points, open questions, and potential objections. Takes 5 minutes, makes you the most prepared person in the room.
2. The weekly status automator. Instead of spending 30 minutes writing a status update, dictate or paste your notes and have AI format them into a structured update. Consistent format, half the time.
3. The competitive intel scan. Feed competitor announcements, earnings calls, or press releases into AI and ask: "What are they investing in? What are they cutting? What should we be worried about?" It won't replace your strategy team, but it gives you a 10-minute head start.
Where to Start If You're Behind
If you haven't done much with AI yet, don't panic. Here's the practical playbook:
- Pick one task you do every week that's boring. Status updates, meeting summaries, email drafts. Start there.
- Spend one hour learning to prompt well. Read our prompting guide. That one hour will save you 10+ hours over the next month.
- Use a free tool first. Don't wait for your company to buy an enterprise license. ChatGPT, Claude, and Gemini all have free tiers. Start building the habit.
- Track your time savings. For the first two weeks, note how much time AI actually saves you. Real data beats anecdotes when you eventually pitch this to your team.
The bottom line: Your competitors are not as far ahead as LinkedIn makes it seem. But they are experimenting. The best time to start was six months ago. The second best time is this week.
Claude vs. ChatGPT vs. Gemini: The Honest Comparison
Everyone wants a simple answer: "Which AI is the best?" We're not going to give you one, because the honest answer depends on what you're using it for. But we will tell you what each one is actually good at.
This is our opinionated take based on daily use of all three. Your experience may vary.
Stop Asking "Which One Is Best"
All three major AI tools (ChatGPT by OpenAI, Claude by Anthropic, Gemini by Google) are capable enough for most professional tasks. The differences are real but situational. Picking one based on a benchmark score is like picking a car based on top speed. What matters is how it handles your commute.
The real question: Don't ask "which is best." Ask "which is best for the specific thing I do 10 times a week." That's where the differences matter.
Writing & Editing
Quick verdict: Claude tends to produce the most natural-sounding writing. Less robotic, fewer cliches, better at matching a specified tone.
Claude consistently produces writing that sounds like a human wrote it. It's particularly strong at matching a tone you describe ("professional but warm," "direct but not blunt") and at editing without flattening your voice. ChatGPT is solid but tends toward a recognizable "AI voice" — slightly formal, slightly generic. Gemini has improved significantly but still occasionally produces output that feels like it's trying too hard to be helpful.
For email drafts, blog posts, and content editing, Claude is our first choice. For high-volume content where consistency matters more than voice, ChatGPT works well.
Research & Analysis
Quick verdict: Gemini has the edge for research that benefits from fresh web data. Claude excels at analyzing documents you provide.
Gemini's integration with Google Search gives it a real advantage for research tasks that need current information. It can pull in recent data, verify claims, and synthesize across web sources in a way the others can't match natively. Claude is stronger when you need deep analysis of specific documents — upload a 50-page report and Claude will give you the most thorough, nuanced breakdown. ChatGPT with browsing enabled lands in the middle.
Code & Technical Work
Quick verdict: Claude and ChatGPT are both strong. Claude tends to write cleaner code. ChatGPT has more integrations and plugins.
For writing code, Claude and ChatGPT are neck-and-neck, with Claude often producing more readable, well-structured code and ChatGPT offering broader ecosystem support through plugins and tools. Gemini is capable but a step behind for complex coding tasks. For non-developers using AI to understand or debug code, all three are adequate.
Creative & Brainstorming
Quick verdict: ChatGPT is the most willing to get weird and creative. Claude is the most thoughtful brainstorming partner.
ChatGPT is the most enthusiastic brainstorming partner. It generates a lot of ideas quickly and is willing to go in unexpected directions. Claude takes a more considered approach — fewer ideas, but each one tends to be more developed and practical. Gemini is solid for brainstorming but doesn't have a distinctive strength here.
The Free Tier Reality
What you actually get for $0:
Worth It for Free
- ChatGPT free: GPT-4o with limits, good for casual use
- Claude free: Solid daily limits, great for writing tasks
- Gemini free: Generous limits, best for research with web access
Pay-Wall Pain Points
- Free tiers hit rate limits during heavy use
- Advanced features (file upload, longer context) often require paid plans
- Free models may be older or less capable versions
For occasional use (a few queries a day), all three free tiers are genuinely useful. If you're using AI as a daily work tool, expect to pay $20/month for one of them. Our suggestion: try all three free tiers for a week, then pay for the one you reached for most often.
Our Actual Setup
What the AI Minute team uses day-to-day:
- Writing and editing: Claude (paid). It's our primary drafting and editing tool.
- Research and fact-checking: Gemini (paid). The Google Search integration is genuinely useful for verifying claims and finding recent data.
- Quick questions and brainstorming: ChatGPT (paid). It's fast, reliable, and the plugin ecosystem adds value.
- Document analysis: NotebookLM (free). For making sense of long documents, nothing beats it. See our NotebookLM guide.
Yes, we pay for multiple tools. The total cost ($40-60/month) saves us hours every week. But if you're picking just one, start with the one that matches your most common task.
The bottom line: There's no single "best" AI. Claude writes best, Gemini researches best, ChatGPT does a bit of everything well. Try all three free, then pay for the one you use most.
The AI Tools Your Coworkers Are Using Behind Your Back
There's a quiet productivity revolution happening in your office. Some of your coworkers are getting things done in half the time, and most of them aren't talking about it. Here's why, and what they're actually using.
Why Nobody Talks About It
Two fears keep people silent about their AI use at work:
- Fear of looking lazy. "If I tell my boss I used AI to write that report, will they think I'm not actually working?" This is the big one. People worry that AI-assisted work is seen as less valuable, even when the output is better.
- Fear of being replaced. "If I show how much of my job AI can do, am I making the case to eliminate my role?" This fear is mostly unfounded — the people who use AI best become more valuable, not less — but it's real enough to keep people quiet.
The result: a growing gap between people who use AI tools openly, people who use them secretly, and people who don't use them at all. The secret users are often the most productive, and the non-users don't understand why.
The Meeting Cheaters
Your coworker who always shows up to meetings with perfect talking points? They might be spending 5 minutes with AI instead of 30 minutes reading the pre-read.
Tools they're using: Otter.ai or Fireflies for meeting transcription and summaries. ChatGPT or Claude for pre-meeting prep. NotebookLM for digesting long documents before a meeting.
The workflow: Upload the meeting agenda and any pre-read materials to an AI tool. Ask for a one-page summary, the three most important questions to ask, and any red flags in the data. Walk in looking like you spent an hour preparing.
The Email Ghosts
That colleague who responds to every email within 20 minutes with a perfectly crafted reply? They're probably not typing every word.
Tools they're using: Claude or ChatGPT for drafting replies. Superhuman or Shortwave for AI-assisted email triage. Gmail's built-in AI for quick responses.
The workflow: Copy the email thread, paste it into AI, and ask for a draft response that's professional, addresses every point, and keeps it under 5 sentences. Review, tweak two sentences, send. What used to take 10 minutes takes 2.
The Slide Deck Speedrunners
Your colleague who turns around presentations overnight? AI is probably doing the heavy lifting on structure and content.
Tools they're using: Gamma or Tome for AI-generated slide decks. ChatGPT or Claude for outline and content generation. Midjourney or DALL-E for custom visuals.
The workflow: Start by asking AI to create an outline for the presentation. Then generate content for each slide. Use a tool like Gamma to turn it into a visual deck. Spend your time refining the story instead of fighting with PowerPoint formatting.
The "I Did That In 5 Minutes" Club
Some tasks that used to take an hour now take minutes. Here's what people are quietly speeding through:
- Status updates and reports. Paste your raw notes into AI, get a formatted status update. 30 minutes becomes 5.
- Research summaries. Instead of reading 10 articles, paste them into AI and ask for a synthesis. An afternoon of research becomes 20 minutes.
- Data analysis write-ups. Export the data, paste it into AI with a prompt like "What are the three most important trends here?" Get the narrative in minutes.
- First drafts of anything. SOPs, project plans, job descriptions, customer emails, internal memos. AI gets you 70% of the way there. You spend your time on the last 30% that requires judgment.
How to Start Without Getting Caught
If you want to start using AI but aren't ready to announce it to your team:
- Start with tasks nobody sees. Your own meeting prep, your own email drafts, your own status updates. Build confidence before using it on shared work.
- Always edit the output. Never send AI-generated text without making it your own. Add your voice, your context, your judgment. The AI gives you speed; you add the quality.
- Check your company's AI policy. Many companies now have official guidelines on AI use. Follow them. If there's no policy, use common sense: don't paste confidential data into free tools.
- Keep a "time saved" log. For the first two weeks, note every time AI saves you 10+ minutes. This becomes your evidence when you're ready to advocate for broader adoption.
- Graduate to openness. The goal isn't to hide AI use forever. It's to build enough skill and evidence that you can confidently say "I use AI tools, and here's how they make my work better." That's a career move, not a confession.
The bottom line: Your most productive coworkers are almost certainly using AI tools. The question isn't whether to start — it's how quickly you can close the gap. Pick one task, try it this week.
Your Company Just Mandated AI. Now What?
The email came down from leadership: "We are an AI-first company now." Maybe it was a town hall. Maybe a Slack message with too many exclamation marks. Either way, the mandate is here and you're expected to start using AI tools. Yesterday.
Here's the survival guide for what to actually do — and what to ignore.
First: Don't Panic
Most AI mandates are long on enthusiasm and short on specifics. That's actually good news for you. It means you have time to figure this out before anyone notices.
Reality check: When a company says "use AI," what they usually mean is "find ways to be more efficient." They don't mean replace your entire workflow overnight. Start small.
What to Actually Do This Week
- Pick one repetitive task. Don't try to reinvent your entire job. Find one thing you do every week that's boring and low-stakes. Status updates, meeting summaries, email drafts. Start there.
- Get access to the approved tools. Check with IT or your manager about which AI tools are company-approved. Using unauthorized tools with company data is a fast way to make this mandate about you specifically.
- Spend 30 minutes experimenting. Not an hour. Not a whole afternoon. Thirty minutes. Paste a real document into the approved tool. Ask it to summarize, reformat, or draft a response. See what happens.
- Tell nobody. Not yet. Build some confidence first. Once you have a few wins under your belt, then share what's working.
What's Worth Your Time
- Summarizing long documents and email threads. This is the easiest win. Paste it in, get the key points out. Saves 15-30 minutes every time.
- Drafting first versions. Emails, reports, project briefs, meeting agendas. Let AI write the first 70%, then spend your time making it actually sound like you.
- Prepping for meetings. Upload the agenda and materials, ask for a summary and the questions you should be asking. Walk in prepared in 5 minutes instead of 30.
- Reformatting and cleaning up. Turn messy notes into clean bullet points. Convert a wall of text into a table. Make raw data readable.
What to Ignore (For Now)
- "AI agents" and automation workflows. These are real, but they're not where you start. Get comfortable with basic prompting before you try to automate complex processes.
- Building custom GPTs or bots. Unless that's literally your job, this is a distraction. Use the tools that already exist.
- The person on LinkedIn who automated their entire job. They didn't. Or their job was already automatable. Your job has nuance, judgment, and relationships that AI can't replace. Focus on the parts it can help with.
- Vendor pitches disguised as "AI strategy." If someone's trying to sell you a tool in the same breath as explaining why you need it, be skeptical.
Do This
- Start with one task this week
- Use company-approved tools only
- Always review AI output before sending
- Keep a log of time saved
- Share wins with your team once you have them
Not This
- Try to automate everything at once
- Paste confidential data into free tools
- Send AI output without editing it
- Announce yourself as an "AI expert"
- Ignore the mandate and hope it goes away
How to Talk About It
When your manager asks how you're using AI (and they will), have an answer ready. Something like: "I've been using Claude to draft my weekly status updates. It saves me about 20 minutes each time, and I always review before sending."
That's it. Concrete, specific, measurable. Don't oversell it. Don't say "AI is transforming my workflow." Say "I used Claude to draft my status update and it saved me 20 minutes."
The 30-Day Plan
- Week 1: Pick one task. Try it 3 times. Note what works and what doesn't.
- Week 2: Add a second task. Refine your prompts on the first one. Start saving your best prompts somewhere.
- Week 3: Share one win with a colleague. Ask what they've tried. Compare notes.
- Week 4: Write a quick summary of what's working for you and send it to your manager. This makes you the person who "gets it" — without being the person who won't stop talking about AI.
The bottom line: An AI mandate is not a crisis. It's permission to experiment on company time. Start small, pick boring tasks, and build from there. The people who do this well become indispensable. The people who ignore it fall behind.
How to Evaluate AI Vendors
Every software company is now an "AI company." Your inbox is full of demos, your LinkedIn is full of pitches, and every vendor deck promises to "revolutionize" something. Most of it is noise. Here's how to separate the real from the hype.
Red Flags That Should Kill the Deal
- "Our proprietary AI model..." Unless they're OpenAI, Anthropic, Google, or Meta, they almost certainly don't have their own model. They're wrapping someone else's API in a nice interface and charging you enterprise prices. That's not always bad — but they should be honest about it.
- No clear answer on where your data goes. Ask: "Does our data train your model? Where is it stored? Can we delete it?" If they dodge, deflect, or say "it's in our terms of service," walk away.
- ROI claims with no methodology. "Our customers see 10x productivity gains." How? Measured how? Over what time period? Compared to what? If they can't explain the math, the number is made up.
- They can't explain what the AI actually does. If the sales team can only speak in buzzwords — "machine learning," "neural networks," "intelligent automation" — but can't explain the specific mechanism in plain English, they either don't understand their own product or it doesn't do much.
- No option for a pilot or trial. Any vendor confident in their product will let you test it with real workflows before signing an annual contract. No pilot = no confidence.
The Buzzword Test: Replace every AI buzzword in the pitch with "magic." If the sentence still makes the same amount of sense, the vendor is selling hype, not technology.
Questions to Ask Every Vendor
- "What model do you use, and what happens when it changes?" Models get updated. Performance can shift. You need to know if they're locked to a specific version or if you'll wake up one day to different behavior.
- "What does this look like when it fails?" Every AI tool fails sometimes. Good vendors have graceful failure modes, error handling, and human-in-the-loop fallbacks. Bad vendors pretend failure doesn't happen.
- "Can you show me a customer who looks like us?" Not a Fortune 500 case study when you're a 200-person company. A customer with your industry, your scale, your use case. If they can't produce one, you're the guinea pig.
- "What does implementation actually require?" Time, people, integrations, data prep, training. Get the full picture. "It's plug and play" is never true for enterprise software.
- "What happens to our data if we cancel?" Can you export it? Is it deleted? How long do they retain it? This matters more than most people realize.
What "Enterprise-Ready" Actually Means
Vendors love to call themselves enterprise-ready. Here's what that should actually mean:
- SSO and role-based access control. If every user gets the same permissions and there's no SSO integration, it's not enterprise-ready. It's a consumer app with a bigger price tag.
- SOC 2 Type II certification (at minimum). This means their security has been independently audited. SOC 2 Type I checks that the right controls exist at a single point in time. Type II verifies they actually operated over a period of months.
- Data residency options. Can you choose where your data is stored? For regulated industries, this isn't optional.
- Audit logs. Who used the tool, when, and what data was processed. If you can't audit it, you can't govern it.
- An SLA with teeth. Not just "99.9% uptime" in the marketing copy. An actual Service Level Agreement with penalties if they miss it and a clear escalation path.
The Evaluation Checklist
Must Have
- Clear data handling policy
- SOC 2 Type II or equivalent
- Free pilot or trial period
- Reference customer in your industry
- Transparent pricing (no "contact us")
Walk Away If
- They won't say what model they use
- No trial without an annual contract
- ROI claims with no methodology
- No SSO or access controls
- "Trust us" is the data security answer
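If you're comparing several vendors at once, the checklist above can double as a simple pass/fail scorecard. Here's a hypothetical sketch in Python; the criteria names mirror the "Must Have" list, and the vendor names and inputs are made up for illustration:

```python
# Hypothetical vendor scorecard based on the "Must Have" checklist above.
# Criteria names and example vendors are illustrative, not prescriptive.

MUST_HAVE = [
    "clear data handling policy",
    "SOC 2 Type II or equivalent",
    "free pilot or trial period",
    "reference customer in your industry",
    "transparent pricing",
]

def evaluate(vendor_name, checks):
    """Return 'walk away' if any must-have is missing, else 'proceed to pilot'.

    `checks` maps each criterion to True (verified) or False/absent (missing).
    """
    missing = [c for c in MUST_HAVE if not checks.get(c, False)]
    if missing:
        return f"{vendor_name}: walk away (missing: {', '.join(missing)})"
    return f"{vendor_name}: proceed to pilot"

# Example: one vendor clears every bar, one only has a data policy.
print(evaluate("Acme AI", {c: True for c in MUST_HAVE}))
print(evaluate("HypeCo", {"clear data handling policy": True}))
```

The point of the pass/fail design (rather than a weighted score) is that these are gates, not trade-offs: a vendor with great pricing but no trial period should still be a "walk away."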
How to Run a Pilot That Actually Tells You Something
- Define success before you start. "We'll know this works if [specific metric] improves by [amount] over [timeframe]." If you can't define success, you can't evaluate the tool.
- Use real data and real workflows. Demo data gives you demo results. Test with the messy, complicated reality of your actual work.
- Include your most skeptical team member. If the tool wins over the skeptic, it's probably good. If it only impresses the person who was already sold, you've learned nothing.
- Document the failures. What didn't work? What was frustrating? What took more effort than expected? The failures tell you more than the successes.
The bottom line: Most AI vendors are selling the future. You need to buy the present. Demand transparency on data, models, and evidence. Run a real pilot. And remember: the best AI vendor is the one whose tool your team actually uses after the first month.
The AI Meeting Survival Guide
You spend too much time in meetings. Everyone does. By one oft-cited estimate, the average professional spends 31 hours per month in unproductive meetings. AI won't eliminate meetings (sorry), but it can make the time before, during, and after them dramatically more useful.
Before the Meeting: Prep in 5 Minutes
The biggest meeting sin is walking in unprepared. AI fixes this. Here's the workflow:
- Dump the materials into AI. Agenda, pre-reads, previous meeting notes, relevant emails. Paste or upload everything.
- Ask for a briefing. Don't read 20 pages of documents. Get the summary.
- Get your talking points. Ask AI what questions you should raise and what positions you should take, given your role.
This takes about 5 minutes instead of 30. And you'll often catch things in the materials that you would have missed skimming.
During the Meeting: Transcription Tools
Stop trying to take notes and pay attention at the same time. Use a transcription tool and focus on the conversation.
- Otter.ai — The most popular. Records, transcribes, and summarizes meetings. Integrates with Zoom, Teams, and Google Meet. Free tier is limited but usable. Pro is ~$17/month.
- Fireflies.ai — Similar to Otter but with stronger search and CRM integrations. Good for sales teams. Automatically joins your calendar meetings.
- Microsoft Copilot (in Teams) — If your company is on Microsoft 365, this is already built in. Transcribes, summarizes, and can answer questions about what was discussed.
- Google Gemini (in Meet) — Same idea for Google Workspace users. Take notes, summarize, and capture action items automatically.
- Granola — A lightweight Mac app that enhances your own notes with the meeting transcript. Less invasive than a bot joining your call, and a good fit if you'd rather not have a visible recorder in the meeting — though consent rules still apply.
Important: Always check your company's recording policy and get consent from participants before recording or transcribing meetings. Many states and countries have two-party consent laws. When in doubt, ask first.
After the Meeting: Follow-Up Automation
The real value isn't the transcript — it's what you do with it. Here's the post-meeting workflow that saves the most time:
Step 1: Get the Summary
If your transcription tool generates a summary, start there. If not, paste the transcript into AI with a prompt like: "Summarize this meeting. List the key decisions made, the open questions, and every action item with its owner and deadline."
Step 2: Draft the Follow-Up Email
Nobody wants to write the follow-up email. Let AI do it: "Based on this transcript, draft a short follow-up email. Thank the attendees, recap the decisions, and list each action item with its owner and deadline."
Step 3: Update Your Task List
Extract your personal action items and put them where they belong: ask something like "From this transcript, list only the action items assigned to me, with deadlines," then copy the results into your task manager.
The Complete Meeting Workflow
AI-Powered Workflow
- 5 min prep with AI briefing
- Auto-transcription during meeting
- AI-generated summary in 2 min
- Follow-up email drafted in 1 min
- Action items extracted automatically
The Old Way
- 30 min reading pre-materials
- Frantic hand-written notes
- 20 min writing up notes after
- 15 min composing follow-up email
- Manually adding tasks to your list
Pro Tips
- Create a meeting prompt template. Save your best pre-meeting and post-meeting prompts. Reuse them every time. Consistency beats creativity here.
- Record recurring meetings. Over time, you can ask AI to compare this week's meeting to last month's. "What changed? What's still unresolved? What keeps coming up?"
- Use transcripts for accountability. When someone says "I never agreed to that," you have the receipt. Transcripts are gentle but effective CYA.
- Share summaries, not transcripts. Nobody reads a 40-page transcript. The summary is the product. The transcript is the backup.
The bottom line: AI won't save you from unnecessary meetings. But it can turn a 1-hour meeting into 10 minutes of actual work: 5 minutes to prep, 2 minutes for the summary, 3 minutes for the follow-up. That's hours back in your week.