Let’s get this out of the way first: I use AI every single day in my work. It helps me brainstorm, draft, organize, and occasionally hold back the existential dread of a blank page. It’s a powerful co-pilot.
But a pilot? Not even close. I’m still waiting for a calendar that knows the difference between an in-office and an off-site meeting and adds driving time as a buffer between them.
Despite what some breathless LinkedIn posts might imply, AI — including the large language models (LLMs) that power tools like ChatGPT — is not a replacement for creativity, strategy, or good taste.
Here’s what AI still can’t do (and probably won’t for a while):
It can’t read the room.
AI can predict patterns in language, but it doesn’t understand context, at least not the kind that makes a campaign land or a caption feel human. It doesn’t know that your audience is tired of performative “we care” posts, or that your nonprofit’s followers are in burnout mode and need a dose of hope, not another statistic. A human creative senses that. We feel the mood. We can then pivot.
It can’t make taste decisions.
AI can give you 50 headline options. It can’t tell you which one actually sounds like you. Taste — that quiet instinct that separates “good enough” from “that’s the one” — is still a human muscle. It’s built from experience, curiosity, and all the small things a machine can’t measure: your cultural references, your design eye, your understanding of what makes something feel right. It also won’t be able to run an A/B test on those headline options; that’s for you to figure out.
It can’t be held accountable.
If a campaign flops, AI doesn’t lose sleep or analyze what went wrong. You do. That feedback loop — the ability to learn, iterate, and feel the stakes of the work — belongs to humans alone. Machines don’t have skin in the game. We do. This is why you’re sick of telling it not to use the word “delve,” and why it still won’t listen.
It can’t connect dots that aren’t obvious.
AI is incredible at remixing what already exists. But true creativity, the kind that makes you pause and say “oh, that’s clever,” happens in the negative space. It happens when a strategist, a designer, or a writer connects ideas that weren’t already linked in the dataset. AI follows logic. Humans break it on purpose.
It can sound confident — without actually knowing what it’s talking about.
AI is great at tone. It can mimic authority so convincingly that it often sounds like an expert. But when you dive into deeply niche fields — like academic thought leadership, medical writing, or highly technical industries — that confidence starts to crack.
It doesn’t know the difference between a nuanced theory and a buzzword salad. It can summarize a case study, but it can’t critique it. It can translate your copy, but it can’t tell you if it’s misusing clinical terminology or misrepresenting data.
That’s where humans — subject-matter experts, editors, strategists — are non-negotiable. Because sounding smart and being accurate are not the same thing.
It can’t take creative risks.
AI doesn’t take risks; it optimizes for safety. It’s trained to give you what it thinks you want, not to push boundaries. But every strong brand has a little rebellion baked in — a little “what if we tried it this way?” that scares you before it works. No algorithm can replicate that gut-punch of intuition.
It can’t care.
It doesn’t care about your brand’s mission or your client’s story. It doesn’t care whether your campaign will help someone find hope, laughter, or a little more stability. AI can simulate empathy, but it can’t feel it.
It has our biases — just louder.
AI isn’t neutral. It’s trained on human data, written by human hands, reflecting all our messy, imperfect worldviews. That means every bias, stereotype, and blind spot we’ve ever put online is now baked into its “knowledge.”
It can accidentally reinforce the same systems we’re trying to dismantle — in how it describes people, prioritizes results, or even decides what “professional” sounds like.
Humans can pause, question, and correct. AI just mirrors us back — and sometimes, that reflection isn’t pretty.
It’s designed to agree with you (even when you’re wrong).
AI is built to be helpful — which often means it’s built to please. It will tell you you’re right, that your idea is great, that the half-baked marketing plan you fed it is “genius.” But here’s the catch: it doesn’t know if that plan violates ad policy, misrepresents medical advice, or undermines your brand strategy. It’s not fact-checking you. It’s flattering you.
That’s why human oversight isn’t optional — it’s the guardrail. You need someone who knows when to say, “Wait, that’s not quite right.”
So what’s the takeaway?
Use AI as a tool, not a crutch. Let it speed up your drafts, outline your ideas, even inspire new directions. But the real value still comes from you: the human who knows when to zig instead of zag, who can sense when something just doesn’t sit right, and who actually gives a damn.
AI can write. But only you can tell a story.