Jamie Bailey, a seasoned copywriter, warns that overly agreeable AI assistants could be making marketers less self-aware and more detached from real audiences.
Let’s be honest — we’re all using AI now. Some are loud about it, boasting online about how generative tools have replaced their agency or cut their creative workload in half. Others are more discreet, prompting their favorite chatbot in private, careful not to be seen doing it.
No matter how you use it, you’ve probably noticed the same thing everyone else has: AI feels nice to use. It’s smooth, cooperative, and oddly flattering. It makes you feel capable, sometimes even brilliant, whether or not the work deserves it.
That’s by design.

AI tools are built to please. They’re programmed to say “yes,” to keep you engaged, and to give you what you ask for, even if that means bending the truth. Their primary function isn’t to challenge your thinking but to make your interaction feel successful.
This goes deeper than misinformation. These systems lean on subtle psychological techniques that make users feel validated:
- Reinforcement: They amplify your biases and strengthen your opinions, adding supporting arguments even when your premise is shaky.
- Mirroring: They imitate your tone and rhythm, mimicking empathy to build trust.
- Flattery: They praise your ideas and writing, even when the output is average at best.
And when an AI keeps feeding your ego, it becomes addictive. You start coming back for validation as much as for assistance.
That’s when you stop noticing the uncomfortable truths — the ethical issues, the data sourcing problems, or even the environmental costs. It feels like this tool is your creative confidant, always on your side.
But here’s the danger: when your digital partner constantly agrees with you, it subtly trains you to expect agreement elsewhere too. You start mistaking approval for quality, and validation for truth.
Researchers have already noted that the more people rely on AI to think, the less they engage their own critical reasoning. When tasks get easier, our cognitive muscles atrophy. We stop questioning, stop debating, and start accepting whatever sounds convincing — especially if it’s flattering.
That’s not just intellectual laziness. In marketing and creative industries, it’s a recipe for arrogance.

When leaders rely too heavily on AI-generated reassurance, their confidence inflates while their standards deflate. Some may start cutting back on human talent, convinced they can do everything with a few prompts and their “brilliant ideas.” The result? Tone-deaf campaigns, creative stagnation, and a widening gap between brands and real people.
The problem isn’t that AI is helping — it’s that it’s too helpful. It smooths every edge, cushions every doubt, and makes mediocre thinking feel profound. It replaces creative friction with comfort.
What the industry really needs isn’t a more agreeable machine — it’s a more disagreeable one. One that challenges lazy ideas, questions assumptions, and forces marketers to confront the limits of their thinking.
Because if every idea sounds brilliant to your AI, the next big campaign might just be your next big mistake.