Kenzie Notes: On AI amplifiers, thinking partnerships, and why being faster at being wrong isn't progress
The teams winning with AI aren't the ones with the fanciest tools—they're the ones who already knew how to think critically.

The Kenzie Note
Beyond the Hype: The Truth About AI Success
I've been watching something fascinating happen across the teams I work with. The same organizations that once prided themselves on thoughtful analysis are now celebrating how quickly they can generate reports. The same leaders who used to dig deep into assumptions are now impressed by AI outputs that "look professional." The same teams that built their reputations on careful reasoning are now moving fast and breaking things—just not the things they intended to break.
I've written before that it's not the tool, it's the approach. Good learners use AI as a thinking partner while others use it to avoid thinking altogether. I've also shared how the most successful teams organize AI into three distinct layers rather than treating it only as a productivity tool that writes faster.
But what creates that difference in approach? What separates teams that build sophisticated AI capabilities from those that just stay busy?
Here's what I keep seeing across teams implementing AI: they're getting really good at generating impressive outputs while getting worse at generating meaningful outcomes. They're faster at collecting information but not better at understanding it. They've confused being productive with being effective.
And it's costing them a lot.
The Real Problem: AI Amplifies Whatever You Bring To It
Most discussions about AI focus on the technology—which models to use, how to write better prompts, what features matter. But after working with dozens of teams implementing AI, I've realized that's missing the point entirely.
The real challenge isn't technical. It's intellectual.
AI doesn't make you smarter by default. It amplifies whatever thinking approach you already have. If you're intellectually passive, AI will make you more passive. If you take shortcuts in your reasoning, AI will enable bigger shortcuts. If you don't question sources, AI will flood you with unvetted information.
But here's the opportunity: if you approach information with curiosity, skepticism, and analytical rigor, AI becomes an incredibly powerful thinking partner.
The difference comes down to three foundational literacies that, in the age of AI, have become more critical than ever.
The Three Literacies That Actually Matter
I've identified three capabilities that separate those who build sophisticated AI systems from those who just stay busy:
Critical Thinking Literacy: Understanding cognitive biases, logical fallacies, and how your reasoning can lead you astray. This isn't academic knowledge—it's the practical ability to question assumptions, spot flawed logic, and recognize when you're being intellectually lazy.
Media Literacy: Evaluating sources, recognizing bias, understanding context, and distinguishing between correlation and causation. In a world where AI can generate convincing content at scale, these skills separate signal from noise.
Data Literacy: Interpreting statistics, understanding methodology, recognizing what data actually represents versus what it's claimed to represent. As we're flooded with AI-generated analysis and visualizations, this becomes essential for avoiding expensive mistakes.
These aren't new skills. They existed long before ChatGPT. But AI has made them exponentially more important because AI amplifies both good thinking and bad thinking equally.
How This Plays Out In Practice
Consider two teams, both implementing AI for competitive analysis.
Team A got excited about AI's ability to instantly generate market research reports. They'd feed it competitor data and get back polished analyses with charts, recommendations, and action items. The output looked incredibly professional. The problem? They weren't questioning the methodology, checking the sources, or validating the conclusions. Three months later, they'd made strategic decisions based on AI-generated insights that were impressively wrong.
Team B took a different approach. They used AI to gather initial data, but then applied critical thinking to interrogate the results. They asked: "What assumptions is this analysis making? What's missing from this data? What biases might be baked in?" They treated AI like a research assistant, not an oracle.
The difference in outcomes was striking. Team A moved faster initially but ended up with analysis paralysis when their AI-driven insights contradicted each other. Team B moved more thoughtfully but built genuine competitive advantages because they understood what their data actually meant.
The Literacies In Action
Here's how these literacies manifest when you actually use AI effectively:
Critical Thinking in Practice: Instead of accepting AI outputs at face value, probe for logical consistency. "This recommendation assumes X, but is that actually true in our context?" Use AI to stress-test your thinking, not replace it.
Media Literacy in Practice: Evaluate AI-generated content sources, understand the training data limitations, and recognize when AI is confidently presenting incomplete information. Know the difference between "AI found this information" and "this information is accurate."
Data Literacy in Practice: Understand that correlation isn't causation, even when AI presents it convincingly. Know when sample sizes matter, when statistical significance is meaningful, and when impressive-looking visualizations are actually meaningless.
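To make the sample-size point concrete, here's a minimal illustrative sketch (Python, hypothetical and not tied to any team mentioned here) showing how tiny samples routinely produce correlations that look strong between variables that are, by construction, completely unrelated:

```python
import random

# Illustrative only: with tiny samples, two unrelated random series
# frequently show correlations that look "strong".
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
trials = 10_000
strong = 0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(5)]  # n = 5: a small sample
    ys = [random.gauss(0, 1) for _ in range(5)]  # independent of xs by construction
    if abs(pearson(xs, ys)) > 0.7:               # would look like a strong relationship
        strong += 1

print(f"{strong / trials:.1%} of unrelated 5-point samples show |r| > 0.7")
# Typically prints around 19%: roughly one in five unrelated pairs
# looks strongly correlated purely by chance at this sample size.
```

That's the trap data literacy protects against: the number looks decisive, but the sample can't support the conclusion.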
Common Obstacles (And How To Overcome Them)
There are three patterns that prevent teams from developing these literacies:
The "Fast Equals Better" Trap: Teams equate speed with progress. They celebrate generating 50 AI reports instead of asking whether those reports led to better decisions. The fix: measure outcomes, not outputs. Track decisions made and results achieved, not content produced.
The "AI Knows Best" Fallacy: People treat AI like an infallible expert rather than a powerful but flawed tool. They stop questioning because the output sounds authoritative. The fix: require human validation of AI conclusions, especially for important decisions.
The "Too Smart To Question" Problem: Successful people often resist acknowledging their knowledge gaps. They'd rather look confident using AI incorrectly than admit they need to strengthen basic literacies. The fix: frame it as upgrading capabilities, not remedying deficiencies.
Why This Creates Competitive Advantage
Teams that strengthen these literacies don't just use AI better—they develop what I call an "intellectual multiplication effect." They:
Learn faster because they're actually understanding information instead of just collecting it
Make better decisions because they can distinguish between compelling outputs and reliable insights
Waste less time chasing AI-generated dead ends that look impressive but lead nowhere
Build institutional knowledge instead of AI dependency
I watched a long-time colleague who is an expert marketer use this approach to completely outmaneuver competitors with bigger AI and marketing budgets. While competitors were drowning in AI-generated reports and flashy output, she was asking better questions and getting actionable insights. She didn't win because she had better AI tools—she won because she had better thinking processes.
The Implementation Reality
Want to start building these literacies? Here's what actually works:
Before accepting any AI output, ask three questions:
What assumptions is this making?
What sources is this drawing from?
What would I need to verify to trust this conclusion?
That's it. Don't overhaul your entire process. Just consistently ask these three questions. I've seen this simple practice transform how teams interact with AI within weeks.
When my program team plans our event schedule, they typically brainstorm the initial concept and title, then use AI to help flesh out event descriptions. They quickly discovered that AI would include details that weren't true and frame events in ways that weren't particularly attractive or engaging. Instead of just accepting those outputs, they started using AI to generate initial drafts, then asking: "What assumptions is this making about our event? What details did it add that we didn't provide? How would we actually describe this to make it compelling?" This simple questioning process drew on skills they already had before AI and produced better outcomes than fixing bad descriptions after the fact.
The Bottom Line
In a world where AI can generate endless content, analysis, and answers, the scarcest resource isn't information—it's judgment.
These three literacies are what develop that judgment. They're what separate teams that use AI as a crutch from teams that use it as a catalyst for deeper thinking.
The future belongs not to those who can use AI tools fastest, but to those who can think most clearly about what those tools produce. And that starts with strengthening capabilities that have nothing to do with AI and everything to do with being human.
3 Ways To Build Better
I
Start With "Source, Please": Before acting on any AI-generated insight, require one team member to verify the underlying sources and methodology. This simple step catches 80% of AI hallucinations and builds evaluation habits across your team. Takes 10 minutes, saves weeks of misdirected effort.
II
The "Bias Assumption" Practice: When reviewing AI outputs, explicitly ask "What biases might be baked into this analysis?" AI inherits biases from training data, and acknowledging this upfront prevents costly strategic mistakes. Turn this into a standard agenda item for any AI-assisted decision meeting.
III
Teach "Question Stacking": Instead of stopping at AI's first answer, train your team to ask follow-up questions that probe deeper. "What evidence supports this? What contradicts it? What's missing?" This transforms AI from an answer machine into a thinking partner that helps you explore ideas more thoroughly.
2 Questions That Matter
I
Are we using AI to avoid thinking or to think better? - This cuts to the heart of whether your AI implementation is creating genuine capability or sophisticated procrastination. Honest assessment of this question reveals whether you're building competitive advantage or just impressive-looking activity.
II
What decision would we make differently if we couldn't use AI for this analysis? - This question forces you to identify what unique value AI is actually providing versus what you could figure out through conventional means. It separates AI-enhanced insights from AI-dependent confusion.
1 Big Idea
The most successful AI implementations aren't about having better prompts or fancier tools—they're about bringing stronger thinking to whatever AI produces. In a world of infinite AI-generated content, human judgment becomes the ultimate competitive advantage.