7 Leadership Lessons I Learned From Training AI

Leadership Lessons from AI Training: What Teaching Machines Reveals About Leading Teams

Deep Dive: AI For Humans By Humans

I just finished a certification program on AI for leaders at UT Austin, and something kept nagging at me throughout the course.

The principles for training AI models—how you feed them data, handle their mistakes, manage their biases—kept sounding familiar. Really familiar.

Then it hit me: I've been doing this exact thing with teams for 20 years.

The parallels aren't perfect. People aren't algorithms. But the fundamental patterns of how systems learn and grow? Those show up everywhere. Whether you're training an AI model to recognize patterns in data or helping a team member develop strategic thinking, the underlying principles are surprisingly similar.

Here's what teaching machines taught me about leading people.

Information Overload Breaks Everything

When you're training an AI model, there's this phenomenon called overfitting. Flood the model with noisy, unfocused data, and instead of learning useful patterns, it starts memorizing the noise. The model becomes hyper-focused on irrelevant details and loses the ability to generalize to new situations.
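
If you want to see this in miniature, here's a toy sketch in Python. The dataset, noise level, and polynomial degrees are all invented for illustration; the point is just that a model with too much flexibility memorizes the noise in a small, messy dataset and does worse on data it hasn't seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small, noisy training set and a clean held-out set from the same underlying curve.
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

def fit_and_score(degree):
    # Fit a polynomial of the given degree, then measure error on both sets.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for degree in (3, 9):
    train_err, test_err = fit_and_score(degree)
    # The flexible degree-9 fit typically shows lower training error but higher
    # test error: it has memorized the noise instead of the pattern.
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")
```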

I've watched this exact thing happen with teams.

A few years ago, I inherited a product organization that was drowning. We had access to everything: market research, user data, competitive analysis, stakeholder feedback, industry reports. Making decisions was difficult because the company had been trying to account for every data point simultaneously.

The solution wasn't more sophisticated analysis. It was radical simplification.

We identified the three metrics that actually mattered for our next release. Everything else became background noise we'd revisit later. Within two weeks, we had far more clarity around product direction, and that clarity translated directly into velocity. Not because we were working harder, but because we could finally see the pattern through the noise.

AI models need structured, manageable batches of data to learn effectively. So do teams.

When you dump everything on people at once (every priority, every constraint, every stakeholder opinion), you create the human equivalent of overfitting. They optimize for the wrong things because they can't distinguish signal from noise.

One of the most important things you can do as a leader is be an editor, not an amplifier. Curate information; don't just pass it along.

Mistakes Are Just Expensive Data

Here's something I love about AI training: when a model makes a mistake, nobody gets defensive.

The trainers don't blame the algorithm. They analyze the underlying data, identify what went wrong, and adjust the training process. Every error is treated as valuable information about gaps in the training or biases in the approach.

Now contrast that with how most organizations treat mistakes.

I once worked somewhere that had a "blameless postmortem" policy. In theory, we analyzed failures to learn from them. In practice, everyone spent the meeting carefully avoiding mentioning who actually made the decision that caused the problem.

The AI training approach is better: treat errors as data points, not moral failures.

When someone on my team makes a mistake, my first questions are: What data did they have? What constraints were they under? What would I have done with the same information? Usually, I find the mistake was either unavoidable with the information available, or it revealed a gap in our processes that would have caused problems eventually.

The mistake wasn't the problem. The mistake surfaced the problem.

This isn't just nice management theory. It has real business impact. Teams that treat errors as learning opportunities ship faster because they're not paralyzed by fear of mistakes. They take more calculated risks because they know failure is data, not punishment.

One of my teams reduced their average deployment time from two weeks to two days using this approach. Not because we accepted lower quality—our bug rate actually went down. Because people stopped being afraid to ship incremental improvements and learn from real user behavior.

You Can't Rush Development (AI or Human)

AI models go through multiple training iterations. You train the model, evaluate its performance, adjust based on feedback, and train again. Each iteration builds on the last.

Try to skip iterations and you get subpar results. Every time.
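
For the curious, here's roughly what that loop looks like in code. It's a toy linear-regression example I made up for illustration, and the "halve the learning rate when validation stops improving" rule is just one simple way to adjust based on feedback, not a recipe anyone prescribes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: a known linear relationship plus a little noise.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, size=200)

X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

w = np.zeros(3)
lr = 0.1
prev_val_loss = np.inf

for iteration in range(1, 21):
    # Train: take one gradient step on the training set.
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= lr * grad

    # Evaluate: measure loss on held-out data.
    val_loss = np.mean((X_val @ w - y_val) ** 2)

    # Adjust: if validation loss stopped improving, shrink the learning rate.
    if val_loss > prev_val_loss:
        lr *= 0.5
    prev_val_loss = val_loss

    print(f"iteration {iteration:2d}: val loss {val_loss:.4f}, lr {lr:.3f}")
```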

I learned this the hard way managing a product roadmap once. We had a major feature launch deadline. Engineering said they needed three sprint cycles to do it right. Leadership said we had time for one.

We shipped in one cycle. It was technically functional. It was also obviously half-baked, and we spent the next six months fixing problems we would have caught if we'd taken the time to iterate properly.

The total development time ended up being longer than if we'd just done it right the first time. Plus we burned credibility with users and demoralized the team.

Since then, I've gotten religious about iteration cycles. Not because I'm patient (I'm not), but because rushing past necessary iterations is the most expensive way to do anything.

When someone joins my team, we set explicit expectations for their development timeline. First 30 days: learn the systems and context. Next 60 days: contribute with guidance. After 90 days: operate independently in their domain.

This isn't arbitrary. It mirrors how AI models need multiple training cycles to build reliable performance. Each phase builds the foundation for the next.

Teams that trust this process consistently outperform teams that expect immediate expertise. Retention is far better too, because people feel set up for success rather than thrown in the deep end.

The Input Quality Problem

In AI training, there's a brutal truth: garbage in, garbage out.

You can have the most sophisticated model architecture in the world. If you train it on poor quality data—incomplete, biased, or unrepresentative—you get poor quality outputs. The model's potential doesn't matter if the inputs are wrong.
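
A toy illustration, with a synthetic dataset and a mislabeling rate I picked arbitrarily: the same model, trained once on clean labels and once on corrupted ones, evaluated on the same clean test set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification data with a clean train/test split.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Corrupt the training labels: mislabel 40% of one class, simulating biased,
# unrepresentative inputs. The test set stays clean.
rng = np.random.default_rng(0)
flip = (y_train == 1) & (rng.random(len(y_train)) < 0.4)
y_corrupted = np.where(flip, 0, y_train)

for name, labels in [("clean labels", y_train), ("corrupted labels", y_corrupted)]:
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    print(f"{name}: test accuracy {model.score(X_test, y_test):.3f}")
```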

I see this play out constantly in team performance.

A talented designer gets put on a project with unclear requirements, shifting stakeholders, and no decision-making authority. They produce mediocre work. Not because they lack skills, but because the inputs are garbage.

Meanwhile, a less experienced designer gets a well-defined problem, clear constraints, direct access to users, and authority to make decisions. They do brilliant work.

Same company. Same week. Totally different results based entirely on input quality.

This is why I spend so much time on what I call "strategic placement." It's not just matching skills to tasks (though that matters). It's ensuring people have the right inputs to succeed: clear problems, good data, appropriate authority, direct stakeholder access, realistic timelines.

When someone's underperforming, my first question isn't "what's wrong with them?" It's "what's wrong with their inputs?"

Usually the problem isn't the person. It's that we've given them garbage data and expected quality outputs.

Continuous Learning Isn't Optional

AI models experience something called performance decay. If you don't continuously retrain them with new data, they become less effective as conditions change. The model that worked perfectly six months ago gradually loses accuracy because the world moved on.
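
Here's a small simulation of that decay, under a drift pattern I invented for illustration: one model trained once and never touched, one retrained on each new batch of data, both evaluated on the current month's reality.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_batch(shift, n=1000):
    # The true decision boundary rotates as `shift` grows, simulating a world that moves on.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# A model trained once, at "month 0", and never updated.
X0, y0 = make_batch(shift=0.0)
static_model = LogisticRegression(max_iter=1000).fit(X0, y0)

for month, shift in enumerate([0.0, 0.5, 1.0, 1.5, 2.0]):
    X_new, y_new = make_batch(shift)      # fresh training data for this month
    X_eval, y_eval = make_batch(shift)    # this month's evaluation data
    retrained = LogisticRegression(max_iter=1000).fit(X_new, y_new)
    print(f"month {month}: static model {static_model.score(X_eval, y_eval):.2f}, "
          f"retrained {retrained.score(X_eval, y_eval):.2f}")
```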

Your team works exactly the same way.

I watched this happen during the rapid shift to AI tools in 2023. Teams that assumed their existing skills would carry them forward struggled. Teams that built continuous learning into their workflow adapted quickly.

The difference wasn't talent. It was whether learning was treated as ongoing or one-time.

I now build skill development into sprint planning. Not as separate "training time," but as part of regular work. Someone wants to learn a new framework? They lead a small project using it. Someone needs to understand a domain better? They run user research in that area.

Learning happens through doing, and doing generates learning. You can't separate them.

The teams that make this work well don't have higher training budgets. They just treat capability development as part of the job, not something that happens outside work hours or at annual conferences.

Bias Compounds Invisibly

In AI training, bias in your training data leads to biased outputs. And the scary part? The bias often isn't obvious. The model performs well by standard metrics while systematically underperforming for certain scenarios.

You need active detection systems and diverse training data to combat this.
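
A small simulated example of why aggregate metrics hide this (the groups, sizes, and accuracy rates are all made up): overall accuracy looks healthy while the smaller group quietly gets much worse results.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated evaluation results: group A is 90% of the data, group B is 10%.
group = np.where(rng.random(10_000) < 0.9, "A", "B")
correct = np.where(group == "A",
                   rng.random(10_000) < 0.95,   # ~95% accurate on group A
                   rng.random(10_000) < 0.70)   # only ~70% accurate on group B

print(f"overall accuracy: {correct.mean():.1%}")   # looks fine in aggregate
for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: accuracy {correct[mask].mean():.1%} on {mask.sum()} examples")
```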

Same with leadership.

I once ran a team where I consistently gave the most challenging projects to two people. Not consciously—I just naturally thought of them first for high-stakes work. They were reliable, experienced, and I trusted them.

What I didn't notice: I was inadvertently limiting growth opportunities for everyone else. The two people I kept choosing weren't learning anything new (they were just doing harder versions of what they already knew). And the rest of the team wasn't getting chances to step up.

A team member finally pointed this out in a 1:1: "I never get the interesting projects. They always go to Sarah or Mike." (Yes, names changed to protect the innocent.)

She was right. My bias toward reliability was creating a system where only reliable people got reliability-building opportunities. Classic self-fulfilling prophecy.

Now I try to track project assignments explicitly. Not because I don't trust my judgment, but because I don't trust anyone's unconscious biases—including my own.

Like AI bias detection, this requires systematic approaches, not just good intentions.

Systems Drift Without Active Maintenance

In AI development, there's a concept called model drift. The model's performance gradually declines as the real-world conditions it's operating in change. You have to monitor for drift and adjust not just the model but often the entire training system.
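
A minimal sketch of what that monitoring can look like, with an invented data stream and an arbitrary threshold: capture baseline statistics at training time, then flag features in each new batch that have wandered too far from them.

```python
import numpy as np

rng = np.random.default_rng(4)

# Baseline statistics captured when the model was trained.
train_data = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))
baseline_mean = train_data.mean(axis=0)
baseline_std = train_data.std(axis=0)

def check_drift(batch, threshold=3.0):
    # Flag features whose batch mean has drifted from the training baseline,
    # measured in standard errors. The threshold of 3 is an arbitrary choice.
    standard_error = baseline_std / np.sqrt(len(batch))
    z = np.abs(batch.mean(axis=0) - baseline_mean) / standard_error
    return np.flatnonzero(z > threshold)

# A new batch where feature 2 has quietly shifted.
new_batch = rng.normal(loc=[0.0, 0.0, 0.8, 0.0], scale=1.0, size=(500, 4))
print("drifted features:", check_drift(new_batch))
```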

This also happens in team dynamics.

A process that worked great with five people breaks down at ten. A communication style that kept everyone aligned falls apart when the team goes remote. A decision-making framework that fit your product market stops working when your market shifts.

The drift is gradual. You don't notice it day to day. But six months later you realize everything feels harder and you're not sure why.

I learned to schedule quarterly "system audits" where we explicitly ask: What's changed? What's not working as well as it used to? What assumptions are we making that might not be true anymore?

This isn't just reviewing metrics. It's checking whether our entire approach still fits our current reality.

Last quarter, this audit revealed that our weekly all-hands meeting, which had been valuable for years, was now mostly wasted time because team leads were already sharing the same information in their own meetings. We killed it. Got 45 minutes back for everyone, every week.

The meeting wasn't bad. It had just drifted from useful to legacy.

Systems need active maintenance. Teams, processes, communication patterns—they all drift. Regular check-ins catch the drift before it becomes a crisis.

Why Even Bring Up These Similarities?

None of this is groundbreaking leadership advice.

"Don't overwhelm people with information" isn't exactly revolutionary. Neither is "learn from mistakes" or "invest in development."

But seeing these same principles emerge in AI training validates something important: there are universal patterns in how systems—whether human or algorithmic—learn and grow.

The difference is that AI operates on pure data and algorithms. Humans bring creativity, emotion, context, and relationships. But the fundamental dynamics of learning, growth, and performance? Those patterns repeat.

Understanding this has made me a better leader. Not because I treat people like machines, but because I recognize when I'm fighting against how systems naturally work.

When a team member is struggling, I ask: Are they overfitting (too much information)? Do they need more iterations (time to develop)? Are their inputs low quality (unclear requirements)? Am I seeing bias in how I'm evaluating them? Has the system drifted and they're playing by outdated rules?

These questions reveal different problems than "why aren't they performing better?"

And they suggest different solutions.

3 Ways To Build Better

Track where you're creating information overload, then ruthlessly simplify. Look at what you're asking your team to optimize for. If it's more than three things, you're creating the conditions for overfitting. Identify the signals that actually matter; make everything else background noise you'll revisit later. Your team will move faster immediately.

Build iteration cycles into your expectations and timelines. Stop treating development (of people, products, or processes) as something that should work perfectly on the first try. Explicitly plan for multiple cycles of learning and refinement. The time you "save" by skipping iterations almost always gets spent fixing problems later, with interest.

Create systematic checks for bias and drift in your team systems. Schedule quarterly reviews that explicitly ask what's changed and what's not working as well anymore. Track project assignments, promotion decisions, and development opportunities to surface patterns you might not see day-to-day. Active monitoring catches problems while they're still small.

2 Questions That Matter

"Am I treating this performance issue as a people problem or a system problem?" Most of the time, underperformance comes from poor inputs—unclear requirements, mismatched skills to tasks, insufficient context, lack of decision authority. Before you conclude someone isn't capable, check whether you've given them quality inputs. The answer is usually more revealing than you expect.

"Which of my team processes are running on outdated assumptions?" Systems drift gradually as conditions change. The meeting that was essential last year might be wasteful now. The communication pattern that kept everyone aligned might be creating bottlenecks. Regular audits catch drift before it becomes crisis, but only if you're willing to kill things that used to work.

1 Big Idea

The same principles that make AI models learn effectively make teams perform better: manageable information loads, treating errors as data, allowing time for iteration, ensuring quality inputs, continuous learning, active bias management, and monitoring for system drift.

This isn't because people are like machines. It's because these are fundamental patterns in how any complex system learns and grows.

The leaders who recognize these patterns don't fight against how systems naturally work. They create conditions that align with those patterns, making growth and performance easier rather than harder.

That's not new leadership wisdom. But seeing it work the same way in AI training as in team development suggests we might finally understand why these principles matter so much.