The Friction Is Gone: Who Are You Without Your Tools?
Who were you before the tools? What AI is really exposing about how we work and who we've become

Yesterday I published a newsletter issue about building this website. On the surface it was about what became possible when AI removed the execution bottleneck I had grown used to. I talked about how two and a half weeks of work, alongside a full-time job, produced a publishing platform and a complete brand system that would have taken a traditional team months. I ended it with a question: are you worth amplifying?
But that article isn’t really the story. It’s proof of an idea. Something that has been forming in my head for the better part of a year, in bits and pieces, as I’ve watched AI evolve and watched the people around it react. I’ve seen it in organizations I’ve been part of, in conversations with colleagues, and in how smart, capable people talk about these tools in ways that reveal something they’re not quite saying out loud.
The website piece was the concrete example. This is the argument underneath it.
The Artifact Economy
Since the beginning of the web and SaaS industries back in the late 90s, we’ve built something that nobody quite intended to build. Call it the artifact economy.
It happened gradually, the way most systemic shifts do. Organizations needed ways to measure work, measure expertise, and coordinate across growing teams. So we created deliverables. Wireframes and decks. Specs and briefs. Roadmaps and reports. Repos and Scrum boards. Frameworks and audits and strategy documents that took weeks to produce and hours to present. These things weren’t the goal; they were supposed to be the path to the goal. But over time, in organization after organization, something quietly flipped. The artifact stopped being the means and became the measure.
You can see how it happened. Outcomes are slow. They’re diffuse. They’re hard to attribute to any one person or any one team. But a deck? A deck is done or it isn’t. A wireframe ships or it doesn’t. A report lands in inboxes on Friday with your name on it. The artifact is tangible in a way that outcomes rarely are. So organizations optimized for what they could see and count, and careers followed the same logic. We built expertise around the production of things. We built status around the ownership of things. We built entire professional identities around being the person who makes the thing.
None of this was wrong per se. It was rational, given the constraints of scale. When you’re coordinating hundreds of people across complex systems, you need something to point to. The artifact is something you can point to.
The problem is what happens when the artifact becomes the point.
When that shift happens (and it has happened, quietly, across many organizations), the means start to matter more than the end. The goal is still nominally important, but it gets filtered through each person’s role. An engineer frames the goal as an engineering problem. A designer frames it as a design problem. A strategist frames it as a strategy problem. Everyone is working toward the outcome, in theory. But in practice, they’re optimizing for the version of the outcome that validates the kind of work they do. The system as a whole gets lost.
This is the world AI walked into.
And here is where a quote from Upton Sinclair becomes uncomfortably relevant. “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
We talk about AI resistance like it’s a failure of imagination. People who don’t see the opportunity, who are slow to adapt, who need better training and more thoughtful change management. That framing is too charitable. Some resistance is genuine uncertainty. Yes, the technology is moving fast and the stakes feel high and that’s real. But a significant portion of what looks like confusion is something else entirely. It’s protection. Sophisticated, reasonable-sounding protection from people who have built careers, and in some cases entire organizational structures, around artifacts that AI can now produce in minutes.
These people are not confused. They understand exactly what AI is doing. That’s precisely why they’re resistant. And you cannot solve that with a training program, because it isn’t a technology problem. It’s a political one.
The Friction Revelation
If we stay focused on the fact that “AI speeds things up”, we’re keeping the real conversation buried.
What makes AI structurally different from every other productivity tool we’ve adopted is that it exposes who is producing outcomes and who is managing the appearance of producing them. For the first time, the distance between those two things is visible — and so is who was controlling it.
Friction has always been a hiding place.
When a process takes six weeks, nobody can easily tell you which two of those weeks contained the actual work. When a deliverable requires three rounds of review across four departments, the value of any single contributor gets lost in the coordination. Complexity creates cover. And in complex organizations, a surprising amount of what looks like expertise is actually the management of that complexity. Knowing how to navigate the process, who to cc, when to escalate, how to make something move: those are real skills. But they are not the same as knowing how to solve the underlying problem.
AI is collapsing that distance. Not because it’s smarter than the people in those processes, but because it removes enough of the procedural overhead that the actual thinking, or the absence of it, becomes more visible.
This isn’t a new pattern. The same fight happens every time a disruptive technology enters a field. When George Lucas decided to use computers to make Star Wars, the veteran effects artists pushed back hard. They weren’t wrong that their craft was real. They were wrong because they judged the technology by what it could do at that moment rather than what it was becoming. The people who ended up shaping the future of film weren’t the established experts. They were outsiders who didn’t know what they weren’t supposed to be able to do.
But here’s what’s different now, and why this moment demands more urgency than that one did.
When computers entered filmmaking, the technology had a ceiling. It was a better tool — faster, more flexible, capable of things practical effects couldn’t do. But the computer didn’t get smarter over time. It didn’t start developing opinions about the scene. Human judgment still sat above it, and the veterans who eventually engaged found that their craft still mattered because there was a layer the tool couldn’t reach.
The technology we’re dealing with now is itself a player. It isn’t a better hammer. It’s something on a curve, learning, improving, and expanding into the judgment layer that was always supposed to be safely human. Which means the window for figuring out how to engage is closing while the technology moves. The veterans who wait to find their footing are waiting while the ground shifts beneath them.
And I am seeing the response to that in real time. When I step back and observe how people are actually talking about AI across industries, a pattern emerges. CEOs excited that they can move without waiting on engineers. Editors who don’t need to wait on writers. Developers who don’t need to wait on product managers. Designers who don’t need to wait on developers. Everyone is celebrating the removal of the friction caused by someone else’s role.
Nobody is asking whether their own role is the one that needs to evolve.
That’s not a coincidence. It’s genuinely easier to see how AI disrupts the person adjacent to you than to see how it disrupts you — especially when your identity, your status, and your compensation are all organized around what you specifically do. The question of how you fit into the system as a whole, what the system is actually trying to produce and whether your piece of it is still necessary at the same scale, that question is much harder to sit with.
So people don’t sit with it. Instead they adopt AI just enough to remove the friction points that annoy them, the ones caused by other people, and never enough to challenge the parts of the system that feel most essentially theirs. Which is rational, and human, and also exactly how you end up optimizing a node while the whole system reorganizes around you.
The threat was never from people who reject AI outright. Those people are visible and easy to address. The more consequential friction comes from people who engage with AI enthusiastically, selectively, in ways that happen to preserve everything they’ve already built.
The pattern is the same as it’s always been. The stakes are now categorically different.
The Wrong Question and the Three Right Ones
There’s a question underneath almost every conversation about AI I’ve had in the last three years. It rarely gets stated directly, but I can hear it in the way people talk about tools and workflows and what they’re trying to get done.
The question is: how can AI make things easier or harder for me?
On the surface it seems perfectly reasonable and practical, but it really is the wrong question.
Not because using AI to work more efficiently is wrong. But because of what the question assumes without saying it. It assumes the job stays the same and the tools change around it. It assumes the system you’re operating inside is fixed, and the only variable is how effectively you move within it. It treats your role, your function, your way of contributing as given — and asks only how to do it faster or with less friction.
That’s a closed loop. It starts and ends with you.
Systems don’t work that way. When you change the conditions inside a system, the system itself reorganizes. Roles shift. Boundaries move. Value flows differently. What counted as expertise yesterday gets reclassified. What required three people might require one, or none, or someone entirely different with an entirely different orientation. The system doesn’t wait for everyone to agree that it’s reorganizing. It just does.
Asking how to make your job easier with AI is like asking how to get better at rowing while the river is changing course. The question isn’t wrong because it’s lazy. It’s wrong because it’s too small. It optimizes the node while the whole reorganizes around it.
And here’s the harder thing to sit with: most of us don’t question the frame because the frame is where we live. Your expertise is real. Your craft is real. The value you’ve created is real. That makes it genuinely difficult to step back far enough to ask whether the frame itself is the right one — whether what you’ve been optimizing for is still the thing that matters. That’s not a failure of intelligence. It’s just what it feels like to be inside a system rather than looking at it from the outside.
But that’s exactly the shift this moment requires.
So instead of asking how to make your job easier, the questions worth sitting with are bigger and more honest:
What jobs become possible now? Not your job with better tools. Entirely new orientations toward work that didn’t exist or weren’t viable before — because the intelligence constraint has been removed and what’s left is judgment, connection, and the capacity to see the whole system at once.
What companies become possible now? Not existing businesses with AI features bolted on. New kinds of organizations that the old economics made impossible — where intelligence is abundant and the product is something entirely different from what we’ve been building.
What kind of person thrives when intelligence is abundant? Not the fastest adopter or the most technically fluent. Something older and harder to develop than either of those things.
Each of these questions operates at a different level of the system: the person, the organization, and the market. And unlike the old question, none of them assume the system stays where it is.
What Jobs Become Possible Now
The easiest version of this question is also the least useful one. People want to know which jobs AI will eliminate and which ones are safe. That framing is too static. It assumes the job categories we have now are the right unit of analysis — that the future is just the present with some roles removed and some roles remaining.
That’s not how systems reorganize.
What’s actually happening is more fundamental. When intelligence stops being the scarce resource, the nature of valuable work shifts. Not just the tools used to do it. The work itself. And that shift creates two things simultaneously: it exposes roles that were built around production and process, and it opens space for something that didn’t have room to exist before.
The roles that get exposed first are the ones where the primary output was the artifact. Where value was measured in deliverables produced, documents completed, assets shipped. Not because the people in those roles lacked ability — many of them are genuinely talented — but because the artifact was the frame, and the frame is dissolving. When AI can produce the artifact in minutes, the question of what you actually bring to the work becomes impossible to avoid.
But here’s the other side of that, and it’s the more important one.
The jobs that become possible now are built around something AI cannot replicate at scale: judgment that spans domains. The ability to see the whole system rather than optimize a single node within it. The capacity to ask whether the question itself is the right one before answering it. The willingness to work across the boundaries that specialists have traditionally maintained — not because you’re a generalist who knows a little about everything, but because you’ve developed the underlying capabilities that make learning any new domain faster and more transferable.
These aren’t soft skills. They aren’t personality traits. They’re something more foundational — what I think of as metaskills. Six fundamental human capabilities that sit underneath everything else you know how to do: Exploring, Creating, Feeling, Imagining, Innovating, and Adapting. They don’t have a shelf life the way hard skills do. They don’t get disrupted when the tools change. If anything, they become more valuable as the tools get better, because they’re the layer AI is furthest from reaching.
The person who thrives in the reorganized system isn’t the one who knew the most about a specific domain. It’s the one who has been practicing the underlying capabilities all along. Someone who treats curiosity as a discipline, who knows how to break a frame and rebuild it, who can move across domains without losing their orientation. That person doesn’t just survive the shift. They’re the one the new kinds of jobs get built around.
If you want to go deeper on what these six capabilities actually look like in practice, I’ve mapped them out in full in the Metaskills framework. What matters here is the principle: when intelligence is abundant, the scarce thing is the judgment that knows what to do with it. And that judgment isn’t built from hard skills alone. It’s built from the slower, harder work of developing the capabilities underneath them.
That’s what the new jobs are made of. And it’s also, not coincidentally, what new companies will be built on.
What Companies Become Possible Now
When electricity became widely available, the first thing most businesses did was use it to do what they were already doing. Factories replaced steam power with electric motors. They kept the same layouts, the same workflows, the same organizational logic. They just swapped the power source. And for a while, that looked like transformation.
The real transformation came later, and it came from a different question. Not “how do we electrify what we already do?” but “what becomes possible now that electricity exists?” The answer to that question produced entirely new industries, entirely new business models, and entirely new kinds of companies that couldn’t have existed under the old constraints. The businesses that electrified their existing operations got more efficient. The businesses built around what electricity made possible got wealthy.
We are at the same inflection point now, and most organizations are making the same first mistake.
The conversation in almost every boardroom and product meeting right now is about AI features. How do we add AI to our product? How do we use AI to reduce costs? How do we move faster with AI than our competitors? These are reasonable questions. They are also the wrong level of thinking. They assume the business model is fixed and the tools are the variable. They’re the equivalent of electrifying the factory without asking whether the factory is the right thing to be building at all.
The more important question, the one almost nobody is asking, is this: what kinds of companies only become possible when intelligence is effectively unlimited?
Not companies that use AI. Companies where the entire model depends on AI in the way that a power grid depends on electricity. Where the product isn’t software with intelligent features but something categorically different: judgment, delivered at scale, through systems that couldn’t exist before now.
Think about what was always true about the most valuable professional services. A great advisor — whether in law, medicine, finance, or strategy — does something that scales terribly. They bring accumulated expertise to your specific situation. They ask the questions you didn’t know to ask. They pattern-match across domains in ways that take decades to develop and can only be applied to one client at a time. That’s why great advisory relationships are expensive and rare. The constraint was always human attention.
Remove that constraint and the question becomes: what does the advisory firm look like when it can operate at any scale without losing the quality of the judgment? What does the fractional executive team look like when it’s available to a small business owner who could never afford one? What does the strategic partner look like when it compounds its knowledge of your specific situation over time rather than starting fresh every engagement?
These aren’t software products. They’re new kinds of companies. Organizations where the product is the intelligence itself, applied with genuine judgment to real problems, at a scale the old economics made impossible.
This is what I call the agentic business. Not a company that builds AI tools. A company that is an intelligent system — where agents handle the operational layer and the humans inside it focus entirely on the judgment layer. One person with deep expertise and genuine taste running something that previously required fifty. Not a lifestyle business. An actual enterprise, organized around what becomes possible when coordination overhead collapses and intelligence stops being scarce.
The businesses that will define the next decade aren’t the ones adding AI to what they already do. They’re the ones being founded right now around what only becomes possible because of it. In the same way, we don’t remember the electric motor as the legacy of electrification; we remember the industries it made possible.
The question for every leader and organization right now isn’t “how do we adopt AI?” It’s “what are we now capable of becoming that we couldn’t have been before?” Those are not the same question. And the distance between them is where the real opportunity lives.
What Kind of Person Thrives When Intelligence Is Abundant
This is the question that cuts closest. The previous two questions are about structures — what work looks like, what organizations look like. This one is about you. And it’s harder to sit with because it doesn’t have a comfortable answer for everyone.
The easy version of this question produces the wrong answer. People assume the person who thrives is the earliest adopter. The most technically fluent. The one who knows the most tools and moves the fastest. That person has an advantage right now, in the same way that knowing how to type fast was an advantage before voice recognition. It’s real, and it’s temporary, and it’s not the thing.
The person who thrives when intelligence is abundant isn’t defined by their relationship to the tools. They’re defined by what they brought before the tools arrived.
The article I wrote yesterday looks at this from the inside: my experience of what it felt like to build something substantial in a fraction of the time it should have taken, and what that experience actually revealed. The short version is that the AI moved fast because I knew where I was going. The judgment about what to ask for, what to reject, and when something was wrong: that couldn’t be delegated. If you haven’t read that piece, it’s worth the fifteen minutes. What I want to do here is pull on the thread it left hanging.
Because the pattern that experience revealed runs deeper than any single project.
The people who are most exposed right now aren’t the ones who never engaged with AI. They’re the ones who built careers on something that looked like depth but was really proximity — to information, to process, to the artifact itself. When AI collapses the distance between intention and output, what’s left is the actual thinking. And for a lot of people, that exposure is the real source of the resistance we talked about in the beginning.
Psychologist Prescott Lecky called the underlying mechanism self-consistency — the idea that people will rationalize, resist, and reorganize their entire worldview before they’ll update their identity. The Sinclair dynamic explains the economic motivation. Lecky explains the psychological one. Together they’re more complete than either alone. The resistance isn’t just “I’ll lose my job.” It’s “if this is true, then who have I been?” That’s a much harder thing to update than a skill set.
The people who thrive are the ones who did the slow work. Not the glamorous work. The unglamorous, invisible, doesn’t-show-up-in-a-sprint-review work of building genuine judgment. Reading deeply. Developing real points of view. Paying attention to why things work rather than just shipping the thing that worked. Accumulating the patterns that make you hard to replace not because you move fast but because you think well.
Farmers think in seasons. They don’t ask how to adopt better tools faster. They ask what they’re growing and whether the soil is actually ready. The fast-followers built on appearance. The slow workers built on soil. When the tools change — and they will keep changing, faster than most people are ready for — the soil is what remains.
This isn’t a comfortable message. It’s not meant to be. But it is an honest one. And it comes with something important attached to it: it’s not too late to start. The slow work doesn’t require you to have started twenty years ago. It requires you to start now, and to take it seriously, and to stop measuring your value by the artifacts you produce and start measuring it by the judgment you develop.
The question is the same one I ended with when I wrote about building the website. Ask it before the environment asks it for you: are you worth amplifying?
The Reckoning
The question this article has been building toward isn’t really about AI. It’s about what you’ve been building while the tools were changing around you. Whether the value you bring is located in the artifacts you produce or in the judgment underneath them. Whether your position in the system depends on the system staying where it is or whether it holds regardless of how the system reorganizes.
Those aren’t comfortable questions. They’re not meant to be. But they are the honest ones. And the honest ones are the only ones worth asking right now, because the environment is going to ask them regardless. The only variable is whether you get there first.
This isn’t a warning about AI replacing people. That framing is too simple and too convenient — it lets you locate the threat outside yourself, in the technology, rather than in the assumptions you’ve been making about where your value actually lives. The real reckoning isn’t with the tools. It’s with the story you’ve been telling yourself about what you bring.
For some people, that reckoning will be clarifying. They’ve been doing the slow work all along — building genuine judgment, developing real depth, practicing the underlying capabilities that don’t have a shelf life. For those people, this moment isn’t a threat. It’s the moment the work they’ve always been doing finally becomes as visible as the artifacts everyone else was producing.
For others, it will be harder. Not because they’re less capable, but because they’ve been optimizing for the wrong thing long enough that it feels like the right thing. The artifact became the point. The process became the product. The role became the identity. Unwinding that is genuinely difficult, and it requires something most professional environments don’t reward: the willingness to question the frame you’ve been operating inside.
But that willingness is exactly what this moment requires. And it’s available to anyone who decides to exercise it.
The system is reorganizing. The window is open. The only real choice is whether you’re doing the evolving consciously or having it done to you.
What are you actually growing? And is the soil ready?