I sat in an executive meeting last quarter where the answer arrived before the question did.
The CEO opened with two slides. The first showed AI tools the company had deployed across six departments. The second was a headcount projection. "If we’re getting this much productivity," he said, "how many people can we cut by Q3?"
Nobody in the room had measured the productivity. Nobody had asked what the tools were actually changing. The conversation jumped from adoption to reduction in under ten minutes. I’ve seen this meeting three times now, at three different companies, and it plays out the same way every time.
Why are companies cutting headcount before measuring impact?
They’re not alone. Fiverr’s CEO told employees to "automate 100 percent" of their work with AI, then cut 250 people a few months later to become an "AI-first company." Klarna replaced 700 customer service employees with an AI chatbot that handled two-thirds of all queries. Quality dropped. Customers revolted. The CEO later admitted they "went too far" and started quietly rehiring humans. In 2025, companies cut roughly 55,000 jobs in layoffs they attributed directly to AI, more than three times the total of the preceding two years.
The pattern is the same everywhere. The solution shows up before the problem is understood.
A recent study from the National Bureau of Economic Research surveyed nearly 6,000 executives across the US, UK, Germany, and Australia. About 70 percent of their firms are using AI. Nine in ten report no meaningful change in employment or productivity. The executives themselves spend an average of 1.5 hours a week using AI, less than the workers they manage.
That’s the gap I keep running into. Not between companies that have AI and companies that don’t. Between companies that adopted AI and companies where AI actually changed something.
What happens when the bottleneck moves up the stack?
Here’s what most people miss. AI has collapsed the cost of execution. Writing code, generating content, summarizing research, building prototypes: all of it is faster and cheaper than it was two years ago. That part is real. But when execution gets cheap, it stops being the bottleneck. The bottleneck moves up the stack, to the decisions, the context, and the organizational thinking that execution depends on.
Most companies haven’t noticed the shift. They’re still optimizing for speed at the task level. They’re still treating AI as a tool. Helpful. Fast. Impressive in demos. But still sitting on the edges of the business.
It writes. It summarizes. It assists.
It doesn’t decide. It doesn’t orchestrate. It doesn’t change how work flows through the company.
Tools improve tasks. Systems change outcomes.
What is the electric motor mistake in AI?
Early factories made the same mistake with electricity. They replaced steam engines with electric motors and expected the same work to get faster. Nothing happened. Only when they redesigned the factory floor, rethinking how materials moved, how workers collaborated, how decisions got made, did productivity jump.
AI is at that same inflection point. And the companies cutting headcount before redesigning their operations are making the electric motor mistake all over again.
I’ve spent the last two years building Revolv around a single frustration: the questions most AI strategies never bother to ask. Not questions about models or vendors. Questions about the business. Where do decisions slow down? Where does context get lost? Where do relationships quietly determine outcomes, but never show up in any system?
Here’s an example. A CFO is about to approve a vendor switch. The spreadsheet says yes: better pricing, stronger SLA, cleaner integration. What the spreadsheet doesn’t say is that the incumbent’s CEO just joined the board of their biggest client. One relationship, invisible to every system in the building, changes the entire decision. That kind of context lives in people, not dashboards. No CRM captures it. No AI tool surfaces it.
Who’s getting it right?
McKinsey’s 2025 State of AI report found that high-performing companies are nearly three times as likely to have fundamentally redesigned workflows around AI, not just deployed it. The difference isn’t better models. It’s better questions about how work actually flows.
One example: a North American fleet services company stopped treating AI as a reporting tool and rebuilt its entire repair order workflow around a custom model that reads the context of every job. The result was a 90% reduction in error detection time, with over 30% of orders resolving automatically. They didn’t add AI to the process. They redesigned the process so AI could operate inside it.
Three questions before your next AI investment
Before approving another AI tool, every leadership team should answer three questions:
- Have we measured what we’re trying to improve? Not adoption metrics. Outcome metrics. If you can’t name the decision, cycle time, or cost you’re targeting, you’re not ready.
- Where do decisions actually slow down? Map the real bottleneck. It’s rarely execution. It’s usually context: the relationship a CRM doesn’t capture, the institutional knowledge that lives in one person’s head, the strategic alignment that never made it past the slide deck.
- Are we redesigning the work, or just accelerating it? If the workflow stays the same and AI makes it faster, you’ve bought an electric motor for a steam-era factory. The gains come from rethinking how materials move, how people collaborate, how decisions get made.
The executives in that NBER study expect impact to come. They’re right. But not because the models will get better. Because the companies that figure this out will stop optimizing tasks and start redesigning how their organizations think.
AI makes execution cheap. Systems thinking makes it count.
We built Revolv to help leaders answer these questions before the next budget cycle.