Every quarter, another company restarts its AI strategy from scratch. New vendor. New use case list. New team. Same lessons relearned.
McKinsey just published data that explains why.
They studied 20 companies that actually transformed with AI. The results: 20 percent EBITDA uplift. Three dollars of incremental EBITDA for every one dollar invested. Breakeven in one to two years.
Those are not pilot results. That is structural repricing of how a business operates.
But here is the part most executives will read past: the advantage did not come from the technology. McKinsey says it directly. The tools are broadly available. The gap between the companies generating those returns and everyone else is not what AI they use. It is how fast they apply it to real problems. How they organize. How they decide. How quickly they close the distance between knowing something and doing something about it.
I have been saying this for the past year while building Revolv. Technology made execution cheap. The scarce resource is the organizational capacity to know what to execute, learn from what happened, and remember what was learned.
McKinsey just gave that argument a dataset.
The Missing Thirteenth Theme
The McKinsey manifesto lists twelve themes. Among them: strategy. Talent. Speed. Platforms. Data. Trust. Adoption. Agentic engineering. Continuous learning. All of them are real. All of them matter.
But there is a thirteenth theme hiding underneath the other twelve, and it is the one that explains why the winners keep winning and the losers keep restarting.
Organizational memory.
Not knowledge management. Not documentation. Not the wiki nobody reads. Memory. The accumulated context that lets an organization make its next decision better than its last one.
Consider the pattern. The companies that generated $3 per $1 invested concentrated on one to three business domains. Not a long list of AI use cases. One to three areas where they went deep and stayed deep.
Why does concentration produce those returns? Because depth builds institutional knowledge. Each deployment teaches the organization something. A model fails in a specific way and the team learns why. A workflow redesign exposes a bottleneck nobody had mapped. A data enrichment effort reveals that the real signal was in a field everyone had been ignoring.
That learning informs the next decision. The next decision deepens the context. The context compounds.
I introduced the Apex Pyramid framework last year after seeing this pattern across every advisory engagement. BCG's data told the same story from a different angle: only 25 percent of AI initiatives deliver expected ROI and just 16 percent scale across the enterprise. The framework addressed the structural gap. Alignment. Adaptive workforce. Scalable systems. What I did not name explicitly then was the mechanism underneath all three layers. The thing that makes alignment hold, that lets workforce planning adapt, that keeps systems from becoming shelfware.
Memory. The organization's ability to carry what it learned into what it does next.
Companies that spread across dozens of use cases never build that depth. Each initiative starts close to zero. Lessons from one deployment do not transfer to the next because the teams are different, the sponsors are different, and the institutional knowledge from round one lives in someone's head or a slide deck that nobody will open again.
The organization relearns the same things every quarter.
The winners remember what they learned. The losers keep relearning.
That is the gap. Not technology. Memory.
What the Talent Shift Actually Means
McKinsey introduces what they call the "30-70 shifts." More than 70 percent of tech talent should be in-house. More than 70 percent should be builders, not coordinators. More than 70 percent should operate at competent or expert level.
The implication: compact, high-density teams outperform large armies of lower-skilled staff. That tracks with what I have seen across every advisory engagement and what became obvious building Revolv.
But the manifesto goes further. As AI agents absorb coordination, execution, and routine decision-making, human roles shift up the value stack. Engineers spend less time writing code and more time designing architecture, workflows, constraints, and quality controls. Business leaders spend less time managing tasks and more time setting objectives, defining success metrics, and making trade-offs.
Fewer people. Higher-leverage work. Faster learning loops.
I wrote about this shift in "When AI Makes Software Cheap, Judgment Becomes Expensive." The argument was that as execution gets cheaper, the premium on product sense, systems thinking, and review goes up. McKinsey's data confirms it from the enterprise side. The 20 companies that generated real returns did not just deploy AI. They changed who does what. Business leaders, usually one to three levels below the CEO, became the ones conceptualizing, building, and running the AI systems. Not IT. Not a center of excellence. The people who understand the domain.
That is not a reorg chart. That is a different kind of company. And it only works when those leaders have enough context to make informed decisions quickly.
Which brings us back to memory.
A compact team with deep institutional context will outperform a larger team that has to reconstruct its understanding every cycle. The talent shift McKinsey describes is real, but it has a prerequisite that the manifesto does not name: the organization has to be able to preserve what its best people know, even as the team composition changes.
When a key leader leaves and takes three years of deployment context with them, the team that replaces them spends six months relearning decisions that were already made. Nobody got less talented. The system forgot.
Data Enrichment Is the Compounding Asset
The manifesto makes a distinction that most companies miss. There is a difference between making data easy to consume and enriching data for sustained advantage.
Productized data means teams can discover it, access it, and use it across applications without wrangling. That is necessary. It is also table stakes.
Enriched data means the data gets more valuable through use. Quality improves. Context deepens. Uniqueness compounds. As David Baker, the 2024 Nobel laureate in Chemistry, observed: "AI needs masses of high-quality data to be useful." Without enrichment, AI plateaus. The model is only as good as what you feed it, and most organizations are feeding it the same raw inputs they had before the model existed.
This is the exact dynamic driving what we build at Revolv. Every interaction, every meeting note, every relationship signal adds context that makes the next insight more precise. The intelligence layer does not just store data. It enriches it through use. A note about a conversation becomes a knowledge graph connection becomes a trust signal becomes an opportunity surface.
That is what compounding context looks like in practice. Not a bigger database. A smarter one.
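The chain described above, where a note becomes a graph connection becomes a trust signal becomes an opportunity surface, can be sketched in miniature. Everything here is a hypothetical illustration, not Revolv's actual implementation; the class names, trust increments, and threshold are placeholder assumptions. The point is data whose value grows each time it is used:

```python
from dataclasses import dataclass, field


@dataclass
class Note:
    """A stored interaction whose value grows through use (hypothetical model)."""
    text: str
    links: set[str] = field(default_factory=set)  # knowledge-graph connections
    trust: float = 0.0                            # signal strength, compounds with use


class ContextGraph:
    def __init__(self):
        self.notes: dict[str, Note] = {}

    def add_note(self, key: str, text: str) -> None:
        self.notes[key] = Note(text)

    def connect(self, a: str, b: str) -> None:
        # Each connection enriches both endpoints: more context, more trust.
        self.notes[a].links.add(b)
        self.notes[b].links.add(a)
        for k in (a, b):
            self.notes[k].trust += 0.1

    def surface_opportunities(self, min_trust: float = 0.2) -> list[str]:
        # An "opportunity" is any well-connected, high-trust note.
        return [k for k, n in self.notes.items() if n.trust >= min_trust and n.links]


g = ContextGraph()
g.add_note("acme-call", "Spoke with Acme about renewal timing")
g.add_note("acme-intro", "Warm intro to Acme CFO via a board member")
g.connect("acme-call", "acme-intro")
g.connect("acme-call", "acme-intro")  # repeated use deepens the signal
print(g.surface_opportunities())      # both notes now clear the trust threshold
```

The design choice worth noticing: enrichment happens inside `connect`, as a side effect of use, not as a separate maintenance task. That is the difference between a bigger database and a smarter one.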
The same principle applies at the enterprise level. The 20 companies McKinsey studied are not just collecting more data. They are building systems where data quality improves as a byproduct of using the system. Each decision, each deployment, each customer interaction leaves the data layer richer than it found it.
That is the moat. Raw data is available to everyone. Enriched, contextual, use-deepened data is not.
The Agentic Problem Nobody Is Ready For
The manifesto's most forward-looking theme is agentic engineering. Foundation models capable of sustained, autonomous work. Productivity gains in software development that McKinsey calls astonishing. Companies racing to build repeatable agentic playbooks.
But the manifesto includes a warning that deserves more weight than it gets: "The excitement for agentic AI may be getting ahead of companies' ability to manage the more complex risks."
I have built a multi-agent system. Six specialized agents with a circuit breaker, a knowledge graph, and a progressive context loader that keeps the whole system under a token budget. The thing I can tell you from inside the build is that agents are powerful when they operate inside a system that already has strong feedback loops, clear accountability, and enriched data to act on.
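A progressive context loader of the kind mentioned above can be sketched in a few lines. This is a simplified illustration, not the Revolv implementation: the four-characters-per-token estimate and the greedy relevance ordering are placeholder assumptions standing in for real tokenization and scoring.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token (placeholder assumption).
    return max(1, len(text) // 4)


def load_context(items: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily pack the most relevant context items under a token budget.

    `items` are (relevance_score, text) pairs; higher score loads first.
    """
    selected, used = [], 0
    for score, text in sorted(items, key=lambda it: it[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost > budget:
            continue  # skip items that would blow the budget; try smaller ones
        selected.append(text)
        used += cost
    return selected


notes = [
    (0.9, "Customer renewal is due in March; champion changed roles."),
    (0.6, "Last quarter's deployment failed on stale CRM fields."),
    (0.2, "General industry news digest, long and low-signal. " * 20),
]
context = load_context(notes, budget=40)  # only the two high-signal notes fit
```

The budget forces exactly the discipline the surrounding argument describes: the system cannot carry everything, so it has to know which context matters most.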
Agents inside a fragmented organization create more sophisticated confusion.
An agent that can reason autonomously is impressive in a demo. In production, it needs to know when it is wrong, how to degrade gracefully, and where to hand off to a human. It needs guardrails that are not bolted on but designed in from the first line of architecture. It needs trust as a design constraint, not a compliance checkbox.
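The handoff logic in that paragraph is simple to state in code. A hedged sketch with hypothetical names and thresholds; what matters is that the human escalation path is part of the design, not bolted on:

```python
from dataclasses import dataclass


@dataclass
class AgentResult:
    answer: str
    confidence: float  # agent's self-reported confidence in [0, 1]


def run_with_guardrails(result: AgentResult,
                        act_floor: float = 0.75,
                        review_floor: float = 0.4,
                        fallback: str = "No reliable answer available.") -> str:
    """Decide whether to act, hand off to a human, or degrade gracefully."""
    if result.confidence >= act_floor:
        return result.answer                            # act autonomously
    if result.confidence >= review_floor:
        return f"[NEEDS HUMAN REVIEW] {result.answer}"  # hand off with context
    return fallback                                     # degrade gracefully


print(run_with_guardrails(AgentResult("Renew at current terms.", 0.9)))
print(run_with_guardrails(AgentResult("Maybe discount 20 percent?", 0.5)))
print(run_with_guardrails(AgentResult("Unclear.", 0.1)))
```

Three branches, designed in from the first line: autonomous action, human handoff, graceful degradation. The thresholds here are arbitrary; in a real system they would be calibrated per task and per risk level.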
McKinsey is right that "Rewired leaders consistently absorb new technologies faster because they have built the underlying capabilities to do so." The inverse is also true. Companies that skipped the organizational fundamentals will find that agentic AI does not help them catch up. It widens the gap. Faster execution on top of fragmented context just produces more wrong answers with more confidence.
The Speed Tax
McKinsey frames organizational speed as "the metabolic rate of the organization." Companies win when they redeploy resources faster, empower teams without excessive dependencies, and reduce the latency from insight to decision and decision to action.
Here is the part that makes this harder than it sounds. Speed without memory is just churn.
Moving fast only helps if the organization retains what it learns along the way. I have watched companies sprint through quarterly AI initiatives, each one starting fresh because nobody documented what the last team discovered. The cycle looks productive from the outside. Internally, it is the same lessons being re-derived by different people in different rooms.
This is the adoption gap I described in "AI Is Everywhere in Business. Its Impact Isn't." Seventy percent of firms using AI. Nine in ten reporting no meaningful change. The gap is not adoption. It is retention. Not of people. Of learning.
True organizational speed requires three things working together. The ability to decide quickly. The ability to act on that decision without bureaucratic latency. And the ability to remember the outcome so the next decision starts from a higher baseline.
Most organizations have invested in the first two. Almost none have invested in the third.
What This Means
McKinsey closes with a line that is easy to skim and hard to argue with: "Companies can accelerate their way through developing capabilities, but they cannot skip over the foundational work."
The companies generating $3 per $1 did not start with AI. They started with organizational clarity. Clear ownership. High-density teams. Data treated as a performance asset. Speed embedded in the operating model. AI amplified what was already working.
The companies struggling did not fail at AI. They skipped the prerequisite.
And the prerequisite that ties all the others together is the one nobody has a line item for: the capacity to learn from what you did and carry that learning into what you do next.
Most organizations invest in hiring. They invest in tooling. They invest in AI. They do not invest in preserving and compounding the context that makes all those investments coherent.
That is where transformation stalls. The Apex Pyramid gave teams a structure: align people around outcomes, design workforces that adapt, build systems that multiply execution. This article names what holds that structure together. Without organizational memory, alignment drifts. Adaptive teams lose what they adapted to. Scalable systems scale the wrong lessons.
Technology made execution cheap. Judgment got expensive. But the thing that makes judgment improve over time, the thing that separates the $3-per-$1 companies from the ones still running pilots, is not talent alone and not technology alone.
It is memory that compounds.
The companies that build that layer will pull away. The rest will keep resetting.