The New Solow Paradox: Why $1.5 Trillion in AI Spending Isn't Showing Up in Your Numbers
Ninety percent of enterprises say AI has had no measurable impact on productivity in three years. $1.5 trillion in projected AI spending. 80% project failure rates. 1.5 hours of weekly usage per executive. We've seen this pattern before — and history tells us exactly why the productivity surge hasn't arrived yet, and what it will take to get there.
Ninety percent of enterprises say AI has had no measurable impact on their productivity or employment in the last three years.
Those enterprises are not wrong.
They are also not going to fix it by buying more AI tools.
A February 2026 study from the National Bureau of Economic Research surveyed more than 6,000 CEOs, CFOs, and senior executives across the United States, United Kingdom, Germany, and Australia. The findings were stark. Despite two-thirds of respondents reporting they use AI, that usage averaged just 1.5 hours per week per executive. Nearly 90% of firms said AI had produced no measurable impact on productivity or employment over the preceding three years.
Meanwhile, 374 S&P 500 companies mentioned AI positively in their earnings calls. Gartner forecast worldwide AI spending at $1.5 trillion for 2025. Hyperscalers alone committed more than $300 billion in capital expenditure to AI infrastructure that same year.
Apollo chief economist Torsten Slok, invoking a nearly 40-year-old observation, put it plainly: "AI is everywhere except in the incoming macroeconomic data."
We have been here before. And the lesson from last time is not what most executives think it is.
I. The Paradox That Already Happened Once
In 1987, Nobel laureate Robert Solow wrote a line that would define a generation of economic research.
"You can see the computer age everywhere but in the productivity statistics."
The integrated circuit, the microprocessor, and the mass-produced memory chip had all arrived by the early 1970s. Computing promised to revolutionize workplace productivity. Instead, productivity growth collapsed, falling from 2.9% a year in the postwar era to 1.1% after 1973. The computers were multiplying. The productivity was not.
Organizations were deploying the new technology on top of old workflows. They were using computers to print paper faster — generating more detailed reports, more documentation, more administrative output — rather than redesigning the processes the paper had been serving.
The tool was new. The architecture was not.
It took until the mid-1990s, more than two decades after the technology arrived, for productivity to surge. From 1995 to 2005, annual productivity growth jumped by roughly 1.5 percentage points. Not because the computers improved. Because organizations finally rebuilt their operations around what computers actually made possible.
That is the Solow Paradox. And enterprise AI is living inside a nearly identical version of it right now.
II. The Investment Is Real. The Impact Is Not.
Gartner pegged worldwide AI spending at $1.5 trillion for 2025. The hyperscalers committed more than $300 billion in infrastructure capex that year. Venture capital investment in AI firms globally exceeded $200 billion in 2025 — more than half of all VC deployed worldwide. Every major consulting firm, every enterprise software vendor, every systems integrator has reoriented around AI delivery.
And yet. The NBER study found that among those executives reporting AI use, the average weekly engagement was 1.5 hours. Twenty-five percent of respondents reported not using AI at all. Nearly 90% saw no productivity or employment impact over three years.
McKinsey's 2025 State of AI research found that 88% of organizations now use AI in at least one business function, but only 39% report any measurable impact on EBIT. More than 80% saw no meaningful enterprise-wide returns despite adoption.
RAND Corporation's 2025 analysis of enterprise AI projects found that 80.3% fail to deliver their intended business value. The breakdown is instructive: 33.8% are abandoned before reaching production; 28.4% reach completion but fail to deliver expected value; 18.1% deliver some value but cannot justify the investment cost.
ManpowerGroup's 2026 Global Talent Barometer surveyed nearly 14,000 workers across 19 countries. Regular AI use increased 13% in 2025. Confidence in AI's utility dropped 18%.
The investment is real. The executive intent is real. The gap between investment and outcome is also real — and widening.
III. Why Tool Deployment Is Not AI Transformation
The diagnosis that enterprises are drawing from these numbers is almost universally wrong.
The common response to AI underperformance is: the models need improvement, the prompts need refinement, employees need more training. These responses assume the problem is in the AI itself. The problem is not the AI.
MIT Sloan's 2025 research found that 95% of generative AI pilots fail to scale to production deployment. The leading cause — cited in 64% of those failures — was not model quality. It was infrastructure limitations.
Boston Consulting Group identified the same pattern at the individual worker level. In a study of 1,488 full-time U.S.-based workers, productivity increased when respondents used one, two, or three AI tools. But when respondents used four or more AI tools, self-reported productivity collapsed. Workers described brain fog, increased error rates, and decision fatigue — a phenomenon BCG termed "AI brain fry."
More AI tools, applied on top of existing workflows, do not compound productivity. They compound cognitive load. Each tool requires context-switching. Each tool maintains its own interface, its own outputs, its own logic. Workers are not becoming more productive because they are not working within a more productive system. They are managing more technology.
This is the same error organizations made with computers in the 1970s. They put computing power into existing processes. They did not redesign the processes around what computing power made structurally possible.
Compounding requires architecture, not applications.
IV. The 1.5-Hours-Per-Week Problem
The NBER finding that executive AI usage averages 1.5 hours per week deserves more attention than it has received.
This number is not an indictment of executive laziness or resistance to change. It is a structural signal.
Executives are using AI for 1.5 hours per week because AI is available to them in discrete, bounded interactions. They open a tool. They prompt it. They review the output. They close it. Then they return to the systems — the CRM, the ERP, the communication platforms, the dashboards — where actual work happens.
AI exists adjacent to the workflow. It does not exist within it.
Every step of that handoff is friction. Friction reduces usage. Reduced usage reduces impact. Reduced impact produces exactly the data the NBER found: 90% of enterprises reporting no measurable effect.
In 2025, 42% of companies abandoned at least one AI initiative entirely. The average sunk cost per abandoned initiative: $7.2 million. Not because the models failed. Because the architecture around them did.
The 1.5 hours per week is not a training problem. It is not a change management problem. It is what AI usage looks like when AI is not embedded in the systems where revenue is actually created.
V. The J-Curve Is Real — but Only for the Right Architecture
There is legitimate reason to believe the productivity picture will change. Slok's "J-curve" theory — an initial performance decline followed by exponential growth — maps closely to how the IT productivity paradox ultimately resolved.
Stanford's Erik Brynjolfsson pointed to a 2.7% U.S. productivity jump last year as potential evidence that the curve is beginning to inflect upward. Fourth-quarter GDP was tracking 3.7%.
But here is what history actually shows about how the IT productivity surge materialized.
It did not happen when computers got better. Computers were already extremely capable by the mid-1980s. The surge happened when organizations finally redesigned their processes around what the technology made possible — when they stopped automating existing workflows and started building new ones that only computers could execute.
Slok's framing of the underlying dynamic was precise: "The value creation is not the product, but how generative AI is used and implemented in different sectors in the economy."
How it is implemented. Not what it is.
VI. The Architecture That Breaks the Paradox
Breaking out of the AI productivity paradox requires a fundamentally different deployment model. Not more tools. Not better prompts. An infrastructure shift.
Continuity over interaction. Productive AI is not AI that responds to prompts. It is AI that operates continuously — monitoring signals, making decisions, executing actions — without waiting to be invoked. An agentic system that runs inside your customer engagement infrastructure 24 hours a day produces compounding value. An AI assistant that a human opens for 1.5 hours per week does not.
Embedded execution over adjacent assistance. AI that lives alongside the workflow — generating outputs that humans then carry into execution systems — is tool-tier AI. AI that lives within the workflow — directly controlling communication channels, activation logic, and customer engagement sequences — is infrastructure-tier AI. Only infrastructure-tier AI produces the compounding productivity gains that show up in the numbers.
Closed feedback loops over one-directional output. AI that generates a recommendation and stops has a limited impact ceiling. AI that observes the results of its own decisions and recalibrates in real time is a learning system. Learning systems compound. One-directional output systems plateau.
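The difference between a one-directional system and a learning system can be made concrete. Below is a minimal, hypothetical sketch — not GetScaled's implementation, and every name in it (`ClosedLoopAgent`, `record_outcome`, the channel names) is illustrative — of an agent that chooses an engagement action, observes whether its own decision converted, and folds that outcome back into its next choice. That feedback edge is what lets performance compound rather than plateau.

```python
# Hypothetical sketch of a closed feedback loop: the agent's own
# conversion outcomes feed back into its future decisions.
# Illustrative only; not a real vendor API.
import random


class ClosedLoopAgent:
    """Picks among engagement actions and learns from observed conversions."""

    def __init__(self, actions):
        # Track sends and conversions per action (e.g. "email", "sms").
        self.stats = {a: {"sent": 0, "converted": 0} for a in actions}

    def conversion_rate(self, action):
        s = self.stats[action]
        return s["converted"] / s["sent"] if s["sent"] else 0.0

    def choose_action(self):
        # Explore occasionally; otherwise exploit the best observed rate.
        if random.random() < 0.1:
            return random.choice(list(self.stats))
        return max(self.stats, key=self.conversion_rate)

    def record_outcome(self, action, converted):
        # The loop closes here: the result of the agent's decision
        # updates the statistics that drive its next decision.
        self.stats[action]["sent"] += 1
        self.stats[action]["converted"] += int(converted)
```

A one-directional system is this class with `record_outcome` deleted: it would keep emitting recommendations, but its behavior could never improve, because nothing about the results ever flows back in.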
VII. What GetScaled Built and Why It Matters
GetScaled was built around the premise that AI's commercial value is an infrastructure problem, not an intelligence problem.
We do not sell AI tools. We built an AI execution infrastructure: proprietary communication rails across email, SMS, RCS, WhatsApp, and interactive voice; centralized identity resolution that unifies customer records across sources and brands; and agentic AI systems that operate continuously within that infrastructure rather than adjacent to it.
Our agents do not wait to be invoked. They monitor customer behavior signals, make autonomous engagement decisions, execute across communication channels, and update their decision logic based on live conversion feedback — all without requiring human intervention at each step.
This is what it means for AI to be embedded in the workflow rather than adjacent to it.
Because we own and operate the platform end to end — the infrastructure, the communication rails, the identity layer, and the agent logic — our clients do not inherit the fragility of stitched-together vendor stacks. The result: faster rollouts, dramatically lower integration risk, and higher reliability at every stage of deployment. Enterprises that have spent months trying to assemble similar capabilities from third-party integrations reach the same architecture in a fraction of the time on owned infrastructure.
For our clients, this architectural distinction is the difference between the 90% and the 10%. The enterprises in the NBER study reporting no productivity impact are using AI as a tool. They are in the 1.5-hours-per-week category. The productivity gains they expect to materialize in the next three years will only arrive if the architecture changes — if AI moves from an adjacent assistant into an embedded execution layer.
That architectural shift is what GetScaled enables.
The IT productivity paradox resolved when organizations stopped using computers to do old things faster and started building new operations that only computers could run. The AI productivity paradox will resolve the same way.
Not when the models improve. When the architecture changes.
To learn more about how GetScaled’s owned and operated AI execution platform accelerates deployment and drives measurable revenue impact, visit GetScaled.com.
Conclusion
The Solow Paradox is not a historical curiosity. It is a live case study in how transformative technology fails to deliver its promise when deployed on top of existing structures rather than built into new ones.
Enterprise AI is producing the same pattern. $1.5 trillion in projected spending. $300 billion in infrastructure commitments. 80% project failure rates. Ninety percent of enterprises reporting no measurable impact.
The diagnosis is not a model problem. It is not a training problem. It is an architecture problem.
AI that lives in a tool, invoked for 1.5 hours per week, will not compound. AI that lives inside the infrastructure where revenue is created — operating continuously, executing autonomously, learning from its own outcomes — will.
The productivity surge from the computer age took two decades to materialize. The enterprises that captured it were not the ones with the best computers. They were the ones that rebuilt their operations around what computers made possible.
The AI productivity surge is coming. The only question is which enterprises will be positioned to capture it.
That depends entirely on whether you are buying tools or building infrastructure.
