Why AI Isn't Boosting Productivity (Yet)
- Mira Yossifova

- Apr 13
Updated: Apr 14

General-purpose technologies have never delivered quick productivity gains. Electricity took 40 years to show up in the data; computers took roughly two decades of heavy investment before the late-1990s boom. AI appears to be following the same pattern.
The bottleneck is organizational, not purely technical. Bolting chatbots onto existing workflows will not transform productivity. The gains depend on firms redesigning how they work, and that process has barely started.
The data cannot yet settle the debate, but three leading indicators over the next decade will reveal whether AI's broad economic impact is coming or whether the optimism is misplaced.
In 1987, Robert Solow said, "you can see the computer age everywhere but in the productivity statistics."
Nearly four decades later, the line reads like a recycled prophecy. Money is pouring into AI at a staggering rate. According to the OECD, global venture capital investment in AI firms reached $258.7 billion in 2025 [1], now accounting for 61 percent of all VC investment worldwide, up from 30 percent just three years earlier. Yet the productivity payoff remains elusive. The OECD's 2025 Compendium of Productivity Indicators [2] estimates that labor productivity across its member countries grew just 0.4 percent in 2024, and the report states plainly that AI's impact "is not yet evident in the productivity statistics." In the euro area, productivity actually fell 0.9 percent in 2023, the steepest decline since the 2008 financial crisis. The United States has done somewhat better, with the Bureau of Labor Statistics reporting growth of 2.3 percent in 2024 and 2.1 percent in 2025 [3], but even those figures are only modestly above the long-run trend.
Something is not adding up. So what is going on?
The answer may lie in how earlier transformative technologies played out. Economists use the term "general-purpose technology" (GPT) for innovations broad enough to eventually reshape entire economies. Timothy Bresnahan and Manuel Trajtenberg formalized the concept in the 1990s [4]: pervasiveness across sectors, improvement over time, and the ability to spawn complementary innovations. Electricity, the internal combustion engine, and the semiconductor all qualify. AI plausibly does too. But the historical record shows that these technologies never delivered quick productivity gains. The pattern was always the same: the technology spread first, and the economic payoff came much later, after firms made painful complementary investments in reorganization, training, and process redesign.
Electric motors were commercially available by the 1880s, but U.S. manufacturing productivity did not accelerate until the 1920s, a lag of roughly 40 years. The problem was not immature technology. Steam-era factories were multi-story buildings organized around a central power shaft. Electric motors allowed single-story layouts with decentralized power, but firms did not tear up functional factories overnight. It took a generation of new buildings, new wiring standards, and new utility pricing models before the productivity payoff showed up in the national accounts.
Computers followed a compressed but structurally similar path. Business investment in IT surged through the 1970s and 1980s, yet productivity did not rise during the period Solow was commenting on. The boom finally arrived in the late 1990s. Erik Brynjolfsson and Lorin Hitt showed in a series of studies that the firms capturing productivity gains from IT were not the ones spending the most on hardware. They were the ones that simultaneously reorganized management structures, decentralized decision-making, and invested in worker training. The technology was necessary but nowhere near sufficient. The resistance was organizational, not technical.
If this pattern holds, it has a clear implication for AI. The current wave of adoption (chatbots, copilots, summarization tools layered on top of existing workflows) is unlikely to move the productivity needle at a macro level. Those tools are useful, but they are being bolted onto organizational structures designed for a pre-AI world. The serious impact will require the harder, slower work of rethinking what those structures should look like in light of what AI can do. That work has barely started.
There is reason to think the AI timeline could be shorter than the electricity timeline. Digital technologies can diffuse faster than physical infrastructure, and many LLM-based tools can be deployed without the large fixed investments required by earlier general-purpose technologies. AI usage costs have also been falling rapidly, in some cases by orders of magnitude, even if frontier model training and enterprise-scale integration remain capital- and capability-intensive.
But "faster than electricity" should not be confused with "fast." It took years for computers to go from obviously useful inside individual companies to measurably boosting the economy as a whole. And when the boom finally arrived in the late 1990s, it was concentrated in a surprisingly narrow slice of the economy. Nordhaus and others showed [5] that the industries making computers captured most of the productivity gains. AI could easily follow the same pattern, with a handful of AI-building companies pulling ahead while everyone else waits for the organizational changes that would let them catch up.
None of this means the productivity statistics prove AI is failing; nor do they prove it is working. The OECD average of 0.4 percent growth is dismal, but the United States, at 2.1 to 2.3 percent, is doing better. That gap could reflect earlier AI adoption in American firms, post-pandemic normalization, measurement differences, or simply the fact that the biggest AI companies are US-based. It is too early for the data to settle the question. But this argument has a limit: it is not falsifiable within any short time frame. An argument structured as "the gains will come eventually, just wait" can defend any technology, including ones that never deliver. The historical GPT framework provides useful context, not a guarantee.
What it does provide is a diagnostic. If AI is following the GPT pattern, we should expect three things over the next five to ten years:
1. Widening productivity gaps between firms as early adopters pull ahead.
2. The emergence of new business models and service categories that could not have existed without AI, analogous to how electrification eventually enabled home appliances and refrigerated supply chains.
3. Growing demand for managers, designers, and domain experts who know how to restructure work around AI capabilities.
If those indicators appear, the productivity surge is likely to follow. If they do not, the analogy may not hold.
For leaders making decisions now, the practical takeaway is not "invest in AI" as a generic directive. It is to invest in the organizational complements: redesign workflows, train people to work differently, and build the capacity to experiment with new business models. The firms that captured the productivity gains from IT in the 1990s were not the ones with the biggest technology budgets. They were the ones that changed how they worked. On that lesson, at least, history is clear.
[4] Bresnahan, T.F. and Trajtenberg, M., "General Purpose Technologies: Engines of Growth?" Journal of Econometrics, 1995.
[5] Nordhaus, William D., "The Sources of the Productivity Rebound and the Manufacturing Employment Puzzle," NBER Working Paper 11354, 2005.