
Thousands of CEOs just admitted AI had no impact on employment or productivity

virgildotcodes | 2026-02-18 01:40 UTC | source

In 1987, economist and Nobel laureate Robert Solow made a stark observation about the Information Age's stalled progress: After the transistors, microprocessors, integrated circuits, and memory chips of the 1960s arrived, economists and companies expected the new technologies to disrupt workplaces and produce a surge of productivity. Instead, productivity growth slowed, dropping from an annual average of 2.9% between 1948 and 1973 to 1.1% after 1973.

Newfangled computers were at times actually producing too much information, generating agonizingly detailed reports and printing them on reams of paper. What had promised to be a boon to workplace productivity was for several years a bust. The unexpected outcome became known as Solow's productivity paradox.

“You can see the computer age everywhere but in the productivity statistics,” Solow wrote in a New York Times Book Review article in 1987.

New data on how C-suite executives are—or aren’t—using AI shows history repeating itself, complicating the similar promises economists and Big Tech founders have made about the technology's impact on the workplace and economy. A Financial Times analysis found that 374 companies in the S&P 500 mentioned AI in earnings calls between September 2024 and September 2025, most of them describing its implementation as entirely positive. Yet those upbeat accounts aren't being reflected in broader productivity gains.

A study published this month by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives at firms that responded to business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations. About two-thirds of executives reported using AI, but that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all. Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the researchers noted.

However, firms’ expectations of AI's workplace and economic impact remained substantial: Executives forecast that AI will increase productivity by 1.4% and output by 0.8% over the next three years. And while firms expected a 0.7% cut to employment over that period, individual employees surveyed expected a 0.5% increase.

Solow strikes back

In 2023, MIT researchers claimed AI could improve a worker's performance by nearly 40% compared with workers who didn't use the technology. But with emerging data failing to show those promised productivity gains, economists are wondering when—or if—AI will deliver a return on corporate investments, which swelled to more than $250 billion in 2024.

“AI is everywhere except in the incoming macroeconomic data,” Apollo chief economist Torsten Slok wrote in a recent blog post, invoking Solow’s observation from nearly 40 years ago. “Today, you don’t see AI in the employment data, productivity data, or inflation data.”

Slok added that outside of the Magnificent Seven, there are “no signs of AI in profit margins or earnings expectations.”

Slok cited a slew of academic studies on AI and productivity that paint a contradictory picture of the technology's utility. Last November, the Federal Reserve Bank of St. Louis reported in its State of Generative AI Adoption study that it observed a 1.9% cumulative excess productivity gain since ChatGPT's late-2022 introduction. A 2024 MIT study, however, projected a more modest 0.5% productivity increase over the coming decade.

“I don’t think we should belittle 0.5% in 10 years. That’s better than zero,” study author and Nobel laureate Daron Acemoglu said at the time. “But it’s just disappointing relative to the promises that people in the industry and in tech journalism are making.”

Other emerging research can offer reasons why: Workforce solutions firm ManpowerGroup’s 2026 Global Talent Barometer found that across nearly 14,000 workers in 19 countries, workers’ regular AI use increased 13% in 2025, but confidence in the technology’s utility plummeted 18%, indicating persistent distrust.

Nickle LaMoreaux, IBM's chief human resources officer, said last week the tech giant would triple its number of young hires, reasoning that even though AI can automate some entry-level tasks, displacing those workers would create a dearth of middle managers down the line and endanger the company's leadership pipeline.

The future of AI productivity

To be sure, this productivity pattern could reverse. The IT buildout of the 1970s and '80s eventually produced a surge of productivity in the 1990s and early 2000s, including a 1.5% rise in productivity growth from 1995 to 2005 after decades of slump.

Erik Brynjolfsson, economist and director of Stanford University's Digital Economy Lab, noted in a Financial Times op-ed that the trend may already be reversing. He observed that fourth-quarter GDP was tracking up 3.7% even as last week's jobs report revised job gains down to just 181,000, suggesting a productivity surge. His own analysis indicated a U.S. productivity jump of 2.7% last year, which he attributed to a transition from investing in AI to reaping the technology's benefits. Former Pimco CEO and economist Mohamed El-Erian likewise noted that job growth and GDP growth are continuing to decouple, in part because of continued AI adoption, a phenomenon similar to what office automation produced in the 1990s.

Slok similarly saw AI's future impact as potentially resembling a “J-curve”: an initial slowdown in performance and results, followed by an exponential surge. Whether AI's productivity gains follow that pattern, he said, will depend on the value the technology creates.
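One illustrative way to formalize such a J-curve (a stylization for exposition, not a model Slok published) is to write the net productivity effect as slowly building benefits minus up-front adjustment costs:

    P(t) = A(1 - e^{-bt}) - C e^{-dt}

Here C captures the early costs of retooling and retraining, which fade at rate d, while A is the eventual payoff, realized at rate b. Because P(0) = -C is negative, measured productivity dips first; as the cost term decays and the benefit term builds, the curve crosses zero and climbs toward A, tracing the J.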

Still, AI's path has already diverged from its IT predecessor's. Slok noted that in the 1980s, an IT innovator held monopoly pricing power until competitors could create similar products. Today, by contrast, AI tools are readily accessible because “fierce competition” among large language model builders is driving down prices.

Therefore, Slok posited, the future of AI productivity will depend on companies' willingness to take advantage of the technology and keep incorporating it into their workplaces. “In other words, from a macro perspective, the value creation is not the product,” Slok said, “but how generative AI is used and implemented in different sectors in the economy.”

106 points | 52 comments | original link

Comments

DaedalusII | 2026-02-18 02:04 UTC
If you include Microsoft Copilot trials in Fortune 500s, absolutely. A lot of major listed companies are still oblivious to the functionality of AI; their senior management don't even use it, out of laziness
jeron | 2026-02-18 02:11 UTC
it turns out it's really hard to get a man to fish with a pole when you don't teach him how to use the reel
throwawaysleep | 2026-02-18 02:21 UTC
Or give them a stick with twine and a plastic fork as a hook, as is the case with Copilot.
Banditoz | 2026-02-18 02:26 UTC
If AGI is coming, won't there just be autofishers and no one will ever have to fish again, completely devaluing one's fishing knowledge and the effort put in to learn it?
conductr | 2026-02-18 02:33 UTC
In regards to copilot, they’ve also been led on a fishing expedition to the middle of a desert
chaos_emergent | 2026-02-18 02:12 UTC
100%. All of the people who are floored by AI capabilities right now are software engineers, and everyone who's extremely skeptical basically has any other office job. When you investigate their primary AI interaction surface, it's Microsoft Copilot, which has to be the absolute shittiest implementation of any AI system so far. As a progress-driven person, it's just super disappointing to see how few people are benefiting from the productivity gains of these systems.
dboreham | 2026-02-18 02:15 UTC
This isn't my experience. I see many non-software people using AI regularly. What you may be seeing is more that organizations with no incentive to do things better never did anything to do things better. AI is no different. They were never doing things better with pencil and paper either.
DaedalusII | 2026-02-18 02:17 UTC
I think Anthropic will succeed immensely here because, when integrated with Microsoft 365 and especially Excel, it basically does what Copilot said it would do.

The moment of realisation happens for a lot of normoid business people when they see Claude make a DCF spreadsheet or search emails.

Claude is also smart because it visually shows the user as it resizes the columns, changes colours, etc. Seeing the computer do things makes the normoid SEE the AI, despite it being much slower.

yodsanklai | 2026-02-18 02:25 UTC
I'm a SWE who's been using coding agents daily for the last 6 months and I'm still skeptical.

For my team at least, the productivity boost is difficult to quantify objectively. Our products and services still have tons of issues that AI isn't going to solve magically.

It's pretty clear that AI lets us move faster on some tasks, but it's also detrimental for other things. We're going to learn how to use these tools more efficiently, but right now, I'm not convinced about the productivity gain.

dimitri-vs | 2026-02-18 02:28 UTC
IMO Copilot was "we need to give these people rope, but not enough for them to hang themselves". A non technical person with no patience and access to a real AI agent inside a business is a bull in a china shop. Copilot Cowork is the closest thing we have to what Copilot should have been and is only possible now because models finally got good enough to be less supervised.

FWIW Gemini inside Google apps is just as bad.

tehjoker | 2026-02-18 02:06 UTC
There is probably a threshold effect above which the technology begins to be very useful for production (other than faking school assignments, one-off-scripts, spam, language translation, and political propaganda), but I guess we're not there yet. I'm not counting out the possibility of researchers finding a way to add long term memory or stronger reasoning abilities, which would change the game in a very disorienting way, but that would likely mean a change of architecture or a very capable hybrid tool.
DaedalusII | 2026-02-18 02:19 UTC
the greatest step change will be when mainstream businesses realise they can use AI to accurately fill in PDF documents with information in any format

filling in pdf documents is effectively the job of millions of people around the world

bubblewand | 2026-02-18 02:14 UTC
My company’s behind the curve, just got nudged today that I should make sure my AI use numbers aren’t low enough to stand out or I may have a bad time. Reckon we’re minimum six months from “oh whoops that was a waste of money”, maybe even a year. (Unless the AI market very publicly crashes first)
mr_toad | 2026-02-18 02:37 UTC
So management basically have no clue and want you to figure out how to use AI?

Do they also make you write your own performance review and set your own objectives?

Herring | 2026-02-18 02:15 UTC
My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast, we just waste time in meetings, or waiting for approval, or a lot of tasks can't be parallelized, etc. Before upgrading, you need to know if you're I/O Bound vs CPU Bound.
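A minimal sketch of the diagnosis step in that analogy (Python; the tasks and the framing are made up for illustration): compare CPU time with wall-clock time. If a "worker" burns little CPU over a long elapsed window, the bottleneck is waiting, not processing.

    import time

    def profile(label, task):
        # Wall clock counts everything; process_time counts only CPU work.
        wall_start = time.perf_counter()
        cpu_start = time.process_time()
        task()
        wall = time.perf_counter() - wall_start
        cpu = time.process_time() - cpu_start
        # A low CPU share of elapsed time suggests I/O bound (meetings, approvals).
        print(f"{label}: wall={wall:.2f}s cpu={cpu:.2f}s share={cpu / wall:.0%}")

    profile("crunching", lambda: sum(i * i for i in range(10_000_000)))  # CPU bound
    profile("waiting on approval", lambda: time.sleep(2))                # I/O bound, ~0%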
Haven880 | 2026-02-18 02:26 UTC
I think both. Most organizations lack someone like Steve Jobs to prime their product lines. Microsoft is a good example where you see their products over the years are mostly meh. Then meetings are pervasive, even more so in most companies due to the convenience of MS Teams. But currently they face reduced demand due to a softer market compared to 2-3 years ago. If you observe no effect while they lay off many and revenue still holds, or at least shows no negative growth, I would surmise that AI is helping. But in corporate, it only counts if it directly contributes to sales numbers.
amrocha | 2026-02-18 02:27 UTC
Then where are all the amazing open source programs written by individuals by themselves? Where are all the small businesses supposedly assisted by AI?
Herring | 2026-02-18 02:33 UTC
> 4% of GitHub public commits are being authored by Claude Code right now. At the current trajectory, we believe that Claude Code will be 20%+ of all daily commits by the end of 2026.

https://newsletter.semianalysis.com/p/claude-code-is-the-inf...

com2kid | 2026-02-18 02:41 UTC
Seemingly every day on Show HN?

Also small businesses aren't going to publish blog posts saying "we saved $500 on graphic design this week!"

8note | 2026-02-18 02:29 UTC
operationally, i think new startups have a big advantage in setting up to be agent-first, and they might not be as good as the old human-first stuff, but they'll be much cheaper and more nimble for model improvements
kamaal | 2026-02-18 02:43 UTC
Startups mostly move fast by skipping the ceremony which large corps have to do mandatorily to prevent a billion-dollar product from melting. It's possible for startups because they don't have a billion dollars to start with.

Once you do have a billion-dollar product, protecting it requires spending time, money, and people to keep it running, because building a new one is a lot more effort than protecting the existing one from melting.

kjellsbells | 2026-02-18 02:31 UTC
Maybe experienced people are the L2 cache? And the challenge is to keep the cache fresh and not too deep. You want institutional memory available quickly (a cache hit) to help with whatever your CPU people need at that instant. If you don't have a cache, you can still solve the problem, but oof, is it gonna take you a long time. OTOH, if you get bad data in the cache, that's not good, as everyone is going to be picking that out of the cache instead of really figuring out what to do.
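For anyone who wants the analogy concrete, here's a toy TTL cache in Python (the class name and the 60-second TTL are invented for illustration): entries expire so the "cache" stays fresh, and a bad entry poisons every hit until it ages out.

    import time

    class InstitutionalMemory:
        def __init__(self, ttl_seconds=60):
            self.ttl = ttl_seconds
            self.entries = {}  # question -> (answer, stored_at)

        def ask(self, question, figure_it_out):
            hit = self.entries.get(question)
            if hit and time.monotonic() - hit[1] < self.ttl:
                return hit[0]         # cache hit: fast, right or wrong
            answer = figure_it_out()  # cache miss: slow, really figuring it out
            self.entries[question] = (answer, time.monotonic())
            return answer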
canyp | 2026-02-18 02:55 UTC
L2? I'm hot L1 material, dude.

But I like your and OP's analogy. Also, the productivity claims are coming from the guys in main memory or even disk, far removed from where the crunching is taking place. At those latency magnitudes, even riding a turtle would appear like a huge productivity gain.

sebmellen | 2026-02-18 02:20 UTC
The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time from thought to completion.

Other white collar business/bullshit-job (à la Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.

Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.

What’s different about coding is that no one actually cares about code as output from a business standpoint. The code is the end destination for decided business processes. I think, for that reason, that code is uniquely well adapted to LLM takeover.

But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.

DaedalusII | 2026-02-18 02:25 UTC
when the work involves navigating a bunch of rules with very ambiguous syntax, AI will automate it the way computers automated rules-based systems with very precise syntax in the 1990s

https://hazel.ai/tax-planning

this software (which i am not related to or promoting) is better at investment planning and tax planning than over 90% of RIAs in the US. It will automate RIAs the way trading software automated stockbroking. This will reduce the average RIA fee from 1% per year to 0.20% or even 0.10% per year, just like mutual fund fees dropped in the early 00s
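The compounding math behind why that fee drop matters is easy to check (illustrative numbers, not from the comment): at the same gross return, the fee gap widens the final balance dramatically over time.

    def final_value(principal, gross_return, fee, years):
        # Net compounding: the fee is skimmed off the return every year.
        return principal * (1 + gross_return - fee) ** years

    p, r, years = 100_000, 0.07, 20
    for fee in (0.01, 0.002):
        print(f"fee {fee:.2%}: ${final_value(p, r, fee, years):,.0f}")
    # fee 1.00% -> ~$320,714; fee 0.20% -> ~$372,756 on the same $100k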

lich_king | 2026-02-18 02:26 UTC
> making slides/decks to communicate those thoughts,

That use case is definitely delegated to LLMs by many people. That said, I don't think it translates into linear productivity gains. Most white collar work isn't so fast-paced that if you save an hour making slides, you're going to reap some big productivity benefit. What are you going to do, make five more decks about the same thing? Respond to every email twice? Or just pat yourself on the back and browse Reddit for a while?

It doesn't help that these LLM-generated slides probably contain inaccuracies or other weirdness that someone else will need to fix down the line, so your gains are another person's loss.

sebmellen | 2026-02-18 02:30 UTC
Yeah, but this is self-correcting. Eventually it will get to a point where the data that you use to prompt the LLM will have more signal than the LLM output.

But if you get deep into an enterprise, you'll find there are so many irreducible complexities (as Stephen Wolfram might coin them), that you really need a fully agentically empowered worker — meaning a human — to make progress. AI is not there yet.

LPisGood | 2026-02-18 02:29 UTC
I’m confused what kind of software engineer jobs there are that don’t involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc?

If you weren’t doing much of that before, I struggle to think of how you were doing much engineering at all, save for some niche, extremely technical roles where many of those questions were already answered. But even then, I’d expect you’re having those kinds of discussions, just more efficiently and with other engineers.

sebmellen | 2026-02-18 02:57 UTC
Well that’s why AI will not replace the software engineer!
maininformer | 2026-02-18 02:20 UTC
Thousands of companies to be replaced by leaner counterparts that learned to use AI towards greater employment and productivity
SilverElfin | 2026-02-18 02:21 UTC
These surveys don’t make sense. Ask the forward thinking companies and they’ll say the opposite. The flood of anti AI productivity articles almost feel like they’re meant to lull the population into not seeing what’s about to happen to employment.
throwawaysleep | 2026-02-18 02:25 UTC
Eh, try using Microsoft Copilot in Word or PowerPoint. It is worthless. If your experience with AI was a Microsoft product, you would think it was a scam too.
yoyohello13 | 2026-02-18 02:52 UTC
Yeah Microsoft has consistently been bragging about how so much code is written by AI, yet their products are worse than ever. Seems to indicate “using AI” is not enough. You have to be smart about when and where.
pram | 2026-02-18 02:25 UTC
It’s funny because at work we have paid Codex and Claude but I rarely find a use for it, yet I pay for the $200 Max plan for personal stuff and will use it for hours!

So I’m not even in the “it’s useless” camp, but it’s frankly only situationally useful outside of new greenfield stuff. Maybe that is the problem?

crazygringo | 2026-02-18 02:26 UTC
Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].

Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show clear benefit to the economy overall.

The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.

And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and the benefit to the few it's helping is currently outweighed by the people wasting time on it and by its expense. But we should recognize that it's very early days: productivity will rise and costs will come down as we learn to integrate it with best practices.

[1] https://en.wikipedia.org/wiki/Productivity_paradox

kamaal | 2026-02-18 02:38 UTC
One part of the system moving fast doesn't change the speed of the system all that much.

The thing to note is, verifying if something got done is harder and takes time in the same ballpark as doing the work.

If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red traffic lights.
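As a sketch of one existing approach to cheap verification, property-based testing: instead of eyeballing the output, state properties the result must satisfy and let the machine hunt for counterexamples (Python with the Hypothesis library; my_sort is a stand-in for freshly generated code).

    from collections import Counter
    from hypothesis import given, strategies as st

    def my_sort(xs):
        return sorted(xs)  # pretend an agent just wrote this

    @given(st.lists(st.integers()))
    def test_my_sort(xs):
        out = my_sort(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))  # output is ordered
        assert Counter(out) == Counter(xs)                # a permutation of the input

Run it under pytest and Hypothesis throws hundreds of generated inputs at it; checking properties is far cheaper than re-deriving the answer, which is exactly the asymmetry the parent is asking for.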

kace91 | 2026-02-18 02:43 UTC
The comparison seems flawed in terms of cost.

A Claude subscription is 20 bucks per worker if you use personal accounts billed to the company, which is not very far from common office tools like Slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970s manual office worker to use an early computer.

Larger implementations like automating customer service might be more costly, but I think there are enough short term supposed benefits that something should be showing there.

meager_wikis | 2026-02-18 02:50 UTC
If anything, the 'scariness' of an old computer probably protected the company in many ways. AI's approachability to the average office worker, specifically how it makes it seem like it's easy to deploy/run/triage enterprise software, will continue to pwn.
gruez | 2026-02-18 02:52 UTC
How viable are the $20/month subscriptions for actual work, and are they loss-making for Anthropic? I've heard both that people need higher tiers to get anything done in Claude Code and that the subscriptions are (heavily?) subsidized by Anthropic, so the "just another $20 SaaS" argument doesn't sound too good.