Massive AI Spending doesn't mean AGI is close
AI companies are being valued like SaaS companies - but they are not SaaS companies. Is a gigantic AI crash coming? I've been thinking about the feverish market excitement we're seeing in AI investment. I build AI systems and use them constantly. I want strong AGI. However, I'm skeptical of both:
a) the feasibility of AGI in the next 10 years, and
b) the economics of AI investment
Focusing on the economics of AI investment, my core thesis is this:
We are treating AI companies like high-margin SaaS companies, but their underlying economics are built on a multi-layered chain of subsidies. There is massive investment in AI. Investors require a return to keep investing. The only plausible return is AGI. AGI is highly unlikely. So a gigantic AI crash is likely.
So much is possible if price is not important. Almost 60 years ago we could do things we can't do today, like put people on the moon, because we poured unbelievable amounts of resources into the Apollo program. It's technically possible, just not at a feasible cost. AI feels that way right now. There are a few places where the economics just aren't making sense to me. AGI looks like the only route out, and it doesn't look like it's happening.
Plateau?
(Charts: Artificial Analysis)
The Subsidy Chain of Delusion - AI companies are not CRUD companies
The way I see it, you've got these three, maybe four, tiers of providers in the world of AI.
At the end of the chain, selling to users, you've got application companies like Cursor, which appears to be the fastest-growing SaaS company ever by some margin.
But this isn't CRUD software where compute is cheap. Compute is not cheap. A huge proportion of their revenue is going straight to Anthropic. I'd be shocked if they had anything other than a pretty low gross margin. This is not SaaS.
But Anthropic itself is losing huge amounts of money, subsidized by VC cash so it can subsidize its customers. Anthropic then pays the big cloud providers. Anthropic has an alliance with AWS; OpenAI has a similar one with Microsoft. And these cloud providers are hemorrhaging cash - spending $50 to $100 billion a year each on AI cloud buildout. All of this money flows to Nvidia, the only truly profitable player in this chain.
This is like the opposite of the US defense industry's cost-plus model. Here, you're paying vastly less than the true cost.
When you pay for Cursor, you're paying a tiny fraction of what it actually costs to produce. The true gross margins here are significantly negative.
Which means prices need to come up enormously. I wouldn't be surprised if they had to be 20 times higher. Would you pay $400 a month for Cursor? What about $4000 a month?
The Escape Hatch Is Jammed
The only way out of this is if the cost per unit of intelligence declines faster than the enormous capex required to achieve it. The astronomical valuations we're seeing must be based on the idea that the intelligence we have today is just the start (see the plateau diagrams above). But those valuations also critically depend on that future intelligence being economically feasible.
All we see is that costs are increasing while intelligence seems to be plateauing.
Cost per unit of intelligence needs to come down.
But this looks more promising!
Until you see this:
To hit AGI, intelligence needs to scale more than linearly with cost! We do not have unlimited free energy.
(For all the diagrams in this piece, remember that 'benchmaxxing' occurs. Model companies know the popular benchmarks, and their incentive is to score as highly as possible on them. In the case of Meta’s Llama 4, this reportedly extended to training explicitly on test sets.)
The most valuable tasks are multi-step. Models are appalling at multi-step tasks
See how terrible frontier models are at multi-step actions, like browsing the web. Remember that 95% accuracy per step over a 20-step process means you only succeed 36% of the time (0.95^20 ≈ 0.36).
You need a very high per-step accuracy to succeed over a multi-step process. At 99.9%, for example, you succeed 98% of the time over 20 steps (0.999^20 ≈ 0.98).
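The compounding above is easy to check. A minimal sketch, assuming each step succeeds independently with the same probability (`task_success_rate` is a name I've made up for illustration):

```python
def task_success_rate(per_step_accuracy: float, n_steps: int) -> float:
    """Probability that every one of n_steps succeeds, assuming
    independent steps with identical per-step accuracy."""
    return per_step_accuracy ** n_steps

# How a 20-step task degrades at different per-step accuracies.
for p in (0.95, 0.99, 0.999):
    print(f"per-step {p:.3f} -> 20-step success {task_success_rate(p, 20):.1%}")
# prints:
#   per-step 0.950 -> 20-step success 35.8%
#   per-step 0.990 -> 20-step success 81.8%
#   per-step 0.999 -> 20-step success 98.0%
```

Note how even 99% per step still fails nearly one run in five over 20 steps; only at three nines does the multi-step task become reliable.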
This is a huge problem for creating real economic value.
Also, distilled models and open-source competitors like Qwen3 and Kimi K2 are already very close to the big players. This commoditization makes the moats of companies like OpenAI and Anthropic look even more precarious.
The Big Tech Paradox: Why They Win Even if AI Loses
So if the economics are so shaky, why are the big players - Meta, Google, Microsoft - throwing in tens of billions of dollars a year?
My friend and I discussed this, and I think I was wrong about one thing. This isn't just a bet for them; it's an existential necessity. If Meta declines to participate and Google achieves a breakthrough, Meta is dead. A few tens of billions a year is a small price to pay for a chance of a multi-trillion-dollar galactic empire.
Another point is that this investment is disposable. It's all coming out of surplus cash from their unbelievably profitable core businesses.
This leads to a crucial point: we can't infer the health of the AI industry from their actions. Their behavior - spending tens of billions - would be similar whether the probability of AGI success is 100% or 2%. We are reading their actions as a signal that the probability is close to 100%, but it could be closer to 2%.
And if there is a huge AI crunch? These companies are going to be fine in the long term. The AI industry won't be, but they will.
Here’s the crazy win-win part: they might even be better off. If Meta is making, say, $70 billion a year in profit but spending $40 billion of it on AI capex that turns out to be worthless, what happens when it stops? That $40 billion in spending vanishes, and their earnings more than double overnight. The stock might get hammered for reduced growth potential, but it would be bolstered by earnings going through the roof. It could roughly net out.
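The arithmetic in that paragraph, spelled out. These are the illustrative round numbers from the text, not Meta's actual financials:

```python
# Back-of-the-envelope: hypothetical figures from the paragraph above.
profit_before_ai = 70e9   # assumed annual profit, before AI capex
ai_capex = 40e9           # assumed annual AI spend, paid out of that profit

earnings_now = profit_before_ai - ai_capex  # $30B while the AI bet is on
earnings_if_stopped = profit_before_ai      # $70B if the capex vanishes

print(f"earnings multiply by {earnings_if_stopped / earnings_now:.2f}x")
# prints: earnings multiply by 2.33x
```

A 2.33x jump is the "more than double overnight" in the paragraph above.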
The hyperscalers - Google, Meta, Microsoft - are just extremely good businesses, completely independently of AI.
The model companies, however, are crippled without their compute backers. And this creates the final paradox: an AI crash is a win-win for the hyperscalers. It provides an incredibly good acquisition opportunity. If the model companies go bust, Microsoft can just finish the job of acquiring OpenAI. AWS and Google can eat Anthropic.
It all seems to point in the same direction: the current AI frenzy is built on a foundation of shaky economics that seems unlikely to last. The big players are insulated and even stand to gain from a crash that would wipe out the pure-play AI companies. As Warren Buffett says, be fearful when others are greedy. And right now, people look very, very greedy.
--
That’s all so far. I’ll likely research more of the points, and add some more diagrams. Let me know if you’ve got any counterpoints. Would very much like to add your name to the acknowledgements.