Another Warning That The AI Bubble Is Near Bursting

We’ve heard it from Gary Smith and Jeffrey Funk. Now we are hearing it again, this time from AI researcher Gary Marcus: the AI bubble created, in part, by large language models (LLMs), the technology behind chatbots, is nearing its peak:

The economics are likely to be grim. Sky-high valuations of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence. As I have always warned, that’s just a fantasy. There is no principled solution to hallucinations in systems that traffic only in the statistics of language without explicit representation of facts and explicit tools to reason over those facts.

LLMs will not disappear, even if improvements diminish, but the economics will likely never make sense: additional training is expensive; the more scaling, the more costly. And, as I have been warning, everyone is landing in more or less the same place, which leaves nobody with a moat. LLMs, such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive. When everyone realizes this, the financial bubble may burst quickly; even Nvidia might take a hit when people realize the extent to which its valuation was based on a false premise.

Gary Marcus, “CONFIRMED: LLMs have indeed reached a point of diminishing returns,” November 9, 2024
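To see why “the more scaling, the more costly” bites, recall that published LLM scaling-law studies report loss falling roughly as a power law in compute. Under such a curve, each equal percentage improvement in loss requires multiplying total compute by a constant factor, so every successive gain costs more absolute compute than the last. Here is a minimal sketch; the constants A and alpha are made-up illustrative values, not measurements from any actual model:

```python
# Illustrative only: assumes loss falls as a power law in compute,
# loss(C) = A * C ** -alpha, as in published LLM scaling-law studies.
# A and alpha are made-up values chosen for the demo, not measured constants.

A, alpha = 10.0, 0.05

def loss(compute):
    return A * compute ** -alpha

compute = 1.0
for step in range(1, 6):
    # Compute multiplier needed to cut the current loss by 10%:
    factor = 0.9 ** (-1 / alpha)   # about 8.2x when alpha = 0.05
    compute *= factor
    print(f"after step {step}: loss {loss(compute):.3f}, "
          f"total compute x{compute:,.0f}")
```

With these assumed constants, each 10% improvement in loss costs roughly 8x more compute, and five such steps already multiply the compute bill by tens of thousands. Whatever the true constants are, a power law means the bill for the next increment always dwarfs the bill for the last one, which is exactly the squeeze Marcus describes.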

Maybe the reason there is no principled solution to AI hallucinations is that we cannot create artificial human minds. If the bubble does burst, we will have a good opportunity to test that idea.

You may also wish to read: Model collapse: AI chatbots are eating their own tails. The problem is fundamental to how they operate. Without new human input, their output starts to decay. Meanwhile, organizations that laid off writers and editors to save money are finding that they can’t just program creativity or common sense into machines.
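The decay has a simple statistical core: a model trained mostly on its own outputs keeps losing the tails of the original data. A toy simulation (not from the linked article; a normal distribution stands in for the human training data) shows the effect:

```python
import random
import statistics

# Toy illustration of model collapse: fit a simple "model" (a normal
# distribution) to data, then repeatedly refit it to samples drawn from
# its own previous fit. With no fresh human data entering the loop, the
# fitted spread typically drifts downward and diversity is lost.
# The distribution and sample size are arbitrary choices for the demo.

random.seed(0)
mu, sigma = 0.0, 1.0   # generation 0: stands in for real human data
for gen in range(1, 16):
    synthetic = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(synthetic)     # refit on the model's own output
    sigma = statistics.stdev(synthetic)
    print(f"generation {gen:2d}: mean {mu:+.3f}, std {sigma:.3f}")
```

Run long enough, the fitted standard deviation tends toward zero: the “model” ends up confidently reproducing an ever narrower sliver of what it started with.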
