OpenAI proudly launched version 4.5 of ChatGPT at the end of February, describing it as “better” and more powerful than ever. But observers—and OpenAI’s own boss—acknowledged that the application lacked the computing power to continue growing at the same speed as before. What if these companies were hitting a wall?
The question isn’t being asked only about ChatGPT 4.5 or a possible ChatGPT 5.0, which many were expecting this winter. The issue was already raised at the end of January, when a Chinese competitor, DeepSeek, arrived on the market. Its generative AI was cheaper to develop and required much less computing power—while being free for its users.
Could it be, asks the popular magazine New Scientist, among others, that we are witnessing “the end of the rapid progress of this technology and perhaps even the bursting of a bubble”?
Companies like OpenAI are tight-lipped about the computing power they have at their disposal. But one thing is certain: at the launch of ChatGPT 4.5 on February 27, CEO Sam Altman made no secret of the fact that it was an “expensive model” and that the company was running up against the limits of its GPUs (graphics processing units), the chips that give computers the ability to process such astronomical amounts of data.
The warning signs had been there for a while. In April 2023, Gary Marcus, a professor of psychology and neural science at New York University, said he believed that “large language models” like ChatGPT were already entering a “period of diminishing returns.” This was partly because their growth relied too heavily on an ultra-rapid increase in the planet’s computing power, and partly because the content used to “train” these applications—essentially, all the text and images humans produce—was growing more slowly than the machines needed.
And that’s without taking into account the growing environmental footprint and the fake news spread by these AIs. By the end of 2024, observers were noting that the performance of the latest versions of conversational agents—for example, in law or medical exams—had shown no dramatic progress compared to the 2023 versions.
But for the most vocal critics, it’s much more than a slowdown in growth. Technology journalist Ryan Broderick called it a complete scam in late January: all the “great sages” of Silicon Valley “decided that AI was the future” and that it definitely had to be American. They “immediately started throwing other people’s money at the furnace, promising that it would eventually bring about the new revolution that they, incidentally, were pioneering. And this weekend, thanks to DeepSeek, we learned not only that they never needed all that money to build the future, but that they weren’t even that good at building it.”
It may be too early to decide between the optimists and the pessimists. But where they do agree is in recognizing that AI progress based solely on an ultra-rapid increase in computing power is unsustainable. “The current way of training and deploying large language models is grossly inefficient,” Sasha Luccioni of the AI company Hugging Face comments in New Scientist. “Of course it’s destined to hit a wall.”