Virtually every month brings announcements of new AI models that surpass their predecessors, the latest of them even featuring reasoning capabilities. But these advances come with a worrying downside: AI hallucinations.
Hallucinations in artificial intelligence, especially in large language models (LLMs), are one of the most serious problems facing the big AI companies. A hallucination occurs when a model presents information or reasoning as a firm statement of fact when, in reality, it is inaccurate or untrue.
For example, an AI might confidently generate a fake historical quote or invent a non-existent scientific study, presenting it as fact. It might claim that Albert Einstein won a Nobel Prize for his theory of relativity — when in fact, he received it for his work on the photoelectric effect.
The worrying thing is that OpenAI's new models, o3 and o4-mini, hallucinate more than their predecessors.
According to OpenAI's internal testing, both o3 and o4-mini, which feature reasoning capabilities, hallucinate more frequently than the company's previous reasoning models, and even more than traditional, non-reasoning models like GPT-4o.
Worse still, even OpenAI doesn't know exactly why this is happening, and if the trend continues, its future models may not be reliable.
In its latest white paper on o3 and o4-mini, OpenAI notes that more research is needed to understand why hallucinations worsen as reasoning models scale up.
The company reports that o3 hallucinated in response to 33% of the questions on PersonQA, its internal benchmark for measuring a model's accuracy on facts about people. That is roughly twice the hallucination rate of its previous reasoning models.
o4-mini performed even worse on PersonQA, hallucinating 48% of the time.
Likewise, external tests conducted by Transluce, a nonprofit AI research lab, have also found evidence that these new models tend to make things up more than ever: “Our hypothesis is that the type of reinforcement learning used for the ‘o’ series models may amplify problems that are typically mitigated (but not entirely eliminated) by standard post-training processes,” said Neil Chowdhury, a researcher at Transluce.
While hallucinations can sometimes make models more imaginative and creative, when the goal is clear, concise, and accurate information, the hallucination rate should be virtually zero.
One way to improve model accuracy is to add web search capabilities, although not everything found on the web is accurate or trustworthy either.
“Addressing hallucinations in all of our models is an area of ongoing research, and we’re constantly working to improve their accuracy and reliability,” OpenAI spokesperson Niko Felix told TechCrunch.