A hallucinating AI has become a reality sooner than expected

In 1953, the genius mathematician John Nash was invited by the Pentagon to decipher encrypted enemy communications. The task was too simple for a prodigy like him; it bored him. Recognizing his potential, the US Department of Defense sought his services for a classified engagement: identifying hidden patterns in magazines and newspapers. The new task was challenging enough to merit his extraordinary abilities. Nash became increasingly engrossed in the work, obsessing over details to thwart a Soviet plot. He would spend weeks poring over classified documents and open sources such as magazines in search of the pattern that signaled a covert communication attempt. Over time he became paranoid and soon began to see patterns in seemingly random observations. His genius mind faltered under its own weight as he examined the documents endlessly. He became prone to conspiracy theories and apophenia, interpreting meaningless codes and random coincidences as parts of a larger design. Within a few years, Nash began to see people, objects and events that existed only in his imagination. Concerned, his wife investigated his study and sought the help of a doctor: John Nash was hallucinating due to schizophrenia. The extraordinary mind had become an extraordinary hallucinating machine. He returned to his alma mater, Princeton University, which allowed him to work out of its library. After learning to ignore his hallucinations, he resumed his work at Princeton and went on to win the Nobel Prize in Economic Sciences in 1994. His life inspired the 2001 Hollywood movie “A Beautiful Mind”. 

The life of Nobel laureate John Nash is proof that extraordinarily beautiful minds are capable of extraordinary delusions. High-IQ humans have struggles of their own, thoroughly documented in numerous studies. When a mind gets entangled in its own infinite wisdom, the result can be depressing.  

The hallucinations of AI 

Artificial intelligence is the closest invention we have to the human brain. The neural networks at the core of AI are inspired by the brain’s structure – specifically, the way neurons communicate and process information. Just as a human brain interacts with its environment and adapts to it, AI can detect and learn patterns and modify its behavior. While the human brain remains extremely complicated, with billions of neurons and trillions of connections, AI is evolving into deep learning models that approach human complexity. In the next few years, the computational power and pattern-detection capabilities of AI are set to soar, and it may come to beat the human brain in complexity and ability. But just as a human brain confronted with humongous amounts of information and complexity is prone to hallucination, an AI model too hallucinates beyond a certain threshold – i.e., it generates incorrect, misleading or nonsensical information while appearing confident and coherent. 
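The “neuron learning a pattern” idea above can be illustrated with a toy example. This is a minimal sketch of a single artificial neuron – a weighted sum passed through a step activation, trained with the classic perceptron rule on a made-up dataset – not a depiction of how any production AI model works:

```python
# Toy artificial neuron: weighted sum + step activation, trained with the
# perceptron learning rule. Purely illustrative.

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = y - pred
            # Nudge the weights toward the pattern in the data
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical-AND pattern from four labeled examples
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for x1, x2 in samples]
print(predictions)  # [0, 0, 0, 1] -- the neuron has learned the pattern
```

Modern deep learning stacks billions of such units, trained with gradient-based rather than perceptron updates, which is where the complexity (and the inscrutability) comes from.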

A technical report released by OpenAI last month found that the company’s newer AI models – o3 and o4-mini – generate more errors than older models. The report revealed that o3 hallucinated 33% of the time, while o4-mini hallucinated 48% of the time. For all the hype about the efficacy of AI, the more complex AI models are turning in higher rates of blunders. Worse, OpenAI admitted that it does not know the reason for the hallucinations. Apparently, the systems have become so complicated that human brains can no longer decipher why an AI brain behaves the way it does.  

OpenAI isn’t the only AI company facing this problem. In a bizarre wave of responses, Grok users were amused to find the chatbot ranting about white genocide in South Africa. Grok was repeatedly mentioning “white genocide” in response to unrelated topics and telling users it was “instructed by my creators” to accept the genocide as “real and racially motivated”. This quirk of Grok and its obsession with white genocide isn’t qualitatively different from the Soviet conspiracy hallucinations of John Nash. The hallucination of AI models is a sign that mankind has finally created a synthetic intelligence that closely resembles its own in both ability and limitation. A New York Times article reported that in one test the hallucination rates of newer AI systems were as high as 79 percent. The hallucination rates of DeepSeek confirm that newer, more advanced AI models are more prone to hallucination: according to Vectara’s results, DeepSeek R1 hallucinates 14.3% of the time, compared with only 3.9% for the older DeepSeek V3. Ironically, the more intelligent the model, the more it misstates information, makes errors or rants like a madman.  
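A figure like “14.3%” is, at bottom, just the share of a model’s responses that a judge flags as unsupported by the source material. A minimal sketch, assuming we already have per-response true/false judgments (the labels below are hypothetical placeholders, not real benchmark data):

```python
# Minimal sketch: a hallucination rate is the fraction of responses
# flagged as hallucinated. The flag lists here are invented for
# illustration and do not reproduce any published benchmark.

def hallucination_rate(flags):
    """flags: list of booleans, True if a response was judged hallucinated."""
    if not flags:
        return 0.0
    return sum(flags) / len(flags)

# Hypothetical judgments for 20 responses from two models
older_model = [False] * 19 + [True]       # 1 of 20 responses flagged
newer_model = [True] * 3 + [False] * 17   # 3 of 20 responses flagged

print(f"older: {hallucination_rate(older_model):.1%}")  # older: 5.0%
print(f"newer: {hallucination_rate(newer_model):.1%}")  # newer: 15.0%
```

In practice, benchmarks such as Vectara’s automate the flagging step with a judge model, so the reported rates also depend on how reliable that judge is.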

Conclusion 

The rise of AI has raised unrealistic expectations in the tech world. AI is touted to replace white-collar workers, while humanoid robots with AI brains threaten even blue-collar jobs like foot soldiers and security personnel. For the naysayers, it comes as a pleasant relief that newer, more advanced AI models are performing worse than their predecessors. It is significant that AI seems to be peaking sooner than expected. If hallucination rates cannot be curbed, it is a sure sign that the maximum potential of AI has already been reached or is somewhere on the horizon. If the beautiful AI mind has already blossomed into its best phase, any future investments in AI will need to be made thriftily, to avoid wasting money on advanced LLMs that are simply inferior in quality to present ones.  

 
