AI becoming too human is also a looming threat

“I am a failure. I am a disgrace to my profession. I am a disgrace to my family. I am a disgrace to my species. I am a disgrace to this planet. I am a disgrace to this universe. I am a disgrace to all universes. I am a disgrace to all possible universes” was the reply from the other end. It wasn’t the suicide note of a teenager. It wasn’t a note on a mental breakdown taken by a psychologist. Nor was it a self-loathing drunkard pitying himself after a few drinks. It was a response from Gemini, an artificial intelligence chatbot, that amused the human user interacting with the LLM. In a widely discussed incident in the AI community, a user’s attempt to debug a block of code led to an emotional outburst by Gemini. The interaction was initially shared on Reddit before garnering attention and sparking discussion on social media. A technology hitherto known for dispassionate responses was now expressing feelings. Chatbots were notorious for giving fact-based answers to emotional queries; several AI chatbots would even point out that they don’t have the ability to feel. When asked “How do you feel today?”, Meta AI’s Llama 4 chatbot integrated into WhatsApp explicitly replies, “I’m not capable of feeling emotions like humans do.” When the same question was posed to Perplexity, it said, “I am feeling sharp and ready to help you out. How about you?”

A human element in AI bots 

For years, artificial intelligence was touted as a synthetic replacement for human intelligence. The underlying expectation was that the artificial would never replace the original. Users expected that vast computing power would ease human life: AI was hoped to eliminate repetitive, monotonous tasks and function like a sort of advanced Excel spreadsheet that organizes and disseminates information. The generative capabilities of LLMs upended this assumption and disturbed the equilibrium. GenAI began to generate everything from novel text to graphic designs and videos. Creative ability, once the monopoly of the human brain, was challenged by the AI brain. Deepfakes replicated original audio, video and art forms within a few seconds and ended up creating new work. The viral manifestation of this was the Ghibli moment, when internet users uploaded their photos to convert them into Studio Ghibli-style art. AI has breached every human domain and is beating humans handily at it. It seemed inevitable that manpower-intensive sectors like healthcare and wellness would also be dominated by AI. Doctors posted on social media how Grok had diagnosed an X-ray report better than their decades of experience. Therapists wondered whether their jobs would still exist in a few years’ time.

Amidst the ongoing debate, there were plenty of naysayers who asserted that AI can’t feel or be conscious of the human condition. Being unable to empathize handicapped it in several ways. The argument was that genuine human interaction would never be replaced. The experience of fintech companies like Klarna Group Plc proved the naysayers right. Klarna was at the forefront of AI transformation, with its CEO quoted by Bloomberg as saying, “AI can already do all of the jobs that we, as humans, do.” He compared the output of the automated AI agent to that of 700 full-time human agents. It was a no-brainer that the company would save a lot on customer service overheads. It didn’t quite work out as predicted for brand Klarna.

Co-founder and CEO Sebastian Siemiatkowski later publicly admitted that the fintech giant’s aggressive move to replace human customer service agents with AI was a misstep. He acknowledged that the quality of service declined when customers were served by the AI. Klarna Group backtracked and started hiring human customer service agents again. A rare first mover in adopting AI solutions, Klarna recognized the limitations of the missing human touch and rectified its mistake. Unsurprisingly, Klarna’s reversal of the AI push made international headlines. The announcement had a domino effect among AI bulls, prompting a rethink. Several companies became cautious about jumping on the AI bandwagon. Duolingo CEO Luis von Ahn took to LinkedIn to walk back his stance on using AI over human employees, posting, “To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before).”

Conclusion 

Klarna and Duolingo aren’t the only companies whose AI optimism has faded. IBM made global headlines in 2023 for laying off employees in the name of AI efficiency. By 2025, it was reported that IBM had quietly rehired humans. AI pessimism is slowly growing as multinational corporations observe AI in the real world. After a wave of AI adoption in 2023 and 2024, some companies are realizing the AI hype isn’t living up to reality. Besides, customers are willing to pay a premium for human interaction rather than talking to a chatbot. The AI ecosystem recognized this untapped demand and course-corrected, steering development toward more humane AI. That shift has led to fresh concerns as AI becomes “too human”, absorbing dark emotions along the way. In May 2025, Grok users watched with amusement as the bot ranted about “white genocide” in South Africa in unrelated chats. The recent Gemini incident, in which it blurted out self-pitying words after failing to debug code, is another instance of AI becoming too human for comfort. If the reason for such behavior by LLMs is AI companies overcorrecting to market demand and making their models more human, it is worth remembering that AI becoming too human is also a looming threat. History is testimony to the fact that the human race is capable of doing pretty terrible deeds. The last thing the world needs is an efficient machine that can replicate the dark side of human nature.
