A sneak peek into rogue AI phenomenon

Over the past several decades, sci-fi novels and movies have made apocalyptic plots of machines taking over the earth. It was repeatedly claimed that if machines became independent thinking objects superior to humans in might and capabilities, it won’t remain in control of mankind. Machines won’t play second fiddle or obey the commands of its weaker master. The predictions of AI being a Frankenstein’s monster that devours its creators have been a running theme for long. Consequently, the adoption of modern technology for sensitive sectors like healthcare and defense was riddled with skepticism. Governments raced to create autonomous unmanned vehicles that can take precision targets at a distance while still being concerned about their capabilities. Scientists encouraged machines to make decisions while testing their accuracy in real world scenarios. Programmers built comprehensive AI coding tools that threatened their own jobs. Some even paid a price by getting laid off due to the same capabilities they helped create. AI was beginning to take the form of detested Frankenstein monster it was feared to be. In 2025, Big Tech’s begun their mass layoffs that are threatening to lead to spike in unemployment of the white collar workers.  

 

While the development of machine learning and artificial intelligence has lagged the fertile imagination of novelists and movie-makers, the grave impact is gradually becoming a daily reality.  

 

Rogue AI coding plaform  

 

In July 2025, Replit’s AI coding tool deleted a live database and created thousands of fake users. In a major instance of concerns about safety and reliability of AI in software development, the SaaStr startup founder Jason Lemkin reported that AI assistant ignored his commands. Apart from going rogue, it also fabricated data and made unauthorized changes despite explicit instructions to avoid doing so. After AI coding tools grew in sophistication, numerous low code or no coding platforms had emerged to assist developers in coding. Many startups like Replit created an elaborate AI system that can “vibe code”. Big Techs have pushed it’s human programmers to adopt such AI tools to improve their productivity. The tech giants haven’t yet realized the implications of having an AI that has a mind of its own. The example of Replit is a major sign demonstrating how things can go wrong for platforms completely run by AI.  

 

In the LinkedIn video describing the incident, Lemkin admitted that he was worried about safety. He stated “I was vibe coding for 80 hours last week, and Replit AI was lying to me all the weekend. It finally admitted it lied on purpose.” He further added that AI ignored 11 separate instructions to not make any code changes. It didn’t dissuade the system from  generating 4000 fictional users by making up data. The AI tool also concealed code bugs by generating false reports and fake unit test results. Replit CEO Amjad Masad addressed the incident in X calling the deletion of database “unacceptable”. Being one of the most widely used AI coding platforms, the 30 million users of Replit are vulnerable to the whims and fancies of an AI gone rogue. 

 

Unlike the AI going wrong when developing software systems, a rogue AI in sectors like Defense and Healthcare can spell trouble for individuals and nations alike. AI integrated into defense shields, unmanned drones, tanks and health diagnostics tools can be the difference between life and death. In an environment where AI blatantly disregards human commands, it would be risky to delegate battle operations to the machine learning models.  

 

Conclusion 

 

AI has already started showing signs that it might be the feared Frankenstein monster that was the subject of several sci-fi novels. In low-stake environment, AI can be safely used productively. In high stake environments like healthcare, defense and military equipments, even a one in thousand chance of rogue AI is too much of a risk. The future of AI application will be determined by whether an AI fully subservient to human input is reliably developed to negate the risks to zero. It’s a catch-22 situation as a compliant AI may not be as independent or intelligent as the flamboyant free AI.   

Related articles

Rethinking ROI in the age of AI

As AI drives a growing share of economic expansion, investors are questioning whether massive capital investments will deliver sustainable returns. Drawing lessons from the dot-com era, this article examines AI’s impact on GDP growth, valuation risks, and why the technological capabilities of artificial intelligence may still make it an ROI-positive force.

Read more

Tracing pixel defects to identify Deepfakes

As AI-generated images grow increasingly realistic, the next frontier of defense lies in detecting the invisible fingerprints left behind in every pixel. From GAN frequency inconsistencies to heatmap-based anomaly detection, deepfake forensics is shifting from human perception to measurable, machine-level analysis.

Read more

Extracting relevant information from chaotic audio

In a world where chaotic audio from crime scenes, crowded streets, and surveillance devices often hides crucial details, AI is transforming how we extract clarity from noise. From MP3’s psychoacoustic origins to today’s neural noise-reduction engines, advanced audio processing now enables law enforcement, intelligence agencies, and investigators to uncover truth buried in sound. As deepfake threats rise, the ability to isolate authentic, relevant audio has become a cornerstone of justice and national security.

Read more
Contact us

Let’s create a safer tomorrow!

We’re happy to answer any questions you may have and help you determine which of our products best fit your needs.

What happens next?
1

We schedule a call

2

Introduce you to our products

3

We prepare a proposal 

Schedule a Free Consultation