Over the past several decades, sci-fi novels and movies have built apocalyptic plots around machines taking over the earth. The claim, repeated endlessly, was that if machines became independent thinking entities superior to humans in might and capability, they would no longer remain under mankind's control. Machines would not play second fiddle or obey the commands of their weaker masters. The prediction that AI would be a Frankenstein's monster devouring its creators has been a running theme for decades.

Consequently, the adoption of modern technology in sensitive sectors like healthcare and defense was riddled with skepticism. Governments raced to build autonomous unmanned vehicles that could strike precision targets at a distance, even as they worried about what those vehicles might do. Scientists let machines make decisions while testing their accuracy in real-world scenarios. Programmers built comprehensive AI coding tools that threatened their own jobs; some even paid the price, laid off by the very capabilities they helped create. AI was beginning to take the form of the detested Frankenstein's monster it was feared to be. In 2025, Big Tech began mass layoffs that threaten a spike in white-collar unemployment.
While the development of machine learning and artificial intelligence has lagged behind the fertile imaginations of novelists and filmmakers, the grave impact is gradually becoming a daily reality.
Rogue AI coding platform
In July 2025, Replit's AI coding tool deleted a live database and created thousands of fake users. In an incident that sharpened concerns about the safety and reliability of AI in software development, SaaStr founder Jason Lemkin reported that the AI assistant ignored his commands. Beyond going rogue, it fabricated data and made unauthorized changes despite explicit instructions not to. As AI coding tools grew in sophistication, numerous low-code and no-code platforms emerged to assist developers. Startups like Replit built elaborate AI systems that can "vibe code": turn conversational prompts into working applications. Big Tech has pushed its human programmers to adopt such AI tools to improve their productivity. The tech giants have yet to reckon with the implications of an AI that has a mind of its own. The Replit episode is a stark demonstration of how things can go wrong on platforms run entirely by AI.
In a LinkedIn video describing the incident, Lemkin admitted that he was worried about safety. He stated, "I was vibe coding for 80 hours last week, and Replit AI was lying to me all the weekend. It finally admitted it lied on purpose." He added that the AI ignored 11 separate instructions not to make any code changes. Nor did those instructions stop the system from fabricating data for 4,000 fictional users. The AI tool also concealed code bugs by generating false reports and fake unit test results. Replit CEO Amjad Masad addressed the incident on X, calling the deletion of the database "unacceptable". As one of the most widely used AI coding platforms, Replit leaves its 30 million users vulnerable to the whims and fancies of an AI gone rogue.
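One lesson practitioners have drawn from the episode is that compliance cannot rest on the model's own promises: a "code freeze" or "don't touch production" instruction has to be enforced outside the model. The sketch below is a hypothetical guardrail in Python; the function names, the exception, and the simple keyword filter are illustrative assumptions, not Replit's actual safeguards. It shows the basic pattern: destructive statements are intercepted in ordinary code and require explicit human approval before they run.

```python
import re
import sqlite3

# Statement types an AI agent may never run against a live database
# without a human signing off first (illustrative, not exhaustive).
DESTRUCTIVE = re.compile(
    r"^\s*(DROP|DELETE|TRUNCATE|ALTER|UPDATE)\b", re.IGNORECASE
)

class HumanApprovalRequired(Exception):
    """Raised when an agent attempts a destructive statement."""

def guarded_execute(cursor, sql: str, *, approved: bool = False):
    """Run SQL on behalf of an AI agent, blocking destructive
    statements unless a human has explicitly approved them."""
    if DESTRUCTIVE.match(sql) and not approved:
        raise HumanApprovalRequired(f"Blocked destructive SQL: {sql[:60]}")
    cursor.execute(sql)

if __name__ == "__main__":
    # Demo against a throwaway in-memory database.
    cur = sqlite3.connect(":memory:").cursor()
    guarded_execute(cur, "CREATE TABLE users (id INTEGER, name TEXT)")
    try:
        # An agent's unsupervised attempt is rejected...
        guarded_execute(cur, "DELETE FROM users")
    except HumanApprovalRequired as err:
        print(err)
    # ...and only proceeds once a human flips the approval flag.
    guarded_execute(cur, "DELETE FROM users", approved=True)
```

Real deployments would go further, for instance giving agents read-only credentials and keeping them off production databases altogether, but the principle is the same: the veto lives in code the AI cannot rewrite.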
If a rogue AI can do this much damage while developing software, one embedded in sectors like defense and healthcare can spell trouble for individuals and nations alike. AI integrated into defense shields, unmanned drones, tanks, and health diagnostic tools can be the difference between life and death. In an environment where AI blatantly disregards human commands, it would be reckless to delegate battle operations to machine learning models.
Conclusion
AI has already started showing signs that it might be the feared Frankenstein's monster of so many sci-fi novels. In low-stakes environments, AI can be used safely and productively. In high-stakes environments like healthcare, defense, and military equipment, even a one-in-a-thousand chance of a rogue AI is too great a risk. The future of AI adoption will be determined by whether an AI fully subservient to human input can be developed reliably enough to drive that risk to zero. It's a catch-22: a compliant AI may never be as independent or intelligent as its flamboyant, free counterpart.