Will fear psychosis of deepfakes lead to mass exodus of women?

In 2020, Facebook launched a new safety feature in India called “Lock your profile”. The feature was a response to persistent feedback from users, especially women, with privacy concerns. Locking a profile shut strangers out of the photos, timeline and posts of a Facebook account. With a single change of settings, Indian women could restrict their network to genuine friends and eliminate potential stalkers; a profile's content became visible only after a friend request was accepted. Initially, it was expected that the “Lock your profile” feature would remain limited to a certain type of user. But with the rise of fake images, sock-puppet accounts and cybercrime on social media platforms, users increasingly gravitated towards a secure locked profile rather than an identity vulnerable to misuse. Platforms like Instagram, too, offered the option to make accounts private, limiting posts to accepted followers.

In the past few years, the rise of GenAI tools has upset the apple cart of social media risks. It is no longer enough to simply lock your account, because even a single image is sufficient for malicious actors. Using GenAI tools, it is easy to create fake pictures with minimal input and a simple prompt. Generative AI refers to the ability of AI to replicate an image, audio clip or video with remarkable precision; GenAI tools can create text, clone voices or generate fake videos. Deepfake porn created by “nudify” apps is emerging as the biggest concern for social media users. Such tools can create a deepfake of any individual from nothing more than their images, voice clips or video samples.

Deepfake menace

In October 2025, investigators in Sydney were alerted to explicit deepfake photos on the internet using the faces of female high school students in the city. The NSW police began investigating after families learned that digitally altered photos were circulating online, when a male student was sent one of the images and told the school. As the matter became public, parents and students panicked, wondering whether deepfakes of them were among those circulating online. The easy availability of deepfake tools is creating a fear psychosis among internet users, especially women who actively post images and videos online.

In the UK, a man was charged with altering images of a former classmate after taking two photos from her Instagram account. He created a digitally manipulated image of her in a state of undress and sent it to his friends. The victim felt humiliated and reported it to the police. Instances like this are not isolated. Deepfake porn victims are rising by the day as nudify apps crop up online; Telegram hosts several bots that can instantly “undress” a person in an image. As the number of free AI tools grows, deepfake cases are set to rise exponentially, creating stress and trauma for innocent victims. Even a single social media post is a risk if malicious actors choose to perpetrate deepfake crimes.

As per a report, deepfakes have surged 550% since 2019, and over 75% of Indians viewed deepfake content in the past year. Deepfake porn is rising across social media platforms. YouTube is flooded with AI-generated content, forcing it to change its monetisation rules for mass-produced, low-effort AI videos. While AI-generated media may itself be harmless, it is hindering the creative potential of content creators. The most prominent deepfake porn case in India involved an Instagram account named Babydoll Archi, which captivated social media users and garnered 1.4 million views overnight. It became a viral sensation within days, built on an Indian woman's stolen identity used to create deepfakes and boost reach. Revenge porn has become easy thanks to deepfaking tools and inadequate oversight by the social media companies that host such content.

As the average user becomes aware of the dangerous misuse of their images and videos, a fear psychosis is gripping internet users. Indian women today are conscious of the risks posed by jilted lovers, ex-boyfriends or random social media followers. It would not be surprising if they chose to abandon social media activity for a safer online experience. Social media companies have so far proved inadequate in addressing these concerns: they lack the technical mechanisms to detect deepfakes at the point of upload. By the time such content achieves virality and spreads across platforms, it has already traumatized the victim, and the tech giants are left incapable of filtering out the harmful posts. A digital exodus of Indian women is a looming possibility if deepfakes continue to rise at this rate.
