Are you falling for deepfakes? Easy ways to identify synthetic content and protect yourself.

Are you an American feeling betrayed by your favourite politician over a recent controversial statement? An Indian voter riled by a video of a religious preacher's inflammatory remarks? An English citizen incensed by videos of mob violence in your capital city, London?

Chances are you have been misinformed by a deepfake. Did you sense something amiss while consuming that content? Did it occur to you that your favourite politician was taking a stand at odds with his party's position or his own previous beliefs? Did that video look like London, yet lack any contextual detail? Did you notice people walking by calmly while a mob was wrecking property or sloganeering? If yes, it is likely a deepfake aimed at manipulating your beliefs, political opinions or actions.

Deepfakes are false digital media generated using AI tools, generally with an intent to misinform and manipulate. They are created using a subset of AI known as deep learning. The term "deepfake" was coined by internet users to refer to AI-generated digital content sophisticated enough to mislead humans. Deepfakes are easy to generate: all they need is source material, i.e. original digital media, which is processed and iterated on by AI tools to create synthetic content remarkably close to the original in quality. Using audio cloning, face swapping and other techniques, deepfakes can become indistinguishable from real media. Numerous AI tools can create deepfakes, some of them open source, like DeepFaceLab. Apart from these, ReFace, Zao, Wombo, FaceMagic, Jiggy, Deep Nostalgia, Lensa AI, Deepfakes Web, Deep Art, Face Swap Live and others are also used. Some of these tools require technical know-how to produce sophisticated output, while others are simpler tools that deliver results directly, with no iteration or training required. In general, it is not possible to reliably detect a deepfake without an AI-enabled deepfake detection tool. However, you can look for these specific clues:

  1. Cognitive dissonance 

Suppose a piece of information on the internet doesn't seem compatible with your views about a political party, individual or celebrity. In that case, you may sense cognitive dissonance, i.e. the psychological discomfort felt when your beliefs aren't consistent with what you observe. A pro-immigration national politician suddenly taking an anti-immigration stance in his own state is an example where you should immediately suspect a deepfake. Internal inconsistency in the information is the first sign that something is wrong. If you find cognitive dissonance or inconsistency in the views of a public personality, party or celebrity, it is possible you are looking at a deepfake.

  2. Words and lip sync 

A trained AI user can ensure that lip movements sync well with the fake audio, making the forgery nearly impossible to detect. However, many deepfake criminals aren't as well-versed with the tools, making mistakes likely. In such instances, the discrepancy is visible to the naked eye of a keen observer. Dedicated AI tools are available to match lip movements precisely to the spoken words, and using them eliminates the chance of manual detection.

  3. Lighting and shadows 

The most complex factor in a deepfake video is getting the lighting right. If a deepfake involves only minor edits to an original video, its lighting is as good as real. But if the deepfake places the subject in a completely new setting, making the light and shadows appear accurate requires time-consuming effort. Amateur cybercriminals sometimes forget to fake shadows at all, even in a clearly sunny scene. Another overlooked tell is a subject who looks much sharper than the background, suggesting the creator focused on the human and didn't do their homework on the surroundings. A keen observer can catch this, although AI tools that eliminate such errors are evolving fast.
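The sharpness-mismatch tell described above can be sketched in a few lines of plain Python. The sketch below compares local contrast in a "subject" region of a grayscale frame against a "background" region; the function names, region boxes and the threshold ratio are all illustrative assumptions, not part of any real detection tool.

```python
def local_contrast(gray, top, left, height, width):
    """Mean absolute difference between horizontally adjacent pixels
    in a rectangular region of a grayscale image (a list of rows).
    Higher values indicate sharper detail in that region."""
    total, count = 0, 0
    for r in range(top, top + height):
        for c in range(left, left + width - 1):
            total += abs(gray[r][c] - gray[r][c + 1])
            count += 1
    return total / count if count else 0.0


def sharpness_mismatch(gray, face_box, bg_box, ratio=3.0):
    """Flag a frame when the face region is far sharper than the
    background -- the tell described above. `ratio` is an arbitrary
    illustrative threshold, not a validated one."""
    face = local_contrast(gray, *face_box)
    bg = local_contrast(gray, *bg_box)
    return bg > 0 and face / bg > ratio
```

On a frame where the face box shows strong pixel-to-pixel variation while the background is nearly flat, `sharpness_mismatch` returns `True`; real detection tools use far more robust measures, but the idea is the same.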

  4. Eyes never lie 

A growing body of research indicates that deepfakes can be detected by AI tools monitoring eye-blinking patterns. Natural blinking patterns of real individuals are fed into an AI model, which leverages the physiological regularity of human eye blinking to spot forgeries. One such method, called DeepVision, flags deepfakes by detecting anomalies in the eyes. While it isn't proven that an ordinary viewer can detect a deepfake just by staring into the eyes of the person in a video, it is still worth taking a close look at them.
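The blink-rate idea can be illustrated with a minimal sketch. Humans at rest typically blink somewhere in the low-to-mid teens per minute (reported ranges vary widely); the function below flags a clip whose blink rate falls outside an assumed normal band. The band, the function name and the input format are illustrative assumptions, not DeepVision's actual algorithm, which analyses far richer eye features.

```python
def blink_rate_suspicious(blink_times_s, clip_length_s,
                          normal_band=(8.0, 30.0)):
    """Return True when blinks per minute fall outside `normal_band`.

    blink_times_s: timestamps (seconds) of detected blinks in the clip.
    normal_band:   assumed plausible human range in blinks/minute --
                   an illustrative choice, not a clinical threshold.
    """
    if clip_length_s <= 0:
        raise ValueError("clip length must be positive")
    blinks_per_minute = 60.0 * len(blink_times_s) / clip_length_s
    low, high = normal_band
    return not (low <= blinks_per_minute <= high)
```

A 60-second clip with a single detected blink (1 blink/minute) would be flagged as suspicious, while one with around fifteen blinks would not; early deepfake models famously under-blinked in exactly this way.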

  5. AI vs AI 

For a viewer with little at stake in the accuracy of a piece of digital content, it makes sense to stick to these basic methods. But when the stakes are higher, say, your reputation is being tarnished or your organisation is at risk, it no longer makes sense to limit your response. When mounting a solid defence or considering legal action, it is essential to take professional help to prove beyond doubt that the content is a deepfake. When synthetic AI content harms the real you, it's time to fight AI with AI and make the perpetrators pay for their deeds. Hire or subscribe to an AI detection service, obtain its report confirming the content is a deepfake, and file a complaint at https://cybercrime.gov.in 
