Tracing pixel defects to identify Deepfakes

In March 2025, the Chinese smartphone maker HONOR announced that its Magic7 Pro would ship with an on-device feature that analyzes the millions of individual pixels on the phone screen for signs of face swaps and other AI artifacts: detecting AI using AI. Working in about six seconds, the software returns a percentage estimate of how likely the content the user is viewing was generated or influenced by AI. By embedding a pre-installed deepfake detector in the device itself, the company is taking the fight against AI fakery to the next level, integrating safeguards directly into the phone. Time magazine featured the world’s first on-device AI detection technology among its “Best Inventions of 2025.” The tool, which scans for telltale signs such as broken frames or unnatural facial movements, is promising as a first-of-its-kind safeguard that “shifts the burden of detection from the user to the device.”

Tracing pixel defects in digital media

With advances in AI, generative adversarial networks (GANs) have made remarkable progress in producing realistic-looking images that deceive the human senses. Deepfake images can nevertheless be detected by methods that uncover the artifacts AI leaves in the data. By analyzing the GAN fingerprint of an image, frequency-based detection can distinguish the real from the fake. GAN fingerprints or frequency cues alone, however, do not suffice: some generative AI tools evade detection by scrubbing the characteristic artifacts of GAN images from the frequency spectrum. More broadly, deepfake detection checks the internal consistency of an image, including its symmetry, color saturation, and color disparities. Pixel disparities or inconsistencies do not always unravel a deepfake and can sometimes produce false positives. When pixel data alone does not reveal the AI, relying on invisible artifacts created during the generation process remains potent. Detection is possible even when individual fingerprints are erased, because of the sheer variety of traces left behind. Research confirms that there is no universal GAN fingerprint whose removal fools all detection approaches; essentially, AI-generated content cannot cover its tracks well enough to fool every deepfake detector all the time.
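The frequency-based idea above can be sketched in a few lines of plain NumPy: GAN upsampling layers tend to leave periodic, grid-like artifacts that concentrate energy in the high-frequency part of an image’s spectrum. The snippet below is a minimal illustration, not a production detector; the function names and the 75% frequency cutoff are arbitrary choices for the sketch.

```python
import numpy as np

def radial_power_spectrum(image: np.ndarray, bins: int = 64) -> np.ndarray:
    """Azimuthally averaged power spectrum of a grayscale image.

    Periodic GAN artifacts (e.g. checkerboard patterns from upsampling)
    show up as unusual energy in the high-frequency bins.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.sqrt((y - h // 2) ** 2 + (x - w // 2) ** 2)
    # Bucket every pixel by its radial frequency and average the power.
    idx = np.minimum((r / r.max() * bins).astype(int), bins - 1)
    sums = np.bincount(idx.ravel(), weights=power.ravel(), minlength=bins)
    counts = np.bincount(idx.ravel(), minlength=bins)
    return sums / np.maximum(counts, 1)

def high_freq_ratio(image: np.ndarray, bins: int = 64) -> float:
    """Fraction of spectral energy in the top quarter of frequencies."""
    s = radial_power_spectrum(image, bins)
    return float(s[int(0.75 * bins):].sum() / s.sum())
```

A smooth, natural-looking patch yields a low ratio, while the same patch with a pixel-level checkerboard artifact superimposed yields a markedly higher one; a real detector would learn the decision boundary from data rather than hard-code it.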

Apart from minute visual inconsistencies and small defects that escape human attention, deepfake-detection tools can generate a pixel heatmap that shows the tampered region alongside the real image. If reality is not left to perception but made measurable, AI cannot mislead users into believing a fake. A detection tool that merely reports a probability of fakery leaves room for uncertainty and deniability, offering neither conclusive proof nor rebuttal. Facial heatmap analysis of a video can be produced with Grad-CAM or saliency maps to highlight facial inconsistencies in synthetic media. In a typical deepfake video, the heatmaps show concentrated high activation in boundary regions, especially the mouth and eye contours, where generative AI has manipulated the movement. Unusual patterns in vital regions such as facial expressions, lip movement, and eyelids appear in the heatmap and instantly flag likely manipulation.
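The Grad-CAM recipe behind such heatmaps is simple enough to sketch in NumPy, assuming we already have the convolutional feature maps of a trained detector and the gradients of its “fake” score with respect to them (in practice both come from a deep-learning framework via backpropagation; the array shapes here are illustrative).

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from conv feature maps and their gradients.

    activations: (C, H, W) feature maps of the last conv layer.
    gradients:   (C, H, W) d(fake score)/d(activations), from backprop.
    Returns an (H, W) map scaled to [0, 1]; high values mark regions that
    pushed the detector toward "fake" (e.g. mouth and eye contours).
    """
    # Step 1: channel importance = global average pool of the gradients.
    weights = gradients.mean(axis=(1, 2))                       # (C,)
    # Steps 2-3: weighted sum of feature maps, then ReLU to keep only
    # evidence *for* the target class.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize for display as a heatmap overlay.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

To visualize, the low-resolution map is bilinearly upsampled to the frame size and overlaid on the face, which is how the concentrated activation around manipulated mouth and eye regions becomes visible.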

Apart from AI-powered detection methods, close observation can also supplement efforts to trace defects. Deepfakes produced with unsophisticated software can often be uncovered by the human eye alone. Whether it is unsynchronized lip movement, unnatural lighting, or obvious signs of editing, a deepfake is sometimes easy to spot even without advanced detection tools. When the defects are not so obvious, only a trained AI model can decipher anomalies invisible to the naked eye.

The future of digital devices is deepfake detectors pre-installed in hardware, so that every pixel appearing on the screen is continuously tested in the background for signs of AI processing.
