Why food-delivery apps need deepfake-detection AI

About a year ago, the food-delivery platform Zomato revealed that several customers had complained about misleading AI-generated images on the platform. On August 18, 2024, CEO Deepinder Goyal tweeted: “We urge our restaurant partners to avoid using AI for dish images in restaurant menus from now onwards — we will actively start removing such images from menus by the end of this month. And will stop accepting AI generated dish images (as much as we can detect them using automation).” The AI-generated images looked delicious but led to customer dissatisfaction, as reality failed to live up to the deepfaked expectations. Displaying an attractive, mouth-watering burger when the delivered food looked ordinary was a breach of trust. It amounted to overpromising and under-delivering, resulting in low ratings and more refunds.

Artificial intelligence is now used across business functions, and marketing campaigns are no exception. AI can replace expensive, time-consuming photo shoots: with a single detailed prompt to a Gen AI tool, any creator can obtain the desired pictures for marketing a product. By simply producing the “ideal”, AI blurs the line between real and fake. The delivered burger may not look like the “ideal burger” in the marketing posters; the perfection of an image of a delicious sandwich or chicken tikka masala may not correspond to the supplied product. AI can deliver aesthetic perfection and thereby turn a marketing image into a liability of unmet expectations. And the deepfake problem is not limited to marketing departments misleading customers: deepfakes also enable fraud against the companies themselves.

The reverse risk came to light when a DoorDash customer tweeted that he had edited a photo of his delivered burger to make it look raw and claim a refund. By posting the deepfake image alongside the real one, he publicized the vulnerability that food platforms face: any customer can alter the form, texture, or look of a delivered product and claim dissatisfaction to obtain a refund. Editing an image to get money back is an easy scam, and the ready availability of AI tools will make it more common. The economics of food delivery make it cost-prohibitive to require the food to be returned before issuing a refund, so platforms have just two alternatives: issue refunds without question, or scan customer images for deepfakes. A deepfake-detection tool is a necessity for food-delivery platforms to avoid a scenario where refund requests surge on the back of AI-generated evidence.

Using deepfaked images to seek refunds from DoorDash, Uber Eats, and similar quick-delivery platforms increases the companies' financial risk. It also erodes their trust in their customer base and will likely lead to the discontinuation of refund policies altogether: most quick-commerce platforms with low-value orders will prefer a no-refund policy over the possibility of surging refund fraud. That, in turn, will fuel discontent among genuine customers who receive faulty products or burnt food. The way to prevent this spiral is to build deepfake detection into the customer-support module. Whenever a customer uploads a snapshot of the product, the system (or the support agent behind it) should be able to flag AI-generated images. Once deepfaked food images are filtered out, the platform can automatically notify the restaurant of the quality issue and forward the refund request. A fair model protects the interests of customers without falling prey to bad-faith actors wielding deepfaked images.
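The triage flow described above can be sketched in a few lines. This is a minimal illustration, not any platform's actual API: `detect_ai_generated` is a hypothetical placeholder for a real forensic model, and `triage_refund`, `RefundRequest`, and the threshold value are all assumed names chosen for the example.

```python
from dataclasses import dataclass


@dataclass
class RefundRequest:
    order_id: str
    image_bytes: bytes  # the customer's uploaded photo of the delivered food


def detect_ai_generated(image_bytes: bytes) -> float:
    """Return the probability that the image is AI-generated or edited.

    Placeholder: a production system would run a trained forensic model
    (e.g., frequency-domain or pixel-noise analysis) on the image bytes.
    Here a magic byte prefix stands in for the model's verdict, purely
    for illustration.
    """
    return 0.95 if image_bytes.startswith(b"FAKE") else 0.05


def triage_refund(request: RefundRequest, threshold: float = 0.7) -> str:
    """Route a refund request based on the deepfake score of its evidence."""
    score = detect_ai_generated(request.image_bytes)
    if score >= threshold:
        # Suspected deepfake: hold the refund and escalate to a human agent.
        return "flag_for_manual_review"
    # Image looks authentic: notify the restaurant and forward the request.
    return "forward_to_restaurant"
```

The design point is that detection sits in front of the refund decision rather than replacing it: a high score only escalates to manual review, so a false positive never auto-denies a genuine customer.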

With the rising prevalence of Gen AI tools, images and videos are no longer substantive proof of anything. The existing refund policies of e-commerce platforms, online services, and delivery apps need to factor in the associated risks. Deepfakes mislead customers who place orders on the strength of hyper-realistic images, and they equally mislead the customer-service agents who evaluate photographic complaints. Deepfake detection is a must-have tool for all online authentication.
