A bookish test: how Ferrari narrowly escaped a deepfake attack

In July 2024, a Ferrari executive received an urgent message purportedly from CEO Benedetto Vigna. The executive was told of an impending acquisition and urged to sign a non-disclosure agreement immediately. The "CEO" added that Italy's market regulator and the Italian stock exchange had already been informed of the transaction. Although the caller had Vigna's distinctive southern Italian accent, the unknown number from which the messages and calls arrived aroused the executive's suspicion. Sensing that something was amiss, the alert executive asked about the book Vigna had recommended a few days earlier. The voice at the other end faltered and abruptly ended the call. By asking a question based on a recent internal conversation that only the real Vigna could answer, the Ferrari executive foiled a deepfake fraud: a simple test about a book blew the cover off a well-planned attack. The attacker had cloned the CEO's voice and might even have been able to join a video call as a deepfaked Vigna, but he could not respond to a conversational topic known only to the real CEO and his staff. The finance executive's presence of mind spared the company a possible financial loss and reputational damage.

Deepfakes vs. presence of mind

The Ferrari incident highlights how astute human intervention can stop deepfake technology from inflicting financial and reputational costs. In recent years, generative AI has emerged as a threat, producing sophisticated and highly realistic video, images and voice. This synthetic media is created using artificial intelligence algorithms, machine learning techniques and generative adversarial networks (GANs). A GAN pairs two neural networks: a generator that creates fake media and a discriminator that judges how real or fake the output looks. This iterative contest continues until the generator produces media so realistic that the discriminator can no longer discern the fakery. In Vigna's case, the cloned audio was hyper-realistic enough to nearly mislead the finance executive. To generate the deepfake, the attacker would have trained on large data sets of photos, audio clips and videos of the CEO; for a high-profile CEO like Ferrari's, abundant real content is readily available online for deepfake tools to train on.
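The generator-versus-discriminator loop described above can be sketched in miniature. The example below is a deliberately toy illustration, not a media-generating system: the "real data" is just numbers drawn from a Gaussian, the generator is a one-parameter linear map, and the discriminator is logistic regression. All names and hyperparameters here are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator g(z) = w_g*z + b_g
# maps random noise z to fake samples; the discriminator is a logistic
# regression d(x) = sigmoid(w_d*x + b_d) scoring how "real" x looks.
w_g, b_g = 0.5, 0.0   # generator parameters (start far from the real data)
w_d, b_d = 0.0, 0.0   # discriminator parameters
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(4.0, 1.0, 32)
    z = rng.normal(0.0, 1.0, 32)
    fake = w_g * z + b_g

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    # (gradients of the binary cross-entropy w.r.t. the logits).
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    b_d -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w_d * fake + b_d)
    grad_fake = (d_fake - 1.0) * w_d          # dLoss_g / dfake
    w_g -= lr * np.mean(grad_fake * z)
    b_g -= lr * np.mean(grad_fake)

# After training, b_g has typically drifted from 0 toward the real mean (4),
# i.e. the fakes have become harder for the discriminator to tell apart.
```

The same adversarial dynamic, scaled up to deep convolutional networks and trained on hours of a CEO's footage instead of toy Gaussians, is what makes modern face and voice deepfakes so convincing.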

Even when the communication comes from an unknown number, deepfaked audio and video can sway the victim because the synthetic media looks eerily "real". To brush aside concerns about the different phone number, the deepfake "CEO" explained that he was calling from another mobile phone because he needed to discuss something confidential. He also specified that certain currency-hedge transactions needed to be carried out in connection with the purported acquisition. The executive's suspicion deepened when he noticed slight inconsistencies in tone during the follow-up call. He tested the waters by asking about a book the CEO had recommended; the fraudster, who had not been privy to that in-person conversation, immediately knew the game was up. Deepfake attacks succeed because they feed on human vulnerabilities: believing what you see, deference to authority and the credibility of the person being impersonated. With free deepfaking tools easily available, deepfake fraud has become a low-cost endeavor, while detection tools often require paid subscriptions. Yet even the best and most expensive deepfake tools cannot trick an executive who is alert enough to ask a question the fraudster cannot answer. At Ferrari, the executive interjected with "Sorry, Benedetto, but I need to identify you" and asked which book the CEO had recommended to him a few days earlier. When the caller hung up without answering, the matter was quickly escalated through official channels and Ferrari opened an internal investigation.
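The executive's book question is, in essence, a challenge-response check against a secret shared out of band: anyone can clone a voice, but only the real Vigna knew the answer. The sketch below illustrates that same principle with a cryptographic shared secret and HMAC. It is an analogy, not Ferrari's procedure, and the secret value and function names are invented for the example.

```python
import hashlib
import hmac
import secrets

# Secret established in person beforehand (the analogue of the book
# recommendation only the real CEO and his staff knew about).
shared_secret = b"decalogue-of-complexity"   # illustrative value only

def make_challenge() -> bytes:
    """Callee issues a fresh random challenge ('which book did you recommend?')."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes) -> bytes:
    """Only a party holding the secret can compute the correct response."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    """Constant-time comparison against the expected response."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
genuine = verify(challenge, respond(challenge, shared_secret), shared_secret)
impostor = verify(challenge, respond(challenge, b"cloned-voice-only"), shared_secret)
print(genuine, impostor)   # True False
```

A fresh challenge each time matters for the same reason the book question worked: a static password could be overheard and replayed, but a question tied to a recent private conversation cannot be answered from publicly scraped audio and video.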

Conclusion 

The book Vigna had recommended to his executives was "Decalogue of Complexity: Acting, Learning and Adapting in the Incessant Becoming of the World" by Alberto Felice De Toni. In a complex world, the rise of deepfakes is raising business risk for companies and financial risk for everyone who relies on digital media. On the World Wide Web, seeing or hearing is no longer believing. Deepfake detection tools can reduce the risks posed by fake media, but they cannot eliminate them unless the humans behind the screens are trained to handle exceptional cases and adapt on the go. The quick-thinking executive at Ferrari thought on his feet and came up with a simple question that doused the attacker's plan. Not every employee is so well trained or alert; a more compliant employee could have fallen prey, and Ferrari was lucky to escape with a near miss. In companies where thousands of employees have access to digital systems, one chink in the armor is all it takes to compromise a firm's reputation and finances.
