'Be on alert' | Deepfake audio of CEO used in attempt to trick employee

A software firm has issued a warning to businesses around the world after scammers used deepfake audio of its CEO in a failed phishing attempt.

LastPass, a password management platform used by millions of people and more than 100,000 businesses worldwide, revealed that an employee was targeted by scammers impersonating the company’s CEO, Karim Toubba.

It is theorised that the con artists used YouTube videos of Toubba speaking at conferences to generate a near-perfect copy of his voice.

Although the audio was highly accurate, the target employee was quick to spot the telltale signs of a scam, notably that the messages were sent outside normal working hours, and via WhatsApp – a channel not used by LastPass for company communications.

In a blog post, LastPass intelligence analyst Mike Kosak revealed details of the incident “to help raise awareness that this tactic is spreading and all companies should be on the alert.”

Kosak wrote: “In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp.


“As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

While there was no data breach or risk to the company, Kosak explained that LastPass “did want to share this incident to raise awareness that deepfakes are increasingly not only the purview of sophisticated nation-state threat actors and are increasingly being leveraged for executive impersonation fraud campaigns”.

He went on: “Impressing the importance of verifying potentially suspicious contacts by individuals claiming to be with your company through established and approved internal communications channels is an important lesson to take away from this attempt.”

Kosak said LastPass was already working closely with intelligence sharing partners and other cybersecurity companies “to make them aware of this tactic to help organisations stay one step ahead of the fraudsters.”

What are deepfakes?

Deepfakes use generative AI to manipulate existing audio and video clips to create a new piece of media. They are most typically used to spread fake news and misinformation.

These moving images and audio clips can depict celebrities and global figures saying things they never have said, or never would.

Deepfakes have shown the world Barack Obama calling President Trump a ‘dipsh*t’, and more recently portrayed Martin Lewis, the morning TV finance expert, telling fans to buy an app as part of a scam.

A deepfake audio clip also went viral which purported to be Labour Party leader Keir Starmer in an expletive-laden rant, berating staff and criticising the city of Liverpool, where the party’s annual conference was taking place.

These moving images and soundbites weren’t real, but rather AI generated content aimed at tricking viewers and listeners.

What threats do deepfakes pose to business?

Deepfakes pose a threat to companies and other organisations, as their creators often seek to spread ‘facts’ based on false information in a bid to get viewers to believe in, or act on, something harmful.

This is particularly dangerous because it becomes difficult for society to distinguish between what is real and what is fake, and for leaders to regulate it. Many argue that lawmakers must act faster to create regulation around deepfakes and the harms they are likely to cause.

As seen with the case of LastPass above, deepfakes can create situations whereby CEOs and managers are impersonated, or employees are blackmailed for information, money, or passwords.

Just as phishing emails sent to employees are often the main route a cybercriminal takes into a business, deepfakes can be used to take advantage of employees by convincing them that they are speaking to a leader from their organisation.

But much of the solution lies in a combination of stronger authentication practices that go beyond voice and appearance, and adequate education about the channels through which employees should expect to receive company communications. This has become harder since the rise of remote work, which leaves workers more exposed to cybercriminals.

Mike Kiser, director of strategy and standards at the tech company SailPoint, said: “Across multiple industries, cybercriminals are using AI and deepfake technology across a range of attack vectors. According to WhatsApp, users send an average of 7 billion voice messages every day.

“Voice and video messaging are becoming normalised, increasing the ways in which cybercriminals can connect with victims. AI-enabled vishing and deepfake video make this an increasingly dangerous environment, and the onus on potential victims to identify the real content in a sea of fakes is ever heavier.”

“To combat this, businesses need stronger authentication than seeing and hearing. It’s not enough to trust your senses, especially over a video call where fraudsters will often utilise platforms with poorer video quality as part of the deceit. We need something more authoritative, along with additional checks. A stronger form of digital identity needs to be at play, such as cryptographically signed proof that someone is who they say they are, which can only be unlocked by biometrics.

“However, tech alone isn’t enough – we need ease of use with the technology, and organisations to train employees as much as possible to spot a deepfake. A lot of this will come down to secondary checks – for example, if something seems off, calling the person in question, or checking to see if their video background is the same background as when they usually work from home.”

Ultimately, deepfakes pose a myriad of potential threats to employers, particularly around cyber security and authentication. With employees often being the main way criminals get into businesses, and with the prevalence of remote work, we can expect deepfakes to be used more widely to take advantage of employees.

Only with education, and a greater awareness of what deepfakes are, do companies have a chance of withstanding their effects.


