What are 'deepfakes', and how could they impact your workforce?

In May this year, Microsoft’s president, Brad Smith, said his biggest concern around artificial intelligence is deepfakes.

Deepfakes first emerged on the internet in 2017 and are powered by a deep learning method known as generative adversarial networks (GANs), hence the name ‘deepfakes’.
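For readers curious about the underlying technique, the adversarial idea can be shown in miniature. The sketch below (written in PyTorch, with all names and parameters illustrative rather than taken from any real deepfake tool) trains a generator to mimic a simple one-dimensional distribution while a discriminator tries to catch its forgeries; production deepfakes apply the same tug-of-war at vastly larger scale to images, video and audio.

```python
# Illustrative toy GAN -- the technique behind deepfakes, not a deepfake tool.
# A generator learns to mimic a 1-D Gaussian; a discriminator learns to
# tell its output apart from genuine samples.
import torch
import torch.nn as nn

REAL_MEAN, REAL_STD = 4.0, 1.25  # the "real data" distribution to imitate

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * REAL_STD + REAL_MEAN   # genuine samples
    fake = generator(torch.randn(64, 8))               # generator's forgeries

    # The discriminator learns to label real data 1 and fakes 0...
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # ...while the generator learns to make the discriminator answer "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

samples = generator(torch.randn(1000, 8))
print(f"fake mean ~ {samples.mean().item():.2f} (target {REAL_MEAN})")
```

The key design point is the arms race: every improvement in the discriminator’s ability to spot fakes forces the generator to produce more convincing forgeries, which is exactly why deepfake output keeps getting harder to detect.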

Deepfakes, which have been likened to a more sophisticated Photoshop for moving images, have shown the world Barack Obama calling former President Donald Trump a ‘dipsh*t’, and most recently portrayed Martin Lewis, the morning TV show finance expert, telling fans to buy an app as part of a scam. These clips weren’t real, but AI-generated videos aimed at tricking viewers.

Such videos can depict highly realistic footage of celebrities and global figures, often saying things they never have, or never would, and they are becoming more and more prominent.

In a speech addressing how best to regulate the technology, Smith warned: “We’re going to have to address the issues around deepfakes. We’re going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians.”

Experts have predominantly focused on the impact deepfakes could have on democracy, but this tech is also likely to have an impact on businesses and their workforces.

This is compounded by research showing that employee negligence is the biggest cause of cyber-attacks. So, as deepfakes become more prominent, how could this technology be used to take advantage of workforces and impact businesses?

What are some of the threats deepfakes pose?

Deepfakes pose a threat to companies, and all organisations, as their creators often seek to spread ‘facts’ based on false information, in a bid to get viewers to believe in, or act on, something harmful.

This is particularly dangerous as it becomes difficult for society to distinguish between what is real and what is fake, and for leaders to regulate this. Many argue that lawmakers must act faster to create regulation around deepfakes and the harms they are likely to cause.

For businesses, deepfakes can give rise to situations in which CEOs and managers are impersonated, or employees are blackmailed for information, money, or passwords.

Just as phishing emails sent to employees are often the main route a cyber criminal takes to infiltrate a business, deepfakes can be used to take advantage of employees and convince them that the person they’re speaking to is a leader from their organisation.

But much of the solution lies in a combination of better authentication practices that go beyond voice and appearance, and adequate education around where employees are likely to receive information from their company. This has been made more difficult by the rise of remote work, which leaves workers more susceptible to cyber criminals.

Mike Kiser, director of strategy and standards at the tech company SailPoint, says: “Across multiple industries cybercriminals are using AI and deepfake technology across a range of attack vectors. According to WhatsApp, users send an average of 7 billion voice messages every day. Voice and video messaging are becoming normalised, increasing the ways in which cybercriminals can connect with victims. AI-enabled vishing and deepfake video make this an increasingly dangerous environment, and the onus on potential victims to identify the real content in a sea of fakes is ever heavier.”

“To combat this, businesses need stronger authentication than seeing and hearing. It’s not enough to trust your senses, especially over a video call where fraudsters will often utilise platforms with poorer video quality as part of the deceit. We need something more authoritative, along with additional checks. A stronger form of digital identity needs to be at play, such as cryptographically signed proof that someone is who they say they are, which can only be unlocked by biometrics.

“However, tech alone isn’t enough – we need ease of use with the technology, and organisations to train employees as much as possible to spot a deepfake. A lot of this will come down to secondary checks – for example, if something seems off, calling the person in question, or checking to see if their video background is the same background as when they usually work from home.”
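To make Kiser’s point concrete, here is a minimal sketch of a signed challenge-response check using the Python cryptography library. The enrolment flow, variable names and the biometric unlock step are illustrative assumptions, not a description of SailPoint’s products, and real deployments would typically use established standards such as FIDO2/WebAuthn.

```python
# Minimal sketch of "cryptographically signed proof" of identity: the
# caller's device signs a one-time challenge with a private key (unlocked
# locally by biometrics, in practice); the verifier checks the signature
# against a public key enrolled in advance. Illustrative only.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment: the employee's device generates a key pair; the company stores
# only the public key. The private key never leaves the device.
device_key = Ed25519PrivateKey.generate()
company_known_public_key = device_key.public_key()

# Verification: the company issues a fresh random challenge for this call...
challenge = os.urandom(32)

# ...and the device signs it (after a local biometric unlock, in practice).
signature = device_key.sign(challenge)

try:
    company_known_public_key.verify(signature, challenge)
    print("Signature valid: the caller holds the enrolled device key.")
except InvalidSignature:
    print("Signature invalid: treat the caller as unverified.")
```

Because the challenge is random and single-use, a fraudster cannot simply replay a recorded voice or video: without the enrolled private key, no valid signature can be produced, however convincing the face on screen.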

Nick France, CTO of Sectigo, says that this could have an impact on the voice and face recognition employees currently use to authenticate themselves. He says: “Deepfake technology has advanced far beyond what most people realise, as malicious actors can now produce convincing deepfakes that can bypass conventional voice recognition systems. Unfortunately, anything about your physical appearance can be replicated, i.e. eyes, face, voice. This is no longer something that only exists in films, as more people are now capable of creating convincing deepfakes. The rising prevalence of deepfake technology poses a significant threat to the effectiveness of certain biometric authentication methods used within businesses.”

Ultimately, deepfakes pose a myriad of potential threats to employers, particularly around cyber security and authentication. With employees often being the main way criminals get into businesses, and with the prominence of remote work, we might expect deepfakes to become more widely used to take advantage of employees. Only with education, and a greater awareness of what deepfakes are, do companies stand a chance of withstanding their effects.


