Swift, Biden, & the $25M scam | Protecting your employees in the era of deepfakes


Earlier in February, HR Grapevine shared the story of a global organization that lost $25 million to scammers after an employee followed instructions given on a video call by what appeared to be the company’s CFO – a call that turned out to be entirely deepfaked.

Baron Chan Shun-ching, Acting Superintendent of the Hong Kong Police Force, which investigated the case, stated the deepfake was created using publicly available audio and video.

Wishing to alert the public to these “new deception tactics,” Chan warned that this type of scam marked a development in the sophistication of deepfake technology. “In the past, we would assume these scams would only involve two people in one-on-one situations, but we can see from this case that fraudsters are able to use AI technology in online meetings, so people must be vigilant even in meetings with lots of participants,” he said.

Alongside this greater sophistication, deepfake scams are becoming more frequent – the Hong Kong Police Force says it has investigated 20 cases of such AI-enabled fraud – and other dangerous applications of deepfake technology are also becoming prevalent.

Taylor Swift was the victim of AI-generated, sexually explicit deepfakes in a case of digital sexual assault; the images were viewed by millions before being taken down. Another deepfake depicted Swift falsely endorsing Trump ahead of the upcoming U.S. election, which has been dubbed “the deepfake election”. Those concerns are well founded amid incidents such as a deepfake robocall impersonating Joe Biden that discouraged voters from taking part in the New Hampshire primary.

What was once considered fictional is now an easily accessible reality, says Taylor Bradley, Head of HRBPs, L&D, and Compensation at Turing, and a member of CNBC's Workforce Executive Council. He argues this presents huge risks of economic espionage, social engineering, blackmail, and market manipulation for employees and investors.

“Employees will be targeted by hyper-realistic yet manufactured blackmail materials (see the Taylor Swift case) or receive a phone call from a bad actor pretending to be the "CEO" using realistic AI generated speech (see the Biden robocalls),” Bradley states. “Investors will undoubtedly be impacted by an edited video of a corporate officer disclosing falsified data meant to manipulate a stock.”

So, what can HR do to protect employees and organizations alike?

It’s time for HR to act against deepfakes

Superintendent Chan advises employees to regularly check and confirm details through the company’s standard communication channels, and to ask specific questions during video calls to determine whether participants really are who they claim to be. This advice is a good example of where HR should start: Education.

“Prioritizing awareness is essential, along with establishing a clear line of communication for employees to report any encounters with deepfakes,” explains Bradley. Employees should be trained on warning signs that the audio or video they are consuming may not be legitimate, and on how to check for deepfakes using methods such as those described above.

This can help workers flag something suspicious just as they would a scam email. Audio and video calls – including job interviews and regular weekly meetings, even those with multiple participants – are now at risk of being doctored or created using AI. Warning signs include odd appearances, strange lighting, and audio that is out of sync with lip movements.

It's worth noting that each company is different, and there is no one-size-fits-all solution. “Your training program should be customized to address your company's specific threat matrix, focusing on raising awareness of the most probable use case,” adds Bradley.

Thankfully, awareness of the technology is growing as deepfake incidents and scams enter the zeitgeist, but employers cannot afford to take risks. Training on deepfake technology – what it can do and how to spot it – is a fundamental place for HR to begin.

HR should also work closely with IT, security, and compliance teams to ensure the correct frameworks, procedures, policies, and checks are in place for any decision-making involving sensitive data or financial information. “Engage in regular one-on-one discussions with your Heads of InfoSec, recognizing that the human element is often the weakest link exploited by bad actors,” suggests Bradley.

Authentication measures such as multi-factor authentication and biometrics are increasingly commonplace, and HR can partner with the security teams responsible for this technology to make sure employees are using it effectively to guard against deepfake scams.

Create a safety-first culture

This also involves creating a culture of open communication in which employees feel free – and are encouraged – to speak up about potential security concerns. The employee in the $25 million scam case felt reassured when they saw what they thought were familiar faces on the video meeting and didn’t raise concerns until a week later. People leaders and managers must create the right channels for sharing concerns about deepfakes or other scams and help employees feel safe and supported in using them.

“Foster a just culture that promotes timely reporting of exploits, even in cases where someone falls victim to a scam,” advises Bradley. “With your legal counsel's approval, anonymize and share instances of real or prevented security breaches to facilitate organizational learning. Maintain a realistic outlook on the existence of these threats and collaborate with experts within your organization to effectively mitigate them.”

As deepfake scams become more common, they will also become more sophisticated: scammers will devise new methods and make fresh advances with the help of AI technology. Whilst it’s useful to reflect on past case studies and build protections against similar scams, a safety-first culture could be crucial for withstanding the next innovation in deepfake fraud, whatever form it may take.

Employers must also set up workplace policies prohibiting employees from using deepfake technology at work, particularly where it involves sensitive employee or customer data. Such use is likely to breach data privacy regulations and therefore poses a significant risk to employers.

If 2024 is to become the year of the deepfake, employees must be protected. Alongside avoiding multi-million-dollar scams and data breaches, this may help to create a society where digital sexual abuse and attacks on democratic processes are simply not accepted.
