A $25 Million Bank Scam and an Edited Biden Video Spotlight Digital Fakery’s Deep Impact
A clever video con cost a financial firm millions, and red flags about AI's impact on the presidential election are flying high. But even smaller companies need to be wary of deepfake issues.
Joe Biden speaks during a campaign event in Las Vegas on February 4, 2024. Photo: Getty Images
Deepfake technology is one of the buzziest topics in tech at the moment, for all the wrong reasons. While the technology has incredible promise and rides on the back of surging AI capability, it is being used maliciously, as Taylor Swift can attest and as a finance worker in Hong Kong just found out.
The financial scam, which hit a multinational firm whose identity has not been revealed, was remarkably clever. It centered on a video conference call in which one participant appeared to be the company's chief financial officer. During the call, a real employee was pressed to make several money transfers totaling 200 million Hong Kong dollars, about $25.6 million U.S. Believing he was acting on company orders, he promptly did so.
Some details of the event remain unclear. It has been claimed that every other person on the call was an AI creation, including the purported CFO; other sources suggest some participants were real. Either way, the appearance of callers whose faces the targeted employee recognized apparently allayed his initial sense that something was "off" about the CFO's request. The criminals behind the call allegedly built their deepfakes from footage of previous video calls and also faked the callers' voices. The scam was uncovered only when the worker later checked in with head office.
Separately, Meta, the parent company of Facebook, was chastised by its independent oversight board over its handling of a manipulated video of President Biden. In the original footage, Biden pins an "I voted" sticker onto his granddaughter's chest. In a maliciously edited version that spread on Facebook, his movements were altered to make it appear he was touching her inappropriately. Despite the controversy, Meta left the edited video up, saying it did not violate the company's rules on deepfakes. The oversight board reportedly agreed with that reading of the rules but called for the rules to be clarified. Board co-chair Michael McConnell noted that "the volume of misleading content is rising" and that "the quality of tools to create it is rapidly increasing."
McConnell's statement underscores what's at stake, coming after recent news that ElevenLabs' impressive voice-cloning technology was used in a scam targeting potential New Hampshire voters. President Biden's voice was cloned to produce a deepfake audio message urging people not to vote. The New York-based startup's technology is designed for myriad purposes, including creating voices for ads and customer-facing phone helplines, and even adding believable voices to video game characters. On the strength of its artificial voice tech, the company just raised an $80 million funding round. ElevenLabs has now reportedly identified the account behind the Biden audio deepfake and suspended the user.
Deepfakes were also in the news recently when explicit faked images of Taylor Swift surged across social media websites, forcing X, formerly Twitter, to briefly suspend users' ability to search for the singer's name. The potential impact of legitimately created "fake" actors' images and voices was a negotiating point in last year's SAG-AFTRA actors' strike. Digital deception has even prompted new small business efforts to protect actors' IP: Hollywood agency WME partnered with Chicago-based software maker Vermillio to let actors embed tracking so they can identify when their likeness is being used without permission.
At a smaller business, the IT team may be limited in size and expertise, and digital security efforts can be thin. These faked-content scandals show that now is the time to talk to your staff about the risks of accidentally using deepfaked content, or of being exposed to deepfake scams that could impact your business.