Dive Brief:
- British engineering group Arup lost approximately $25 million after scammers used AI-manipulated “deepfakes” to pose as the group’s CFO and trick an employee into transferring funds to bank accounts in Hong Kong, according to a report by the Financial Times.
- According to Hong Kong police, a staff member received a message purportedly from Arup’s U.K.-based CFO regarding a “confidential transaction,” the Financial Times said. After a video conference with the fake CFO and other AI-generated employees, the staff member made a series of transfers to five different Hong Kong bank accounts before discovering the fraud.
- While CFOs may not be keeping such incidents top of mind at present, that will change as they become more frequent. Because finance chiefs hold the financial keys of a business, and are therefore prime targets for impersonation by enterprising scammers, ensuring awareness is a critical first step, Matthew Miller, principal of cybersecurity services at Big Four firm KPMG US, told CFO Dive. Staff need to be aware of threats such as the impersonation of executives and of how such fraud could impact critical business processes, he said.
Dive Insight:
The Arup incident comes amid rising concern among both business leaders and regulators over so-called deepfakes: images, audio or video falsely created or otherwise manipulated with AI. News of the scam was first reported in February, when Hong Kong police said a major company had been the target of a deepfake deception but did not name Arup. People familiar with the matter told the Financial Times this month that Arup was the company involved, according to a Thursday report.
Arup notified Hong Kong police in January that a fraud had occurred and confirmed to the Financial Times that fake voices and images were used, but it declined to give further details because the incident is still being investigated. Arup did not immediately respond to requests for comment from CFO Dive.
Altogether, the transfers the staff member sent to the perpetrators of the scam totaled HK$200 million, or approximately $25 million; the employee discovered the fraud after following up with the company’s headquarters.
AI-supported deepfakes lend themselves to a number of malicious uses, such as helping fraudsters bypass authentication requirements to gain access to legitimate customers’ accounts or impersonating individuals in an enterprise who have the authority to approve money transfers, Miller said.
While such types of fraud are not new, the emergence of generative AI and other tools fueling them allows fraudsters to scale such scams massively; effectively, “it changes the economics on fraud,” he said. “And once fraudsters start making money, they fuel their fraud components with that funding to be able to make more money, so that does concern me, quite significantly.”
It’s important for CFOs to build that awareness and to look more closely at the fraud controls they have in place. Finance chiefs should revisit “some of your business processes where you could be susceptible to deepfake social media type attacks” and make sure “you have the proper controls and monitoring in place to hopefully prevent those risks,” Miller said.
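As an illustration of the kind of control Miller describes, the minimal sketch below shows a payment-approval check that forces out-of-band verification of large transfer requests. Every name, threshold and lookup here (PaymentRequest, TRUSTED_DIRECTORY, approve_payment) is a hypothetical assumption for this example, not any firm’s actual system.

```python
# Hypothetical sketch of an out-of-band verification control for large
# payments; names, thresholds and the directory are illustrative only.
from dataclasses import dataclass

# Payments at or above this amount require callback verification.
REVIEW_THRESHOLD_USD = 100_000

# Contact numbers come from an internal directory, never from the request
# itself, so a deepfaked caller cannot supply their own "callback" number.
TRUSTED_DIRECTORY = {"cfo@example.com": "+44-20-0000-0000"}


@dataclass
class PaymentRequest:
    requester_id: str        # claimed identity of the requester
    amount_usd: float
    destination_account: str
    channel: str             # "email", "video_call", "phone", ...


def callback_confirms(request: PaymentRequest) -> bool:
    """Stub for a manual step: a human calls the number on file and
    re-confirms the request before any funds move."""
    number_on_file = TRUSTED_DIRECTORY.get(request.requester_id)
    if number_on_file is None:
        return False  # unknown requester: reject outright
    print(f"Call {number_on_file} to confirm ${request.amount_usd:,.0f} "
          f"to {request.destination_account}")
    return True  # assume the human confirmed, for this sketch


def approve_payment(request: PaymentRequest) -> bool:
    # A convincing message or video call never clears on its own;
    # large amounts always trigger the callback control.
    if request.amount_usd >= REVIEW_THRESHOLD_USD:
        return callback_confirms(request)
    return True  # small payments follow the normal approval path


if __name__ == "__main__":
    req = PaymentRequest("cfo@example.com", 3_000_000,
                         "HK-ACCOUNT-001", "video_call")
    print("approved" if approve_payment(req) else "held for review")
```

The design point is that verification routes through a channel the requester does not control: the callback number comes from the internal directory rather than from the message or call itself, so even a flawless deepfake on a video conference cannot steer the confirmation back to the scammers.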
The Arup incident is far from the first in which deepfakes were used to impersonate an individual with critical access or clout. Just last week, for example, the Hong Kong Securities and Futures Commission warned that deepfake images of Tesla CEO Elon Musk were being used by a cryptocurrency firm called Quantum AI.
The SFC “suspects that Quantum AI uses AI-generated deepfake videos and photos of Mr. Elon Musk on its website and through social media to deceive the public that Mr. Musk is the developer of Quantum AI’s underlying technology,” the regulator said in a recent press release.
Fears also abound over the potential socioeconomic impacts of this type of technology, Miller said. Lawmakers in numerous countries have already called for restrictions on AI’s use as worries mount over how it could be put to malicious use during elections.
During his State of the Union address in March, President Joseph Biden highlighted the threat of AI-manipulated deepfakes, urging Congress to address the potential “peril” posed by the emerging technology, CFO Dive previously reported. Later that week, European Union lawmakers passed the EU AI Act, legislation that establishes requirements for AI use and sets out potentially harsh penalties for noncompliance.