Generative AI
March 6, 2024

More than one in four businesses now ban their staff from using generative AI. However, that offers no defence against criminals who use it to dupe employees into disclosing sensitive information or paying fraudulent invoices.

Using ChatGPT or its dark web counterpart, FraudGPT, fraudsters can quickly and easily produce convincing deepfakes of business executives, cloning their voice and image, as well as realistic-looking profit and loss statements, phoney IDs, and false identities.

The numbers are alarming. In a recent survey by the Association for Financial Professionals, 65% of respondents said their companies had experienced actual or attempted payment fraud in 2022. Of those that lost money, 71% were compromised through email. The survey found that larger companies, with $1 billion in annual revenue, were the most vulnerable to email fraud.

Phishing emails are among the most prevalent types of email scam. These phoney messages pose as trusted senders such as eBay or Chase and invite recipients to click a link that leads to a fake website that looks genuine. The prospective victim is prompted to log in and submit their details. Once thieves obtain this data, they can use it to access bank accounts or even commit identity theft.
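As a concrete illustration, here is a minimal Python sketch of one classic tell that security tools look for: a link whose visible text names one domain while the underlying href points at another. The two-label domain heuristic and the example URLs below are ours, purely for illustration; real mail filters rely on curated public-suffix lists and reputation data.

```python
from urllib.parse import urlparse

def registered_domain(host: str) -> str:
    # Crude "last two labels" heuristic; production code should use a
    # public-suffix list (e.g. the tldextract package) instead.
    parts = host.lower().strip(".").split(".")
    return ".".join(parts[-2:])

def link_is_suspicious(display_domain: str, href: str) -> bool:
    # Flag links whose visible text names one domain while the
    # underlying href points at a different registered domain.
    actual_host = urlparse(href).hostname or ""
    return registered_domain(display_domain) != registered_domain(actual_host)

# A link that reads "chase.com" but points at a lookalike host:
print(link_is_suspicious("chase.com", "https://chase.com.secure-login.example/pay"))  # True
print(link_is_suspicious("chase.com", "https://www.chase.com/"))                      # False
```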

Spear phishing is similar but more targeted: rather than being sent out in bulk, the emails are directed at a particular person or organisation. The offenders may have researched job titles and the names of coworkers, managers, or supervisors beforehand.

Familiar scams have become more sophisticated

Naturally, these frauds are nothing new, but generative AI makes it far harder to distinguish the real thing from the fake. Until recently, unusual wording, odd fonts, and grammar errors made them easy to spot.


Using ChatGPT or FraudGPT, cybercriminals anywhere in the world can now craft convincing phishing and spear phishing emails. They can even impersonate a CEO or other senior manager, mimicking the executive’s voice in a fictitious phone call or their likeness on a video call.

That is what happened recently in Hong Kong, when a finance employee believed he had received a message from the company’s UK-based chief financial officer requesting a transfer of USD 25.6 million. The employee initially suspected the email was a phishing attempt, but his doubts were dispelled after a video call with the CFO and other colleagues he recognised.

The call, it turned out, was a deepfake. The deception was only discovered after he checked with the head office. By then, the funds had already been transferred.

“It’s actually pretty amazing how much work goes into making them credible,” said Christopher Budd, director of cybersecurity company Sophos.

The rapid evolution of the technology is evident in recent high-profile deepfakes of public figures. Last summer, a fraudulent investment scheme featured a deepfaked Elon Musk endorsing a fictitious platform.

There were also deepfake videos of talk show host Bill Maher, former Fox News host Tucker Carlson, and CBS News anchor Gayle King ostensibly discussing Musk’s new investment platform. The videos were shared on Facebook, YouTube, TikTok, and other social media sites.

It is also becoming easier and easier to manufacture false identities, using either stolen material or information created with generative AI, according to Andrew Davies, global head of regulatory affairs at the regulatory technology company ComplyAdvantage.

Cybercriminals can leverage the wealth of information available online to craft extremely convincing phishing emails. “Large language models are trained on the internet, know about the company and CEO and CFO,” said Cyril Noel-Tagoe, principal security researcher at Netacea, a cybersecurity firm that focuses on automated threats.

Payment apps and APIs put bigger businesses at risk

While generative AI makes these scams more credible, the scope of the problem is also growing because of automation and the proliferating number of websites and apps that process financial transactions.

According to Davies, “the transformation of financial services is one of the real catalysts for the evolution of fraud and financial crime in general.” Ten years ago, there were few electronic ways to transfer money, and most of them ran through traditional banks.

The proliferation of payment options such as PayPal, Zelle, Venmo, and Wise has broadened the playing field and increased the number of targets for criminal activity. Traditional banks, meanwhile, increasingly use application programming interfaces, or APIs, to connect platforms and apps, and those APIs can also be attacked.

After using generative AI to quickly produce messages that seem genuine, criminals leverage automation to scale up. It’s a numbers game: as Davies noted, if fraudsters send out 1,000 spear phishing emails or CEO fraud attempts and even 10% of them succeed, millions of dollars may be at stake.
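To make the arithmetic of that numbers game concrete, here is a toy calculation; the average loss per successful attack is an assumed figure for illustration, not one reported in the survey data above.

```python
attempts = 1_000          # spear phishing emails or CEO-fraud attempts
success_rate = 0.10       # the 10% hit rate cited above
avg_loss_usd = 25_000     # assumed average fraudulent transfer (illustrative)

expected_take = attempts * success_rate * avg_loss_usd
print(f"${expected_take:,.0f}")  # $2,500,000 from a single bulk campaign
```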

According to Netacea, 22% of the companies it examined had been hit by a fake account creation bot; in the financial services sector, the figure rose to 27%. Of the businesses that detected an automated bot attack, 99% reported a rise in such attacks in 2022. Larger businesses were the most likely to see a major increase, with 66% of those with revenue of $5 billion or more reporting a “significant” or “moderate” rise.

Although companies across all sectors reported fraudulent account registrations, financial services was the most frequently targeted, with 30% of targeted companies in the sector reporting that 6% to 10% of newly created accounts were fraudulent.

More thorough identity verification is needed

Some highly motivated attackers may have insider knowledge. Still, Noel-Tagoe noted, although criminals have become “very, very sophisticated,” “they won’t know the internal workings of your company exactly.”


Employees may not be able to tell immediately whether a money transfer request from the CEO or CFO is legitimate, but there are ways to verify it. According to Noel-Tagoe, businesses should have dedicated protocols for money transfers. If requests normally arrive through an invoicing platform rather than by email or Slack, any request that comes in another way should be confirmed through a separate channel before any money moves.
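A minimal sketch of what such a protocol could look like in code, assuming a hypothetical approved channel called invoicing_platform and an illustrative confirmation threshold:

```python
from dataclasses import dataclass

# The only channel through which transfer requests normally arrive
# (hypothetical name, standing in for a real invoicing system).
APPROVED_CHANNELS = {"invoicing_platform"}

@dataclass
class TransferRequest:
    requester: str
    amount_usd: float
    channel: str  # e.g. "invoicing_platform", "email", "slack"

def needs_out_of_band_confirmation(req: TransferRequest,
                                   threshold_usd: float = 10_000) -> bool:
    # Anything arriving outside the approved channel, or above a set
    # threshold, must be confirmed separately, e.g. by calling a number
    # already on file, never one supplied in the request itself.
    return req.channel not in APPROVED_CHANNELS or req.amount_usd >= threshold_usd

# The Hong Kong case above: a USD 25.6M request arriving by email.
req = TransferRequest("CFO (UK office)", 25_600_000, "email")
print(needs_out_of_band_confirmation(req))  # True: verify before paying
```

The key design point is that confirmation happens over a channel the attacker does not control, which is exactly what the deepfaked video call in the Hong Kong case circumvented.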

Businesses are also trying to distinguish real identities from deepfaked ones by using more rigorous authentication procedures. Digital identity verification firms currently tend to require an ID and perhaps a live selfie as part of the process. Soon, companies may ask users to blink, say their name, or perform some other action that separates live video from pre-recorded footage.
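A simplified sketch of the challenge-response idea behind such liveness checks: the verifier issues a random, short-lived challenge, so footage recorded in advance cannot contain the right response. The action list, expiry window, and detector hook below are all assumptions for illustration; a real system would plug in a computer-vision or audio model.

```python
import secrets
import time
from typing import Callable

CHALLENGE_ACTIONS = ("blink twice", "turn your head left", "say your name")
CHALLENGE_TTL_SECONDS = 30

def issue_challenge() -> dict:
    # A random action plus a nonce and timestamp: video recorded in
    # advance cannot respond to a challenge that did not yet exist.
    return {
        "action": secrets.choice(CHALLENGE_ACTIONS),
        "nonce": secrets.token_hex(8),
        "issued_at": time.time(),
    }

def verify_liveness(challenge: dict,
                    video: bytes,
                    detector: Callable[[bytes, str], bool]) -> bool:
    # Reject responses that arrive after the challenge has expired.
    if time.time() - challenge["issued_at"] > CHALLENGE_TTL_SECONDS:
        return False
    # `detector` stands in for a real computer-vision/audio model that
    # confirms the requested action occurs in the submitted video.
    return detector(video, challenge["action"])
```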

Companies may need some time to adapt, and in the meantime, cybersecurity experts say, generative AI is fuelling a surge in highly convincing financial scams. Budd of Sophos, who has worked in technology for 25 years, remarked: “This ramp-up from AI is like putting jet fuel on the fire. It’s something I’ve never seen before.”

In summary, the evolving capabilities of generative AI are empowering financial scammers to craft increasingly convincing email scams, posing a significant threat to corporate cybersecurity. As organisations navigate this landscape, heightened vigilance and robust security measures are imperative to safeguard against these sophisticated attacks.

(Tashia Bernardus)

© All content copyright The Hype Economy. Do not reproduce in any form without permission, even if you have a paid subscription.