Imagine receiving an email so genuine-looking it could have come from your best friend—except it’s a scam. That’s the unsettling reality described by the United Kingdom’s cybersecurity agency, which recently raised a red flag about a new breed of scam emails. These aren’t run-of-the-mill phishing attempts; they’re powered by AI, making them more convincing than ever.
The agency has warned that AI will make it harder to distinguish legitimate emails from those sent by scammers and malicious actors, including messages asking users to reset their passwords.
The National Cyber Security Centre (NCSC) said that as AI technologies become more sophisticated, consumers will find it increasingly difficult to recognize phishing emails, which deceive users into handing over passwords or personal information.
Is ChatGPT a friend or foe?
Generative AI, a technology that can produce convincing text, speech, and images from simple typed prompts, has become broadly accessible to the general public through chatbots such as ChatGPT and freely available open-source models.
In its most recent assessment of AI’s effect on the cyber threats facing the UK, the NCSC, a division of GCHQ, predicted that over the next two years AI would “almost certainly” increase the number of cyberattacks and intensify their impact.
It said that generative AI and large language models, the technology underpinning chatbots, will make it harder to recognize several attack vectors, including spoof communications and social engineering, in which people are tricked into disclosing sensitive information.
With the advent of generative AI and large language models, by 2025 even people with a basic awareness of cybersecurity will find it difficult to spot phishing, spoofing, or social engineering attempts, let alone assess the legitimacy of an email or password reset request.
According to the NCSC, ransomware attacks, which affected organizations such as the British Library and Royal Mail over the past year, are also expected to rise.
It warned that amateur hackers and cybercriminals can now craft convincing approaches to potential victims by producing fictitious “lure documents” whose contents are created or edited by chatbots, and which lack the translation, spelling, and grammar errors that often give phishing attacks away.
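To illustrate why this matters, here is a minimal sketch (not any real filter, and purely illustrative) of the kind of naive misspelling heuristic that a reader or a simple mail rule might rely on. The word list and sample messages are invented for the example; the point is that an AI-polished lure produces no hits at all.

```python
# Hypothetical heuristic: count words matching a small list of
# misspellings commonly seen in legacy phishing emails. AI-written
# lures no longer carry these surface errors, so the score is useless.

COMMON_MISSPELLINGS = {
    "recieve", "acount", "verifcation", "succesfully",
    "imediately", "pasword", "securty",
}

def misspelling_score(text: str) -> int:
    """Count distinct words that match the known-misspelling list."""
    words = {w.strip(".,!?:;").lower() for w in text.split()}
    return len(words & COMMON_MISSPELLINGS)

legacy_lure = "Please verifcation your acount imediately to recieve funds."
ai_polished = "Please verify your account immediately to receive your funds."

print(misspelling_score(legacy_lure))   # prints 4
print(misspelling_score(ai_polished))   # prints 0
```

The polished message scores zero, showing how surface-error checks fail once a chatbot has cleaned up the text.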
UK cyber security agency warns that generative AI is a double-edged sword in the fight against ransomware
It did note, however, that generative AI, which has proven to be a capable coding tool, would help attackers sort through stolen data and identify targets rather than make ransomware code itself more effective.
The UK’s data watchdog, the Information Commissioner’s Office, reports that there were 706 ransomware incidents in the country in 2022, up from 694 in 2021.
The government cautioned that state actors most likely possessed enough malware, short for malicious software, to train an artificial intelligence model designed specifically to produce new code that might evade security safeguards. According to the NCSC, training such a model would require using data that was taken from the target.
The NCSC research states that among cyber threat actors, “highly capable state actors are almost certainly best placed to harness the potential of AI in advanced cyber operations”.
According to the NCSC, AI will also be used defensively, with it being able to identify threats and create safer systems. The research was released concurrently with new advice from the UK government, encouraging businesses to better prepare for and recover from ransomware attacks. According to the NCSC, the “Cyber Governance Code of Practice” attempts to put information security on par with financial and legal management.
UK faces a ransomware tsunami and urgent reforms are needed, warns former NCSC head
Cybersecurity experts, however, have demanded more aggressive measures. “An incident of the severity of the British Library attack is likely in each of the next five years,” according to Ciaran Martin, the former head of the NCSC, “unless public and private groups radically modify how they tackle the threat of ransomware”.
Martin stated in a newsletter that the UK needs to reevaluate its response to ransomware, which should include tougher regulations on ransom payments and an abandonment of “fantasies” of “striking back” against criminals operating out of hostile countries.
Steering the future of cybersecurity
Cybercriminals are constantly finding new ways to exploit AI’s growing capabilities. The UK cybersecurity agency is raising the flag to alert us, not to frighten us. It is critical to remain watchful, informed, and adaptable in a constantly changing landscape of cyber threats.
Governments play an essential role in creating and enforcing laws that encourage sound security practices. Technology businesses need to invest in research and development to build tools capable of identifying and mitigating AI-driven attacks, while cybersecurity agencies must regularly update their plans to counter the ever-evolving methods of malicious actors.
(Tashia Bernardus)