The threat of ransomware could soon grow thanks to the rise of artificial intelligence.
That’s according to a new report by the United Kingdom’s National Cyber Security Centre, which said in a Wednesday (Jan. 24) press release that AI is already being used in illegal cyber activity and will likely increase the impact and volume of such attacks.
“Among other conclusions, the report suggests that by lowering the barrier of entry to novice cybercriminals, hackers-for-hire and hacktivists, AI enables relatively unskilled threat actors to carry out more effective access and information-gathering operations,” the NCSC said in the release. “This enhanced access, combined with the improved targeting of victims afforded by AI, will contribute to the global ransomware threat in the next two years.”
The NCSC, a branch of the British intelligence agency GCHQ, said in the release that analysis by the National Crime Agency indicated that cybercriminals have begun to develop criminal generative AI and to offer “GenAI-as-a-Service” to anyone who can pay.
“Yet, as the NCSC’s new report makes clear, the effectiveness of gen AI models will be constrained by both the quantity and quality of data on which they are trained,” the release said.
The release also said that it is “unlikely that in 2024 another method of cybercrime will replace ransomware due to the financial rewards and its established business model.”
The NCSC’s findings follow a year that saw a jump in ransomware attacks, with major companies, banks, hospitals and government agencies reporting a 51% increase in such incidents.
“These attacks disrupted financial trading, caused shortages of essential products like Clorox wipes, and targeted critical infrastructure,” PYMNTS wrote last month. “However, due to the lack of transparency surrounding these incidents, reliable figures on the number of data breaches, the extent of the damage and the hackers responsible remain elusive.”
AI and machine learning can also help organizations defend against cybercriminals.
The PYMNTS Intelligence report “Fraud Losses From Impersonator Scam Double for Largest US Banks” found that 66% of financial institutions were using AI and ML technologies to protect their systems, a 34% increase from the prior year.
“FIs using AI or ML reported lower rates of the two most common scams,” PYMNTS wrote. “By way of example, 22% of FIs not using these technologies experienced bank tech-support impersonation scams, but only 18% of those using AI or ML faced the same scams.”