A report published on Wednesday revealed that state-backed hackers from Russia, China, and Iran have been using tools developed by Microsoft-backed OpenAI to hone their hacking skills and deceive their targets. The findings, reported by Reuters, come amid mounting concerns over the exploitation of artificial intelligence (AI) for malicious purposes.
According to Microsoft’s report, hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments have been using large language models, a form of AI, to refine their hacking campaigns. These programs analyze vast amounts of text data to generate human-like language, enabling hackers to craft more convincing and targeted attacks.
In response to these findings, Microsoft has imposed a blanket ban barring state-backed hacking groups from accessing its AI products. Tom Burt, Microsoft’s Vice President for Customer Security, emphasized the company’s stance in an interview with Reuters, stating, “We just don’t want those actors that we’ve identified… to have access to this technology.”
Diplomatic officials from Russia, North Korea, and Iran did not respond to the allegations. By contrast, Liu Pengyu, spokesperson for China’s U.S. embassy, rejected the claims and called for the responsible deployment of AI technology to benefit humanity.
The disclosure that state-backed hackers are exploiting AI tools to augment their espionage capabilities underscores growing apprehension about the unchecked proliferation of such technology. Senior Western cybersecurity officials have been sounding the alarm on the misuse of AI tools for nefarious purposes, with concerns intensifying as evidence of their exploitation by rogue actors emerges.
While the report sheds light on the troubling convergence of state-sponsored hacking and AI, it also highlights the pressing need for robust regulations and safeguards to mitigate the risks posed by the misuse of advanced technologies.
Source: Reuters