A report published Wednesday revealed that state-backed hackers from Russia, China, and Iran have been using tools developed by Microsoft-backed OpenAI to hone their hacking skills and deceive their targets, Reuters reported. The findings come amid mounting concern over the exploitation of artificial intelligence (AI) for malicious purposes.
According to Microsoft’s report, hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments have been using large language models, a form of AI, to refine their hacking campaigns. These programs analyze vast amounts of text data to generate human-like responses, enabling hackers to craft more convincing and targeted attacks.
In response to these findings, Microsoft has imposed a blanket ban barring state-backed hacking groups from accessing its AI products. Tom Burt, Microsoft’s Vice President for Customer Security, emphasized the company’s stance in an interview with Reuters, stating, “We just don’t want those actors that we’ve identified… to have access to this technology.”
Diplomatic officials from Russia, North Korea, and Iran have not addressed the accusations. In contrast, China’s U.S. embassy spokesperson, Liu Pengyu, rejected the claims and advocated for the responsible deployment of AI technology to benefit humanity.
The disclosure that state-backed hackers are exploiting AI tools to augment their espionage capabilities adds to growing apprehension over the unchecked proliferation of such technology. Senior cybersecurity officials in the West have been sounding the alarm on the misuse of AI tools, and those concerns have intensified as evidence of exploitation by rogue actors emerges.
While the report sheds light on the concerning convergence of state-sponsored hacking and AI, it also highlights the pressing need for robust regulations and safeguards to mitigate the risks associated with the misuse of advanced technologies.
Source: Reuters