A PYMNTS Company

AI-powered Cyberattacks Pose New Security and Regulatory Compliance Challenges

 |  January 8, 2026

The rapid weaponization of artificial intelligence is reshaping the cyberthreat landscape in ways that challenge long-standing assumptions about how attacks are launched, detected, and investigated. According to a recent analysis by Alston & Bird, AI-enabled tools such as agentic AI systems and polymorphic malware are accelerating cyberattacks, lowering barriers to entry for threat actors, and exposing gaps in traditional incident response and forensic models.

    Get the Full Story

    Complete the form to unlock this article and enjoy unlimited free access to all PYMNTS content — no additional logins required.

    yesSubscribe to our daily newsletter, PYMNTS Today.

    By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions.

    Early uses of AI in cybercrime focused on incremental gains, such as improving phishing emails or creating more convincing deepfakes, Alston & Bird notes. Over the past year, however, attackers have moved toward “vibe hacking,” the deployment of agentic AI systems that can reason, plan, and act autonomously across the entire attack lifecycle. These systems no longer merely assist human operators; they can independently conduct reconnaissance, identify vulnerabilities, exploit systems, move laterally through networks, and exfiltrate data with minimal oversight.

    This shift has profound implications for speed and scale. Tasks that once took skilled teams weeks to complete can now be executed in hours or days. AI agents can scan thousands of endpoints, adapt exploitation techniques in real time, and rapidly analyze stolen data to prioritize high-value assets. The compression of the attack lifecycle reduces defenders’ window to detect and contain incidents, increasing the likelihood that organizations will discover breaches only after significant damage has occurred.

    The report highlights a late-2025 incident involving a sophisticated state-sponsored group that manipulated AI coding tools to autonomously execute most elements of a multistep intrusion campaign. Human involvement was largely limited to strategic oversight. While the AI systems still exhibited limitations, such as occasional hallucinations or misclassification of data, the authors note that these weaknesses can be corrected quickly with minimal human input, suggesting that fully autonomous attacks are becoming more feasible.

    Read more: UK Introduces Sweeping Cybersecurity Bill to Counter Rising Digital Threats

    Compounding the risk is the emergence of AI-powered polymorphic malware and just-in-time code regeneration. Unlike traditional malware, which can often be detected through signatures or heuristics, these AI-driven tools continuously rewrite their own code during execution. This dynamic mutation allows malware to evade detection and adapt to defensive controls in real time, eroding the effectiveness of conventional endpoint and network security tools.

    The Alston & Bird analysis also underscores a newer category of risk: attacks targeting AI systems themselves. Techniques such as prompt injection exploit the reasoning layer of large language models by embedding malicious instructions within seemingly benign inputs. Because these attacks operate inside the AI’s cognitive process rather than at the operating system level, they often leave little or no forensic trail.

    The absence of those traces presents legal and governance challenges, particularly for organizations subject to regulatory scrutiny. Conventional incident response playbooks assume that system-level logs can reconstruct events and establish causation. AI-driven attacks undermine that assumption, forcing companies to rethink how they monitor, audit, and preserve evidence related to AI behavior.

    To address these risks, Alston & Bird outlines several steps organizations and general counsels can take to prepare. Companies should update incident response plans to account for AI-powered threats and incorporate scenarios such as polymorphic malware or prompt injection into tabletop exercises. Investigations should be structured to capture AI-specific evidence, including prompts and model outputs, while preserving attorney-client privilege. Organizations are also encouraged to audit AI inputs and outputs, revisit vendor contracts to address AI-related security obligations, and strengthen governance frameworks to ensure board-level visibility into AI risk.

    Keeping abreast of regulatory and liability developments is also key, per Alston & Bird. As regulators focus more closely on AI governance and cybersecurity, companies that fail to adapt their controls and response strategies may face heightened legal exposure.