Impersonation of Secretary of State Using AI Sparks Calls for Tighter Controls

July 8, 2025

U.S. officials are raising alarms over the growing threat artificial intelligence poses to national security after a fraudster used an AI-generated voice to impersonate Secretary of State Marco Rubio, contacting foreign ministers and government officials in a sophisticated deception attempt.

According to Reuters, the impersonator reached out via the encrypted messaging app Signal in mid-June, leaving voicemails and messages designed to mimic the top U.S. diplomat. The goal, per a diplomatic cable reviewed by Reuters, was likely to manipulate officials into revealing sensitive information or granting access to restricted systems.

Per Reuters, at least three foreign ministers, a U.S. governor, and a member of Congress were among the targets of the deceptive campaign. In some instances, the attacker sent invitations to continue communication over Signal, potentially hoping to exploit trust in the platform’s perceived security.

“The actor likely aimed to manipulate targeted individuals using AI-generated text and voice messages with the goal of gaining access to information or accounts,” the cable stated. The cable, dated July 3, was sent to all U.S. diplomatic and consular missions globally, advising personnel to alert external partners about ongoing impersonation threats.

While no specific actor has been identified in the latest incident, the cable also referenced a separate phishing campaign from April, which officials have attributed to a cyber actor linked to Russia’s Foreign Intelligence Service. In that case, the attacker crafted emails using a fake “@state.gov” domain and mimicked official U.S. State Department branding, targeting think tanks, former diplomats, and activists in Eastern Europe. According to Reuters, this earlier campaign demonstrated a high degree of familiarity with the State Department’s internal systems and documentation.

A senior State Department official, speaking anonymously to Reuters, confirmed that the agency is actively investigating the June impersonation and emphasized the department’s ongoing commitment to strengthening its cybersecurity defenses. “The Department takes seriously its responsibility to safeguard its information,” the official said, noting that preventive measures are continually updated in response to emerging threats.

Although no direct cyber breach has been reported, the cable warned that information could still be at risk if targeted individuals inadvertently shared data with the impostor. “There is no direct cyber threat to the department from this campaign,” it stated, “but information shared with a third party could be exposed if targeted individuals are compromised.”

This latest episode arrives amid a broader wave of concern about AI-driven impersonation. Just weeks ago, The Wall Street Journal reported on an ongoing federal investigation into a similar attempt to impersonate White House senior adviser Susie Wiles, highlighting the expanding reach of AI in deception campaigns.

The State Department’s warning underscores a growing need for vigilance as artificial intelligence becomes more capable of replicating human speech and behavior.

Source: Reuters