Generative AI Fabrications Are Already Spreading Misinformation


As buzz around generative artificial intelligence (AI) grows, critics have called for more guardrails.

Observers are particularly worried about the potentially harmful effects of the landmark technology’s ability to generate entirely false, or synthetic, media — whether images, text, video or something else entirely.

On Monday (May 22), an image showing a plume of black smoke near the Pentagon went viral on social media.

The image’s wide circulation stoked fears of an explosion or attack on the U.S. Department of Defense, and briefly sent U.S. stocks lower.

That is, until the image was confirmed to be AI-generated.

The fake photo first appeared on Facebook before being shared by multiple Twitter accounts with hundreds of thousands and even millions of followers, including Russia’s state-controlled news network RT, the financial news site ZeroHedge, and a paid account called Bloomberg Feed that, while unaffiliated with Bloomberg News, is designed to appear as though it is.

Bloomberg Feed’s Twitter account has been suspended, and many of the other accounts responsible for the image going viral have since removed their posts.

“There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public,” tweeted the fire department in Arlington, Virginia.

The misinformation event, which unfolded over 24 hours, underscores the growing need for a way to protect against and verify AI-generated content.

Neither Meta Platforms nor RT immediately replied to a request for comment by PYMNTS.

Twitter auto-responded to PYMNTS’ request with its now-standard poop emoji.

When AI Goes Wrong, It Can Go Quite Wrong

Misinformation is older than the internet, but for years the best defense has been simply relying on one’s own common sense. The ability of generative AI to craft entirely lifelike and believable content is rapidly changing that paradigm.

Just last week, Sen. Richard Blumenthal of Connecticut opened a session of the U.S. Senate Committee on the Judiciary titled, “Oversight of AI: Rules for Artificial Intelligence,” by playing an AI-fabricated recording of his own voice, made from comments written by ChatGPT and developed using actual audio from his speeches.

Blumenthal went on to argue that while in that instance the generative AI tool produced an accurate reflection of his views, it could just as easily have produced “an endorsement of Ukraine’s surrendering or Vladimir Putin’s leadership,” something he called “really frightening.”

Sam Altman, CEO of OpenAI, the company behind ChatGPT, also spoke at the hearing and emphasized to the assembled lawmakers that AI technology needs proper oversight to prevent possible harm.

“I think if this technology goes wrong, it can go quite wrong,” Altman said in his testimony. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”

See also: Washington Races to Develop and Implement Effective AI Policy

Sens. Michael Bennet of Colorado and Peter Welch of Vermont have introduced a bill, the Digital Platform Commission Act, proposing the creation of a new, five-member federal agency, the Federal Digital Platform Commission, to regulate AI and other transformative technologies.

The stakes around misinformation are growing as AI-generated fabrications become more convincing and harder for everyday consumers to spot in an already crowded, real-time content landscape.

Read also: Why Generative AI Is a Bigger Threat to Apple Than Google or Amazon

Lawmakers in Europe voted earlier this month to adopt a draft of AI regulations, which included restrictions on chatbots such as ChatGPT along with a ban on the use of facial recognition in public and on predictive policing tools.

IBM Chief Privacy and Trust Officer Christina Montgomery also urged U.S. policymakers to formalize disclosure requirements to ensure Americans “know when they are interacting with an AI system” or AI-generated content.

There is a clear and growing need for regulation that protects consumers while still leaving room for growth and innovation. One way to do so is by enacting guardrails around the provenance of data used in large language models (LLMs): making it obvious when an AI model is generating synthetic content, whether text, images or even voice, and flagging its source.
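To make that idea concrete, the minimal sketch below shows one way a generating service could flag synthetic content at the source: it attaches a signed provenance record declaring the content AI-generated, which a platform or reader could later verify. The function names, the shared signing key and the record fields are illustrative assumptions, not any provider’s actual system; real provenance schemes are considerably more involved.

```python
# Hypothetical sketch: tag AI-generated content with a signed provenance
# record, then verify that the record matches the content. Names and key
# handling are illustrative only.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-key-held-by-the-generating-service"  # placeholder secret


def tag_synthetic_content(content: bytes, model_name: str) -> dict:
    """Build a provenance record declaring the content as AI-generated."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_name,
        "synthetic": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the record was signed by the generator and matches the content."""
    claimed_sig = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and unsigned.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    image_bytes = b"...synthetic image bytes..."
    tag = tag_synthetic_content(image_bytes, model_name="example-image-model")
    print("Flagged as synthetic:", verify_provenance(image_bytes, tag))
```

In a scheme like this, the burden of disclosure sits with the tool that generated the content rather than with the consumer trying to spot a fake after it has already gone viral.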

Complicating matters is the fact that AI tools now advance in the span of days and weeks while government bodies tend to move at a much slower pace, generally years or even decades.