Google to Require Election Ads to Disclose AI-Generated Content

Google has announced a new policy that will require election advertisers to disclose when their ads have been manipulated or created using artificial intelligence (AI) tools.

The policy, set to take effect in mid-November, will apply to election advertisers across Google’s platforms, Bloomberg reported Wednesday (Sept. 6), citing a Google notice to advertisers.

Reached for comment by PYMNTS, a Google spokesperson said in an emailed statement that the company has provided additional levels of transparency for election ads for years, such as “paid for by” disclosures and a library of additional information about election ads seen on its platforms.

“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” the Google statement said. “This update builds on our existing transparency efforts — it’ll help further support responsible political advertising and provide voters with the information they need to make informed decisions.”

Under the updated policy, advertisers will be required to include prominent language on their modified election ads explicitly stating that the content has been computer-generated or edited using AI technology, according to the Bloomberg report.

The requirement is intended to make clear to viewers that the content they are seeing has been artificially created or altered, the report said. However, minor adjustments such as image resizing or color and brightness enhancements will not trigger the disclosure requirement.

Google’s decision to implement this policy stems from the growing prevalence of AI tools, including its own, that are capable of producing synthetic content, per the report. 

The policy does not extend to unpaid videos uploaded to Google-owned YouTube, even those uploaded by political campaigns, according to the report.

Currently, two other tech giants — Meta Platforms, the parent company of Instagram and Facebook, and X, formerly known as Twitter — do not have specific disclosure rules for AI-generated ads, the report said.

In June, U.S. Senator Michael Bennet, a Democrat from Colorado, addressed a letter to major technology and generative AI companies, calling for them to label AI-generated content and limit the spread of fake or misleading material. Bennet also underscored the importance of Americans knowing when AI is being used to shape political content.

PYMNTS Intelligence has found that no foolproof methods to detect and expose AI-generated content exist today. Going forward, companies that use generative AI will draw scrutiny from regulators, according to “Is That Content Generated by AI or Humans? Hard to Tell,” a PYMNTS and AI-ID collaboration.