
OpenAI to Implement Digital Credentials to Prevent Misuse in Elections 


OpenAI is taking steps to prevent the misuse or exploitation of its artificial intelligence (AI) technology in upcoming global elections. 

The company is continuously updating its approach to address the challenges associated with the use of AI tools in elections, OpenAI said in a Monday (Jan. 15) blog post.

One of OpenAI’s key initiatives is focused on preventing abuse, according to the post. The company proactively anticipates and prevents potential abuses, such as misleading “deepfakes,” scaled influence operations and chatbots impersonating candidates. 

OpenAI rigorously tests its systems, engages users and external partners for feedback, and incorporates safety mitigations to minimize harm, the post said. For example, DALL·E, OpenAI's image generation model, has built-in guardrails that decline requests to generate images of real people, including candidates. 
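The post does not describe how such a guardrail is implemented, but the general pattern of screening a request before it reaches the image generator can be sketched as follows. This is a minimal, hypothetical illustration: the deny-list, names and function are assumptions for demonstration, not OpenAI's actual moderation logic.

```python
# Hypothetical sketch of a pre-generation guardrail: check whether a prompt
# appears to request an image of a real person and decline if so.
# The names below are placeholders, not drawn from any real policy list.
KNOWN_PUBLIC_FIGURES = {"jane candidate", "john officeholder"}


def should_decline_image_request(prompt: str) -> bool:
    """Return True if the prompt appears to name a real person on the deny-list."""
    normalized = prompt.lower()
    return any(name in normalized for name in KNOWN_PUBLIC_FIGURES)


if __name__ == "__main__":
    prompt = "Generate a campaign poster of Jane Candidate shaking hands"
    if should_decline_image_request(prompt):
        print("Request declined: images of real people are not generated.")
    else:
        print("Request passed this check.")
```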

Transparency is another crucial aspect of OpenAI’s approach to election integrity, per the post. OpenAI is working on better transparency around image provenance, enabling voters to assess the trustworthiness of images and understand the tools used to produce them. 

The company plans to implement digital credentials developed by the Coalition for Content Provenance and Authenticity, which encode details about an image’s provenance using cryptography, the post said. OpenAI is also experimenting with a provenance classifier to detect images generated by DALL·E, even after common modifications. 
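The post does not spell out the credential format, but the general idea behind cryptographically bound provenance metadata can be sketched in a few lines. The example below is a simplified stand-in, not OpenAI's or the C2PA's actual implementation: real Content Credentials use asymmetric digital signatures and a standardized manifest, whereas this sketch binds a small metadata record to the image bytes with a content hash and an HMAC over a placeholder key.

```python
import hashlib
import hmac
import json

# Simplified illustration of provenance credentials: bind metadata about how
# an image was made to the image bytes, then sign the result so tampering
# can be detected. The HMAC key is a stand-in for a real signing key.
SIGNING_KEY = b"placeholder-signing-key"


def attach_credential(image_bytes: bytes, tool_name: str) -> dict:
    """Create a provenance record tied to the exact image content."""
    manifest = {
        "tool": tool_name,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}


def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Check that the image matches the manifest and the signature is valid."""
    manifest = credential["manifest"]
    if manifest["content_hash"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image was altered after the credential was issued
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])


if __name__ == "__main__":
    image = b"fake image bytes for illustration"
    cred = attach_credential(image, "example-image-generator")
    print("verified:", verify_credential(image, cred))          # True
    print("tampered:", verify_credential(image + b"x", cred))   # False
```

Verification fails if either the image bytes or the attached metadata change, which is what lets viewers assess whether a credentialed image is still the one the tool originally produced.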

Improving access to reliable voting information is also a priority for OpenAI, according to the post. 

In collaboration with the National Association of Secretaries of State (NASS) in the United States, OpenAI’s AI system, ChatGPT, directs users to CanIVote.org, the authoritative website on U.S. voting information, the post said. This partnership aims to provide users with accurate procedural election-related information. 
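As a rough illustration of how a chatbot might route procedural voting questions to an authoritative source, the hypothetical sketch below checks a message against a small keyword list and points the user to CanIVote.org. The keyword list and routing logic are assumptions for illustration, not a description of ChatGPT's actual behavior.

```python
# Hypothetical routing of procedural U.S. voting questions to an
# authoritative source; keywords and fallback handling are assumptions.
VOTING_KEYWORDS = ("register to vote", "polling place", "voter id", "absentee ballot")
AUTHORITATIVE_SOURCE = "https://www.canivote.org"


def answer_voting_question(user_message: str) -> str:
    """Direct procedural voting questions to CanIVote.org; otherwise defer."""
    text = user_message.lower()
    if any(keyword in text for keyword in VOTING_KEYWORDS):
        return (
            "For up-to-date information on this, please check "
            f"{AUTHORITATIVE_SOURCE}, which is maintained by state election officials."
        )
    return "(handled by the model's normal response path)"


if __name__ == "__main__":
    print(answer_voting_question("Where is my polling place in Ohio?"))
```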

PYMNTS Intelligence has found that the most serious challenge presented by generative AI is likely to be differentiating the creations of AI from the original work of humans. The implications of this challenge stretch from fraud to the value of human creativity, according to “Preparing for a Generative AI World,” a PYMNTS and AI-ID collaboration. 

In another recent effort around these issues, Meta said in November that it is imposing new controls on AI-generated ads ahead of the 2024 U.S. presidential election. Advertisers around the world who post political ads or information about a social issue or an election will have to disclose whether that material has been digitally created or changed, including through AI. 
