Is It Real, or Is It AI?

Generative AI can create misinformation, undermine the value of original materials and destroy trust. Is there a solution to ensure authenticity?

Download the PYMNTS and AI-ID May 2023 "Generative AI Tracker: Preparing for A Generative AI World"

Generative artificial intelligence (AI) is so new that many of its potential problems may not even be on anyone’s radar. AI detection solutions began popping up as soon as generative AI hit the mainstream, but they are often easily fooled.

Already, the world has grappled with concerns about all the data that goes into a large language model (LLM), including both training and user inputs, and questions about copyright infringement and privacy. What comes out of an LLM, however, can be just as dangerous as what goes in.

Generative AI has already been coupled with other technologies to create phone scams or otherwise trick people into thinking an AI output is real. Moreover, the technology is likely to become only more sophisticated, opening the door to all kinds of AI-generated materials being passed off as something else, whether that be a person’s voice, a certified document or an image of an event that never happened.

Detecting the Authenticity of Materials

As soon as there was AI-generated content, there were people working on ways to spot those materials and differentiate them from authentic and human-generated products. A number of tools, such as GPTZero, have emerged, many of them integrated into plagiarism detectors or other applications such as Copyleaks. Nevertheless, the effectiveness of AI detection often comes into question.

Some AI content-generating platforms, such as Content at Scale, offer detection as part of the service and boast that their AI-generated content will pass detectors. OpenAI has said that its own AI detector misses 74% of AI content and even classifies 9% of human-generated content as likely AI-generated. Meanwhile, generative AI keeps improving, and it seems unlikely that AI content detection will gain ground with text, let alone the unknown range of other materials AI can and will generate. As generative AI becomes ever more ubiquitous, the need to differentiate human-generated materials will only grow.
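Those rates translate into striking numbers at even modest scale. The Python sketch below applies them to a hypothetical corpus of 500 AI-generated and 500 human-written documents; the corpus size and 50/50 mix are assumptions for illustration only, not figures from OpenAI.

```python
# Back-of-the-envelope illustration of the detection rates cited above.
# Assumption: a hypothetical corpus with a 50/50 human/AI mix.

AI_MISS_RATE = 0.74          # share of AI-generated text the detector misses
HUMAN_FALSE_POSITIVE = 0.09  # share of human text wrongly flagged as AI

ai_docs, human_docs = 500, 500  # hypothetical document counts

missed_ai = ai_docs * AI_MISS_RATE                   # AI text that slips through
caught_ai = ai_docs - missed_ai                      # AI text correctly flagged
flagged_humans = human_docs * HUMAN_FALSE_POSITIVE   # humans wrongly flagged

# Of everything the detector flags, how much is actually AI-generated?
precision = caught_ai / (caught_ai + flagged_humans)

print(f"AI documents missed:    {missed_ai:.0f} of {ai_docs}")
print(f"Humans wrongly flagged: {flagged_humans:.0f} of {human_docs}")
print(f"Share of flags that are correct: {precision:.0%}")
```

On these assumptions, 370 of the 500 AI documents sail through undetected while 45 human authors are wrongly flagged, which is why such detectors are difficult to rely on in practice.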

Solutions to This Problem May Still Take Time

Currently, there is no straightforward way to differentiate AI-generated content from authentic materials. The greatest threats appear to be faked news stories, altered photos and real-sounding synthetic speech. As the technology advances, however, the sky may be the limit for what AI can create.

Some have proposed watermarking AI content, but much of what has been discussed is either theoretical or still in development. Most importantly, any solution that identifies generative AI materials at the source will require the cooperation of those doing the generating.
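One family of proposals discussed in the research community works by statistically watermarking text at generation time: the model’s sampler is nudged toward a pseudorandom “green list” of words derived from a secret key, and a verifier holding that key can later test whether a text is too “green” to plausibly be human. The Python sketch below illustrates only the verification side of that idea; it is a simplified toy, not any vendor’s actual scheme, and the key, hashing choice and word-level granularity are assumptions for illustration.

```python
import hashlib
import math

SECRET_KEY = "hypothetical-shared-key"  # assumption: generator and verifier share this

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign roughly half of all words to a 'green list',
    seeded by the secret key and the preceding word."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # ~50% of words are green for any given context

def green_fraction(text: str) -> float:
    """Fraction of word transitions that landed on the green list."""
    words = text.lower().split()
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

def z_score(text: str) -> float:
    """How many standard deviations the green fraction sits above the
    50% expected of unwatermarked text."""
    n = max(len(text.lower().split()) - 1, 1)
    return (green_fraction(text) - 0.5) * math.sqrt(n) / 0.5

# Unwatermarked human text should score near z = 0; text generated with
# the green-list bias applied would score far above it.
print(z_score("the quick brown fox jumps over the lazy dog"))
```

The catch, as noted, is that this only works if the party generating the text applies the watermark in the first place; a model run without the bias produces nothing for the verifier to find.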

Regulators and policymakers are aware of these risks, and they are already working on how to reduce or even eliminate this problem. However, given the novelty of the technology and the uncertainty about how it will evolve, regulators likely will opt for transparency obligations instead of blanket prohibitions.