AI or Human? It Can Be Hard to Tell

How do we decide whether a piece of text was created by a human, artificial intelligence (AI) or a combination of the two? Some studies find that more than half of people cannot accurately identify content written by AI chatbots, and with content generated by GPT-4, the latest language model powering ChatGPT, the failure rate rises to 66%.

The implication for regulators, businesses and the public at large is that many people are susceptible to AI-driven manipulation without realizing it. This can have troubling and profound consequences for society, as AI-generated content has the potential to influence large swaths of the population based on false premises.

The tools to spot AI-generated content are less than reliable.

As generative AI grows, countermeasures to detect artificially generated content are following close behind. Companies such as Originality.ai, Turnitin and Copyleaks are building tools intended to distinguish original human writing from machine-generated text and material recycled from online sources.
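
These vendors do not publish their exact methods, but one widely discussed heuristic is perplexity: measuring how statistically predictable a passage is under a language model, on the theory that machine-generated text tends to be more predictable than human writing. The following is a minimal sketch of that idea, assuming the Hugging Face transformers library, the small public GPT-2 model and a hypothetical threshold; it is not the approach of any product named above.

```python
# A minimal sketch of one common detection heuristic: perplexity scoring.
# Assumptions (not any vendor's real method): the Hugging Face
# "transformers" library, the small GPT-2 model, and an arbitrary,
# uncalibrated threshold chosen purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

MODEL_NAME = "gpt2"  # small public model, for illustration only
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # average cross-entropy loss over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

THRESHOLD = 40.0  # hypothetical cutoff; real detectors calibrate on data

text = "The quick brown fox jumps over the lazy dog."
score = perplexity(text)
verdict = "possibly AI-generated" if score < THRESHOLD else "likely human-written"
print(f"perplexity = {score:.1f} -> {verdict}")
```

The weakness is visible even in this toy version: plain, formulaic human prose also scores low perplexity, while lightly edited AI output can score high, which is one reason such classifiers misfire in both directions.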

These tools are promising but far from infallible: their accuracy is difficult to measure, varies widely and in some cases is very poor. OpenAI recently made news for retiring its own AI classifier due to its “low rate of accuracy” in detecting artificial content.

It is in the public interest to distinguish human- from AI-generated content.

As AI models grow in size and complexity, distinguishing AI-produced content from human-sourced material becomes increasingly difficult. The same can be said for fraud: the more sophisticated the model, the more challenging it is to spot fraudulent activity.

However, exposing artificial content is worth the effort and is critical to the integrity and survival of industries and organizations that depend on authentic, trustworthy information.