An IP Expert Deconstructs Artificial Intelligence’s Right to Create

Generative artificial intelligence (AI) capabilities are transforming the world.

But will their ability to generate fabricated video, audio and text from copyrighted material in real time run up against a brick wall of preexisting laws? Or does something have to give?

After all, individuals and organizations with questionable intent have been spoofing and exploiting images, likenesses and more since nearly the dawn of mass-market advertising, and fan-fiction sites rank among the world’s most visited and most engaged-with destinations.

“There are a lot of ways that the existing doctrines can be applied to this new technology,” Christian Mammen, partner and chair of Womble Bond Dickinson’s U.S. Intellectual Property Litigation Group, told PYMNTS as part of the “TechReg Talks” series.

The modern concept of intellectual property (IP) has been around since at least the 17th century, although it wasn’t until the late 20th century that it was codified across the majority of the world’s legal systems.

Still, as AI technology continues to evolve, it is becoming clear that existing IP and copyright concepts, such as fair use, may need to adapt in order to effectively handle cases related to generative AI.

“I don’t know that we need a full-blown overhaul of the laws just to accommodate this new technology. But there may be some places where it’s worth having a conversation about tinkering with the law or modifying the laws in certain ways,” Mammen said.

Read more: How AI Regulation Could Shape Three Digital Empires

The Input-Output Dynamic

In the fast-paced world of generative AI, the ability to mimic someone else’s style at massive scale has ignited debate over expanding digital moral rights and prompted calls for a thoughtful conversation about modifying existing laws so that creators’ rights are upheld in the digital realm.

Mammen explained that many of the current copyright issues surrounding AI involve fair use cases, in which AI platforms used copyrighted material in the data sets that trained their models, as well as the question of whether the output of an AI product or model can itself be protected by copyright.

“There are also cases involving creating visual images using an AI that are in the style of some famous artists as well as potentially textual works that are stylistically similar to known writers,” he added. “In the context of generative AI where there’s the possibility of re-creating a [known style] on an industrial scale, it raises the question whether we ought to be talking about expanding some sort of digital moral rights in our stylistic characteristics.”

The issue of patent rights also comes into play when considering AI-created inventions. Traditionally, U.S. courts have held that a human inventor is required for a patent. However, the emergence of AI-generated inventions challenges this notion.

Mammen noted that there “is an active [legal] discussion underway” about whether or not an AI can be issued a patent.

Read also: AI Regulations Need to Target Data Provenance and Protect Privacy

Regulations Around the World

While the U.S. has taken a relatively light-touch approach to AI regulation, the European Union (EU) has adopted a more comprehensive regulatory framework focused on data privacy and personal rights. This raises the question of whether the U.S. should follow the EU’s lead in adopting broad-reaching AI regulation, a dynamic in which EU rules become the de facto global standard, commonly known as the “Brussels Effect.”

Even China has moved forward with an interim set of rules that go into effect this August.

“It’s a pattern that we saw in the regulation of privacy, where the EU adopted a more comprehensive regulation and the U.S. has taken a lighter-touch approach that’s then been gradually filled in with state-by-state regulations,” Mammen said.

“The challenge or the things to balance are the speed with which this technology is evolving and whether any legislation can make its way through the legislative process with sufficient speed and expertise to meaningfully and appropriately regulate the technology, setting aside the political differences that might arise about how it should be regulated,” he added, noting that areas like biometric privacy are being increasingly regulated by a number of states in the U.S.

In the absence of government regulation, standard-setting bodies and private entities take on a more significant role in setting rules for and auditing AI applications.

Mammen suggested that large technology companies may voluntarily limit the acceleration and deployment of new technologies to allow society to evaluate their impact.

“It may make sense for some of the large technology companies to voluntarily limit the extent to which new technologies are being accelerated and deployed while society as a whole has an opportunity to evaluate that technology,” he said, adding separately that a robust public conversation is also important.

That’s because finding the right balance between regulation and technological advancement will be a challenge that requires public oversight and engagement.

After all, while the future of AI technology remains unknowable, its effective oversight hinges on a comprehensive conversation that considers the ethical, legal and societal implications of this rapidly evolving field.