Deepfakes Are Biggest AI-Related Threat, Says Microsoft President

The president of Microsoft says deepfakes are his biggest AI-related worry.

Speaking Thursday (May 25) in Washington on how to regulate artificial intelligence (AI), Brad Smith said there needs to be a way for people to tell real content from AI-generated material, especially when that material could have been created for illicit purposes.

“We’re going to have to address the issues around deep fakes,” said Smith, whose comments were reported by Reuters.

“We’re going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians.”

Smith also called for measures to “protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI,” as well as for licensing of the most critical forms of AI to protect physical, cyber and national security.

His comments come as AI enjoys a surge in popularity that — as noted here Thursday — would have seemed like science fiction a decade ago.

Then came late 2022 and the rise of generative pre-trained transformers (GPT), marking a dramatic acceleration in the dream of AI software that could mimic human interactions. With it came a wave of investment in the technology, as well as debates on how to regulate it.

“While the world has been discussing potential problems with privacy and copyright created by generative AI, the most serious challenge is likely to be differentiating AI’s creations from the original work of humans,” PYMNTS wrote. “The implications stretch from fraud to something as basic as the value of human creativity.”

“While AI grows in sophistication, AI detectors struggle to identify content from even early versions of AI text generators,” PYMNTS noted.

How AI will be regulated is now being debated in the U.S. and around the world. The U.S. Senate last week heard testimony from Sam Altman, CEO of ChatGPT maker OpenAI, who testified that “regulation of AI is essential.”

Altman told senators his company is “eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits.”

And last weekend, the heads of the G7 nations issued a bulletin promoting talks aimed at reaching a “common vision and goal of trustworthy AI.”

Meanwhile, deepfake technology has been a popular investment among venture capitalists, who put $187.7 million into companies working on the technology last year, up from $1 million in 2017.