A Chinese app that lets users convincingly swap their faces with film or TV characters has rapidly become one of the country’s most downloaded apps, triggering privacy concerns, Reuters reported Monday (Sept. 2).
Users provide a series of selfies in which they blink, move their mouths and make facial expressions, which the app uses to realistically morph the person’s animated likeness onto movies, TV shows or other content.
But the user agreement raised privacy concerns: people uploading their pictures to ZAO agree to “surrender the intellectual property rights to their face, and permit ZAO to use their images for marketing purposes,” the Reuters article said.
ZAO said on Weibo that it would address those concerns.
“We thoroughly understand the anxiety people have toward privacy concerns,” the company said. “We have received the questions you have sent us. We will correct the areas we have not considered and require some time.”
There has been growing concern over deepfakes, fabricated videos that use artificial intelligence (AI) to appear genuine. Critics say the technology can be used to create bogus videos to manipulate elections, defame someone, or potentially cause unrest by spreading misinformation on a massive scale.
Deepfakes of this kind can believably impersonate famous personalities and make them say whatever the aspiring faker types. U.S. politicians are scrambling to work out how to regulate the threat before the next election cycle. U.S. Rep. Adam Schiff has described it as a source of “nightmarish scenarios” for the 2020 presidential election, Bloomberg reported.
AI and machine learning do play vital roles in stopping fraud and other attacks on social media and elsewhere. In general, the main advantage AI offers, assuming the system is properly configured and deployed, is its ability to look deep into consumer and payments data to find patterns of fraud, and to do so by looking at information across users rather than examining one user at a time.
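The cross-user idea can be made concrete with a toy sketch. The example below is purely illustrative and is not drawn from any real fraud system: it pools transaction amounts from all users at a given merchant (the names `flag_cross_user_outliers`, `z_threshold`, and the sample transactions are all hypothetical), then flags any single transaction that deviates sharply from the population, a signal no per-user view would surface.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical transaction records: (user_id, merchant, amount).
# Nine typical purchases and one anomalous one at the same merchant.
transactions = [
    ("u1", "shopA", 20.0), ("u2", "shopA", 22.0), ("u3", "shopA", 19.0),
    ("u4", "shopA", 21.0), ("u5", "shopA", 20.0), ("u6", "shopA", 18.0),
    ("u7", "shopA", 23.0), ("u8", "shopA", 20.0), ("u9", "shopA", 21.0),
    ("u10", "shopA", 500.0),
]

def flag_cross_user_outliers(records, z_threshold=2.0):
    """Pool amounts per merchant across ALL users, then flag any
    transaction whose amount deviates strongly from that population."""
    by_merchant = defaultdict(list)
    for _, merchant, amount in records:
        by_merchant[merchant].append(amount)

    flagged = []
    for user, merchant, amount in records:
        amounts = by_merchant[merchant]
        if len(amounts) < 3:
            continue  # too little population data to judge
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma and abs(amount - mu) / sigma > z_threshold:
            flagged.append((user, merchant, amount))
    return flagged

print(flag_cross_user_outliers(transactions))
```

Examined one user at a time, u10's single purchase gives no history to compare against; only pooling across users exposes it as anomalous. Production systems replace the z-score with learned models over far richer features, but the cross-user framing is the same.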