Researchers have warned of the urgency of addressing deepfakes and other forms of digital deception ahead of the upcoming presidential election.
The House Subcommittee on Consumer Protection and Commerce heard testimony from digital experts, as well as a representative from Facebook, about the measures tech firms are taking to combat deepfakes and what further action should be taken at the federal level, CNBC reported Wednesday (Jan. 8).
Chairwoman Jan Schakowsky, D-Ill., in her opening remarks bemoaned Congress' “laissez-faire” approach toward digital platform moderation over the past 10 years. Schakowsky said, according to the report, “The result is Big Tech failed to respond to the great threats posed by deep fakes ... as evidenced by Facebook scrambling to announce a new policy that strikes me as wholly inadequate.”
The legislator was reportedly referring to the policy the social media company had rolled out a day prior, which prohibits heavily manipulated videos made with machine learning or artificial intelligence. Schakowsky, however, acknowledged that the new policy would not cover a recent, doctored video of Speaker of the House Nancy Pelosi that circulated on Facebook and other platforms.
The video had been slowed down to make her speech appear slurred. A Facebook executive at the hearing noted that the Pelosi video would not fall under the new deepfake policy but would still be subject to the misinformation policies already in place.
Experts reportedly highlighted both the national security and the societal implications of manipulated digital media. Lawmakers and the experts, however, reportedly did not agree on how involved Congress should be in ensuring that tech firms monitor deceptive content responsibly.
Research from cybersecurity firm Deeptrace shows there were almost 14,700 deepfake videos online as of November 2019, compared to nearly 8,000 in December 2018. Other studies illustrate that deepfakes can be created from a relatively small amount of “input” material, such as pictures or videos, fed into algorithms.
In a prior interview with PYMNTS, Reinhard Hochrieser, vice president of product management at authentication services provider Jumio, said that when it comes to the emergence of deepfakes, “from an identity verification perspective, we are right at the beginning. But the technology is already extremely solid. It’s like a Hollywood special effect.”