Meta Phases Out Human Moderators as AI Detection Outpaces Teams


Meta said Thursday (March 19) that over the next few years, it will shift the content enforcement efforts on its apps from the current third-party vendors to the company’s new artificial intelligence systems.


    “While we’ll still have people who review content, these systems will be able to take on work that’s better suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as illicit drug sales or scams,” the owner of Facebook and Instagram said in a Thursday blog post.

    Meta’s content enforcement efforts encompass content having to do with issues such as terrorism, child exploitation, drugs, fraud and scams, according to the post.

    The company has experimented with AI systems for these efforts over the past year. It found that the systems flagged 5,000 scam attempts per day that human teams had not caught, identified more accounts attempting to impersonate celebrities and other high-profile people, and caught twice as much adult sexual solicitation content. The systems also prevented an account takeover by spotting clues a human may have missed and reduced views of ads containing scams and other violations by 7%, the post said.

    People will still play a key role in appeals of account disablement, reports to law enforcement and other critical decisions, per the post.


    “Over the next few years, we’ll be deploying these more advanced AI systems across our apps once we’ve seen them consistently perform better than our current methods of content enforcement, transforming our approach,” Meta said in the post.


    In a separate effort, Meta said March 11 that it launched AI-powered anti-scam tools for its Facebook, WhatsApp and Messenger platforms.

    In February, the company announced that it was suing advertisers who allegedly impersonated celebrities to defraud consumers.

    Meta also said in the Thursday blog post that it is rolling out a Meta AI support assistant it previewed in December. The support assistant will be added to the Facebook and Instagram apps for iOS and Android and to the Help Center on Facebook and Instagram on desktop.

    The support assistant can answer questions about account problems and, with the user’s permission, act on requests such as reporting problematic content, managing privacy settings and resetting passwords.

    “It can respond to requests typically in under five seconds, dramatically reducing wait times compared to traditional help center searches or seeking answers on external websites,” Meta said in the post.