YouTube Expands AI Safety Features With New Likeness Detection System


YouTube has launched its likeness detection system, a new tool that lets creators identify and request removal of AI-generated videos using their face or voice without consent. According to TechCrunch, the feature is being rolled out first to select members of the YouTube Partner Program after an initial pilot phase.


Creators can verify their identity in the “Likeness” tab of YouTube Studio using a selfie video and a government-issued ID. Once verified, they can review flagged content that mimics their likeness and submit removal requests directly. YouTube said participation is voluntary, and scanning stops within 24 hours for users who opt out. The system builds on the company’s Content ID infrastructure, which historically has been used to manage copyright claims, extending that protection to likeness and voice replication.

The update adds a security layer to YouTube’s growing suite of artificial intelligence-driven features. Earlier this year, the platform introduced AI-powered creative tools to help users streamline production, editing and discovery. The new detection tool complements those initiatives by focusing on identity protection as deepfakes and synthetic media become more widespread.

A CBS News investigation recently found that complaints about deepfake-driven misuse of celebrity and creator likenesses have more than doubled this year. YouTube said its system is designed to detect AI-generated visuals and audio that replicate real individuals without authorization, allowing creators to act before the content spreads.

YouTube CEO Neal Mohan said the company’s goal is to give creators “choice and control” over how AI interacts with their content. The company described the system as a “consent-first” technology intended to reinforce privacy and transparency within its creator ecosystem. Analysts say the rollout signals a shift among platforms toward addressing AI risks proactively rather than reactively.

YouTube’s move comes as platforms across the media industry race to balance innovation with identity protection. The company’s approach aligns with its broader AI roadmap, which, as PYMNTS reported, integrates monetization, automation and safety within creator workflows.


The likeness detection system will initially be available to a limited group of verified creators before expanding more widely. YouTube said additional privacy controls and transparency updates are planned as the feature scales, positioning the tool as part of a broader shift toward responsible AI governance in digital media.
