Google Adds Security Layers to Safeguard Agentic Browsing With Chrome

Google has announced new tools it will use to improve the safety of agentic browsing with Chrome.

    Get the Full Story

    Complete the form to unlock this article and enjoy unlimited free access to all PYMNTS content — no additional logins required.

    yesSubscribe to our daily newsletter, PYMNTS Today.

    By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions.

    The tools are designed in part to combat “the primary new threat facing all agentic browsers,” indirect prompt injection, Nathan Parker, Chrome security team, said in a Monday (Dec. 8) blog post.

    Indirect prompt injection, which can appear in malicious sites, in third-party content in iframes, or from user-generated content such as user reviews, can cause the agent to take unwanted actions, including initiating financial transactions or exfiltrating sensitive data, according to the post.

    To combat this threat, Google has added new layers to its existing protections, the post said.

    These include a new user alignment critic that involves a separate model isolated from the untrusted content vetting the actions of the agent, and an extension of Chrome’s origin-isolation capabilities to limit the origins with which the agent can interact to those that are relevant to the task, per the post.

    The layers also include user confirmation for critical steps, real-time detection of threats, and red-teaming and response, according to the post.

    Advertisement: Scroll to Continue

    “The upcoming introduction of agentic capabilities in Chrome brings new demands for browser security, and we’ve approached this challenge with the same rigor that has defined Chrome’s security model from its inception,” Parker said in the post. “By extending some core principles like origin-isolation and layered defenses, and introducing a trusted-model architecture, we’re building a secure foundation for Gemini’s agentic experiences in Chrome.”

    It was reported in November that Google Deepmind, Microsoft, Anthropic and OpenAI were among the tech companies working to stop indirect prompt injection attacks.

    The report said these attacks happen when a third party hides commands inside a website or email to trick AI models into turning over unauthorized information.

    Companies are doing things like hiring external testers and using AI-powered tools to ferret out and prevent malicious uses of their technology, but experts caution that the industry still hasn’t determined how to stop indirect prompt injection attacks, the report said.

    Later in November, Anthropic published research showing its Claude Opus 4.5 model reduced successful prompt injection attacks to 1% in browser-based operations, down from earlier versions that faced higher breach rates when adversaries embedded malicious instructions in web content.