Google details security measures for Chrome Agent features

An increasing number of browsers are experimenting with agent features that take actions on your behalf, such as booking tickets or buying items. However, these agentic capabilities also come with security risks that can lead to loss of data or money.

Google has detailed its approach to keeping users secure in Chrome, which relies on observer models and user consent. The company unveiled agentic capabilities for Chrome in September and said they would roll out in the coming months.

The company said it uses several models to supervise agent activity. Google said it built a User Alignment Critic with Gemini to analyze the actions that the planner model proposes for a given task. If the critic decides the planned actions do not serve the user's goal, it asks the planner to rethink its strategy. Google noted that the critic only sees metadata about the proposed actions, not actual web content.
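
Google has not published code for this, so the following Python sketch is only an illustration of the planner/critic pattern the post describes: a critic that reviews action metadata and can send the plan back for revision. Every name here (PlannedAction, critic_approves, run_task, and so on) is a hypothetical stand-in, not Chrome's actual API.

```python
from dataclasses import dataclass

@dataclass
class PlannedAction:
    kind: str          # e.g. "click", "type", "navigate"
    target_label: str  # metadata only -- no page content is exposed to the critic
    origin: str        # site the action targets

def critic_approves(goal: str, actions: list[PlannedAction]) -> bool:
    """Hypothetical critic: judges whether the planned actions serve the
    user's stated goal, looking only at action metadata."""
    # In practice this would be a call to a Gemini-based critic model;
    # here it is stubbed out with a trivial placeholder check.
    return all(a.kind in {"click", "type", "navigate"} for a in actions)

def run_task(goal: str, plan_fn, execute_fn, max_replans: int = 3):
    """Planner/critic loop: replan until the critic accepts or we give up."""
    for _ in range(max_replans):
        actions = plan_fn(goal)
        if critic_approves(goal, actions):
            return execute_fn(actions)
    raise RuntimeError("Critic rejected every proposed plan; aborting task.")
```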

Additionally, to keep the agent away from disallowed or untrusted sites, Google uses agent provenance sets that restrict the model to a set of read-only origins and a set of read-write origins. A read-only origin is one whose data Gemini can use as context. On a shopping site, for example, listings relevant to the task are readable, but ad banners are not. Similarly, Google said the agent can only click and type within specific iframes on a page.

“This separation forces an agent to only access data from a limited set of sources, and that data can only be passed to writable sources. This limits the threat vector of data leaks from different sources. It also gives the browser the ability to enforce some of this separation, for example by not sending data from outside the readable set to the model,” the company said in a blog post.
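
Google's post does not include implementation details, but the separation it describes can be pictured with a short Python sketch: content reaches the model only from readable origins, and the agent may act only on writable origins. The origin lists, function names, and enforcement logic below are illustrative assumptions, not Chrome's actual implementation.

```python
READABLE_ORIGINS = {"https://shop.example.com"}  # hypothetical task-relevant sources
WRITABLE_ORIGINS = {"https://shop.example.com"}  # origins the agent may act on

def gather_context(candidates: dict[str, str]) -> dict[str, str]:
    """Only content from readable origins is ever passed to the model."""
    return {origin: text for origin, text in candidates.items()
            if origin in READABLE_ORIGINS}

def perform_action(origin: str, action: str) -> None:
    """Clicks and typing are allowed only on writable origins."""
    if origin not in WRITABLE_ORIGINS:
        raise PermissionError(f"Agent may not act on {origin}")
    print(f"{action} on {origin}")  # placeholder for the real browser action
```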

Google also vets page navigation by inspecting URLs with a separate observer model. According to the company, this can prevent navigation to malicious URLs that the model might generate.
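
As a rough illustration of that kind of navigation gate, here is a hypothetical Python sketch. The allowed schemes, blocklist, and function names are assumptions made for the example; a real check would consult an observer model and reputation services rather than a hard-coded list.

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}
BLOCKLIST = {"evil.example.net"}  # hypothetical reputation data

def navigation_allowed(url: str) -> bool:
    """Vet a model-generated URL before the browser is allowed to load it."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    if parsed.hostname in BLOCKLIST:
        return False
    # A production check would also call a dedicated observer model;
    # this is only a placeholder.
    return True

def navigate(url: str) -> None:
    if not navigation_allowed(url):
        raise PermissionError(f"Navigation to {url} blocked")
    print(f"Navigating to {url}")
```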

Screenshot: a Chrome browser agent asking the user for permission before paying for an item while shopping.

The search engine giant said it is also handing the reins to users for sensitive tasks. For example, when the agent tries to navigate to a site that holds sensitive information, such as banking or medical records, it asks the user first. For sites that require a login, it asks the user for permission to use Chrome's password manager; Google said the agent model never has access to password data itself. The company added that it will also ask users before taking actions such as making a purchase or sending a message.
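
The consent flow Google describes amounts to pausing before any sensitive action and asking the user. A minimal sketch, assuming a simple list of sensitive action types and a console prompt standing in for Chrome's permission UI, might look like this:

```python
SENSITIVE_KINDS = {"purchase", "send_message", "login", "open_banking_site"}

def confirm_with_user(description: str) -> bool:
    """Stand-in for the browser's permission prompt; here it just asks on stdin."""
    return input(f"Allow the agent to {description}? [y/N] ").strip().lower() == "y"

def execute(kind: str, description: str, do_it) -> None:
    """Pause and ask before any sensitive action; proceed otherwise."""
    if kind in SENSITIVE_KINDS and not confirm_with_user(description):
        print("Action cancelled by user.")
        return
    do_it()
```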

Google said that, in addition to this, it has a prompt injection classifier that blocks unwanted actions, and that it tests the agent's defenses against attacks crafted by security researchers.
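
Google has not described how its classifier works, but the general idea of screening page text for injected instructions before it reaches the agent can be sketched as follows. The keyword check below stands in for a trained classifier and is purely illustrative.

```python
def classify_injection(text: str) -> float:
    """Stub for a prompt-injection classifier; returns an attack probability.
    A real system would call a trained model rather than match keywords."""
    suspicious = ("ignore previous instructions", "transfer funds", "reveal password")
    return 1.0 if any(s in text.lower() for s in suspicious) else 0.0

def sanitize_context(snippets: list[str], threshold: float = 0.5) -> list[str]:
    """Drop any page snippet the classifier flags before it enters the agent's context."""
    return [s for s in snippets if classify_injection(s) < threshold]
```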

AI browser makers are also paying attention to security. Earlier this month, Perplexity released a new open-source detection model aimed at preventing prompt injection attacks against agents.
