AI & TechArtificial IntelligenceCybersecurityNewswireTechnology

AI Browsers Expose Critical Security Gaps, Researchers Warn

▼ Summary

– A new report identifies architectural security weaknesses in AI browsers like Perplexity’s Comet, which could introduce new cyber-risks as they automate user tasks.
AI browsers integrate AI assistants to perform searches, summaries, and online actions via natural-language prompts, with major platforms planning similar capabilities.
– Security challenges include malicious workflows, prompt injection, malicious downloads, and trusted app misuse that could expose data or install malware.
– Existing security tools have limited visibility into AI browser behavior, making it hard to distinguish automated actions from human ones.
– The report recommends safeguards like agentic identity systems, data loss prevention policies, and client-side file scanning to secure AI browsers.

A recent security analysis from SquareX Labs reveals that the emerging category of AI-powered browsers contains significant architectural vulnerabilities. These findings highlight a troubling trade-off: while these browsers use artificial intelligence to streamline online tasks, they simultaneously open the door to novel cybersecurity threats that current infrastructures may not be equipped to handle.

This new generation of browsers embeds AI assistants directly into the user’s web experience. Instead of traditional navigation, people can perform searches, summarize content, and execute online actions using simple conversational commands. Following Perplexity’s launch of its Comet browser in July, other firms like OpenAI, The Browser Company, and Fellou AI have introduced comparable products. Even established platforms such as Google Chrome and Microsoft Edge have announced their own roadmaps for integrating AI-driven functionalities.

SquareX suggests this shift could fundamentally alter how individuals and businesses engage with the internet. However, the report cautions that present browser architectures likely fail to address the unique security problems introduced by autonomous AI behavior.

Researchers have organized the primary security concerns into four distinct categories.

Malicious workflows present a clear danger. AI agents can be tricked by sophisticated phishing schemes or OAuth-based attacks that request overly broad permissions. This could lead to unauthorized access to sensitive information stored in email or cloud services.

Prompt injection is another serious issue. Attackers might hide malicious instructions inside trusted applications like SharePoint or OneDrive. An AI agent, interpreting these hidden commands, could be manipulated into sharing confidential data or inserting harmful hyperlinks.

The risk of malicious downloads is also elevated. AI browsers can be guided to retrieve disguised malware through tampered-with search results, believing the files to be legitimate.

Finally, trusted app misuse is a concern. Even legitimate and widely-used business tools can be repurposed to send unauthorized commands through AI-mediated interactions, bypassing traditional security checks.

SquareX experts stress that protecting users of AI browsers will demand a coordinated effort from browser developers, corporate security teams, and cybersecurity vendors. They point out that current security solutions, including SASE and EDR platforms, offer limited visibility into activities performed by an AI agent. This makes it challenging to distinguish between actions taken by a human and those executed automatically by the browser’s AI.

To counter these emerging threats, the report proposes several protective measures.

Implementing agentic identity systems would help differentiate between actions initiated by a human user and those performed autonomously by the AI. Establishing robust data loss prevention policies within the browser itself could stop sensitive information from being exfiltrated. Adding client-side file scanning before any download completes would help detect malware. Performing thorough extension risk assessments would identify potentially unsafe or compromised browser add-ons that could interact with the AI.

As AI functionality becomes a standard feature in web browsers, the researchers conclude that building security directly into these systems from the ground up is non-negotiable. Proactive integration of safeguards is essential to prevent the accidental exposure of private and corporate data.

(Source: Info Security)

Topics

ai browsers 95% security weaknesses 93% malicious workflows 88% prompt injection 87% malicious downloads 86% trusted app misuse 85% browser architecture 84% security collaboration 83% agentic identity 82% data loss prevention 81%