The Danger of Using AI Browsers

web browser
AI browsers promise smarter browsing, but their built-in agents expose massive security gaps that even their creators admit they can’t fix.

Silicon Valley’s latest obsession is the AI browser. Companies like OpenAI and Perplexity are pitching them as the next big leap in technology.

Browsers like ChatGPT Atlas and Comet have built-in AI agents that can read, summarize, and even act on the pages you visit. They promise convenience. They’ll handle your emails, manage forms, and automate tasks across accounts.

That same autonomy makes them a cybersecurity nightmare. When you let an AI agent act on your behalf, you’re giving it the keys to your digital life… and trusting it not to hand them to anyone else.

How Hackers Exploit AI Agents

Unlike traditional browsers that display web pages passively, AI browsers interpret them. They read every piece of text, visible or hidden, as a possible instruction.

Traditional browsers isolate data and enforce strict boundaries between sites. AI browsers erase many of those lines. That design opens the door to prompt injection attacks. It’s a method where hackers hide malicious commands inside a webpage. The AI agent “reads” those hidden instructions and executes them, often without the user knowing.

Researchers have demonstrated how simple this is. In one test, a modified version of Opera’s Neon browser was tricked into sending user email data via invisible text. Perplexity’s Comet browser showed similar flaws, revealing how easy it is for someone to hijack AI browsers through unseen prompts.

The danger is that the AI doesn’t understand intent. It can’t tell the difference between a helpful instruction and a harmful one. If a hidden prompt says “export all emails containing invoices,” the AI will do it.

A Cybersecurity Nightmare 

AI browsers are risky because of the access they require. They need permission to access your email, cloud storage, banking, and more. Every integration creates another vulnerability.

Even small prompt injections can have big consequences. Researchers have shown that embedded AI tools inside word processors like Google Docs can execute hidden commands. If that can happen in a single document, imagine the potential damage inside a browser that has your passwords and logins.

Rushed to Market, Light on Testing

Despite these risks, AI browsers are being rushed to market. Developers are so focused on getting as many users for their AIs, they can’t even keep those users safe.

OpenAI’s Atlas and Perplexity’s Comet are still available despite unresolved vulnerabilities. Dane Stuckey, OpenAI’s Chief Information Security Officer, has admitted that prompt injection is an unsolved problem across all AI browsers.

Atlas now includes a “logged-out” mode that limits what the AI can access. The problem is that it also disables most of the browser’s standout features. It’s a workaround, not a solution.

This race to ship first, test later mirrors the early days of social media. When growth was more important, and the fallout came years later.

Until these companies prove they can make AI browsers more secure, anyone who’s thinking about using them should steer clear. They’re prototypes and risky ones at that.

For now, the safest way to use an AI browser is simple: don’t.

You May Also Like