The AI Race Has a New Contender
Perplexity is going head-to-head with Google by releasing a new browser called Comet. It’s sleek, AI-powered, and designed to completely reimagine how we browse the web. Instead of hopping between tabs, copying links, and toggling apps, Comet aims to automate and contextualize everything through a built-in AI assistant that knows what you’re reading, what you’re trying to do, and what’s on your screen at all times.
That’s where my alarms started going off.
As I read more about Comet, I stumbled on something that gave me pause: it may install itself at the kernel level. For anyone unfamiliar with that term, that means it’s not just another app sitting on your desktop. It’s reaching into the core of your operating system. The same privileged layer where antivirus tools and rootkits operate.
And that raises serious questions.
What Kernel-Level Access Really Means
The kernel is the brain of your computer’s operating system. It controls how memory is allocated, how files are accessed, and how devices talk to each other. Software that installs at the kernel level doesn’t just run alongside your system. It can monitor or manipulate it.
This level of access can be used for legitimate purposes like malware detection, but it’s also how spyware, surveillance tools, and malicious rootkits hide in plain sight.
When a browser like Comet hooks into the kernel, it can technically see everything happening on your machine: your open tabs, passwords, messages, files, maybe even what other apps are doing. That’s a massive amount of power. And the problem is that users are not clearly told this is happening.
Transparency Matters, and It’s Missing
Most software that installs kernel-level components explicitly tells you. Antivirus programs throw up multiple warnings. You have to give special permissions. But with Comet? According to Perplexity AI, there’s no clear warning during installation. No confirmation prompt. No “this software will interact with your operating system kernel” disclosure.
That’s not just an oversight. That’s a trust issue.
Comet is built on Chromium, the same open-source foundation as Chrome and Edge. But unlike Chrome or Firefox, which run in user space and don’t need deep system access, Comet appears to quietly hook into deeper layers of your OS for the sake of “context awareness.” And while that might allow the AI assistant to be more helpful, it shouldn’t come at the cost of user consent.
Always Watching, Always Learning
Comet’s pitch is enticing: it can summarize text, compare products, schedule meetings, even automate data visualizations. All without you switching tabs. It does this by seeing everything on your screen and storing behavioral data like your scroll speed, hover time, and active tabs.
It’s like having an assistant that learns from your every move. But to do that, it has to constantly collect data.
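To be clear, none of the code below is Comet’s; Perplexity hasn’t published how its tracking works. But it’s worth seeing just how little effort signals like scroll speed, hover time, and tab focus take to capture. Here’s a minimal sketch using ordinary browser events, with invented names throughout:

```ts
// Illustrative sketch only: the kind of behavioral signals any script with
// access to your pages could record using standard DOM events.
// All names here (telemetry, TelemetryEvent) are made up for the example.

interface TelemetryEvent {
  kind: "scroll" | "hover" | "tab";
  detail: string;
  timestamp: number;
}

const telemetry: TelemetryEvent[] = [];

// Scroll speed: pixels scrolled per second between scroll events.
let lastScrollY = window.scrollY;
let lastScrollTime = performance.now();
window.addEventListener("scroll", () => {
  const now = performance.now();
  const seconds = Math.max(now - lastScrollTime, 1) / 1000;
  const pxPerSecond = Math.abs(window.scrollY - lastScrollY) / seconds;
  telemetry.push({ kind: "scroll", detail: `${pxPerSecond.toFixed(0)} px/s`, timestamp: now });
  lastScrollY = window.scrollY;
  lastScrollTime = now;
});

// Hover time: how long the pointer lingers before leaving an element.
let hoverStart = performance.now();
document.addEventListener("mouseover", () => { hoverStart = performance.now(); });
document.addEventListener("mouseout", (e) => {
  const tag = (e.target as HTMLElement).tagName;
  const ms = performance.now() - hoverStart;
  telemetry.push({ kind: "hover", detail: `${tag} for ${ms.toFixed(0)} ms`, timestamp: performance.now() });
});

// Active tab: fires whenever this tab gains or loses focus.
document.addEventListener("visibilitychange", () => {
  telemetry.push({ kind: "tab", detail: document.visibilityState, timestamp: performance.now() });
});
```

Every name in that snippet is hypothetical, but the point stands: the raw material for a behavioral profile is available to any software that can run a script on your pages, kernel access or not.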
There’s no true incognito mode. There’s no way to prevent it from watching your session, unless you’re a premium user who manually opts out of some tracking features. Even then, you have to trust that the AI agent isn’t quietly indexing things in the background. That’s not privacy. That’s surveillance dressed up as convenience.
Could It Actually Make the Web Safer?
Here’s the frustrating part: AI could help fight malware and phishing. Comet even includes behavioral threat detection and analysis features that monitor for malicious activity using machine learning. It claims to detect malware in downloads before they run and uses federated learning to personalize without directly storing sensitive personal data.
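For readers unfamiliar with the term, federated learning in its textbook form means each device trains a small model on its own data and sends back only the updated weights, which a server averages into a shared model; the raw data never leaves the device. Here’s a rough sketch of that averaging step, a generic illustration of the technique rather than anything Perplexity has published:

```ts
// Minimal sketch of federated averaging (FedAvg): clients send back only
// model weights, never raw browsing data, and the server blends them.
// Purely illustrative; not Perplexity's implementation.

type Weights = number[];

interface ClientUpdate {
  weights: Weights;     // locally trained model parameters
  sampleCount: number;  // how many local examples produced them
}

function federatedAverage(updates: ClientUpdate[]): Weights {
  const totalSamples = updates.reduce((sum, u) => sum + u.sampleCount, 0);
  const global: Weights = new Array(updates[0].weights.length).fill(0);

  for (const update of updates) {
    const share = update.sampleCount / totalSamples; // weight clients by data size
    update.weights.forEach((w, i) => { global[i] += share * w; });
  }
  return global;
}

// Three "devices" contribute updates; only these numbers ever leave them.
console.log(federatedAverage([
  { weights: [0.2, 0.8], sampleCount: 100 },
  { weights: [0.4, 0.6], sampleCount: 300 },
  { weights: [0.1, 0.9], sampleCount: 50 },
]));
```

In principle, your browsing history stays on your machine and only those numbers travel. In practice, you’re still trusting whoever writes the client code to send only the numbers.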
But how do we trust a security feature that won’t even tell users it might be operating at the kernel level?
A kernel-level browser could, in theory, detect rogue extensions scraping your data, like the ones recently exposed in Chrome, Firefox, and Edge. It could block phishing sites before they load. It could stop sensitive tokens from leaking. That’s powerful. But again, that level of control needs transparency, not marketing fluff.
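For what it’s worth, basic phishing blocking of this kind already runs in user space in mainstream browsers. Here’s the simplest possible sketch of the idea, with a blocklist and heuristic invented for the example:

```ts
// Illustrative only: a pre-navigation phishing check that needs no kernel
// hooks. The blocklist entries and the heuristic are invented for the example.

const knownPhishingHosts = new Set([
  "login-paypa1.example",
  "secure-micros0ft.example",
]);

function looksLikePhishing(url: string): boolean {
  const host = new URL(url).hostname;
  if (knownPhishingHosts.has(host)) return true;
  // Crude lookalike heuristic: digits substituted into well-known brand names.
  return /paypa1|micros0ft|g00gle/i.test(host);
}

// A browser would run a check like this before the request is ever sent.
console.log(looksLikePhishing("https://login-paypa1.example/account")); // true
console.log(looksLikePhishing("https://www.paypal.com/signin"));        // false
```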
The Tradeoff No One Wants to Make
At the heart of all this is a difficult tradeoff: automation vs. autonomy.
If you want an AI browser that acts as your agent, reading what you read, doing tasks on your behalf, and eliminating friction, it needs to see a lot. Possibly everything. That’s the model Comet is betting on.
What if you’re not comfortable giving up that much control? What if you don’t want your browser to act like a digital shadow, tracing every move you make online?
Many users already feel like AI is being forced on them. Baked into search, shoved into their operating systems, or replacing features they actually liked. Adding a browser that quietly plugs itself into your system’s core only deepens that mistrust.
Here’s what makes it worse: Comet is only available to Perplexity’s highest-paying subscribers right now, the ones shelling out $200 a month for premium access. That means their most loyal users are the first guinea pigs for this tech. That’s a risky place to test a model that hasn’t fully earned user trust.
Don’t Trade Privacy for Convenience Without a Fight
It’s too early to say whether Comet will succeed. Maybe it’ll revolutionize web browsing. Maybe it’ll fade into obscurity. But if it sets a new precedent, one where kernel-level monitoring becomes normal in AI-powered browsers, the ripple effects will reach far beyond Perplexity.
If AI is going to live on our machines, it needs to be invited, not installed quietly.
Trust can’t be reverse-engineered. It has to be built with transparency, choice, and control. Right now, Comet doesn’t check those boxes. It’s sleek and ambitious, yes. But until it clearly tells users how deep it goes and gives them a real way to say no, it’s not something I’d run on my system. Especially not at the kernel level.