You have spent weeks obsessing over your digital perimeter. You've got PGP for your emails, a hardware security key for your logins, and a VPN that routes your traffic through four different countries just to check the weather. You feel invincible. But then you install a little browser extension to help you find a coupon code for a pizza delivery, and just like that, the front door isn't just unlocked—it is off the hinges. Today's prompt from Daniel hits on exactly this paradox. He is worried that while we obsess over data being encrypted at rest on a server or in transit across the fiber cables, we completely ignore the fact that it eventually has to sit in plain text on our screens, where a whole ecosystem of little software parasites is waiting to gobble it up.
Herman Poppleberry here, and I have to say, Daniel is pulling on a thread that most security experts find terrifying because it is the ultimate "last mile" problem. We have spent decades perfecting the math to make sure the NSA or some highly sophisticated state actor cannot crack our messages while they are sitting on Google's servers. That is what we call encryption at rest. And we have TLS, the successor to SSL, to protect that data while it moves from the server to your computer, which is encryption in transit. But the moment that data hits your browser and gets decrypted so you can actually read it, it enters a state that I like to call "the danger zone."
The danger zone. Sounds like a bad eighties movie, but in this case, the monster under the bed is just a Chrome extension that helps you format your LinkedIn posts. By the way, before we get too deep into the weeds, fun fact—Google Gemini 1.5 Flash is actually writing our script today. Hopefully, it does not have any malicious extensions installed in its own logic. But Herman, let's look at the basic threat model here. Most people are worried about the "Quantum Apocalypse" or some genius hacker intercepting their packets. But what Daniel is saying is that the real threat is much more mundane. It is just the stuff we voluntarily invite into our browsers.
It's the "vampire rule" of cybersecurity. A vampire can't come into your house unless you invite them. We are out here inviting dozens of vampires into our digital living rooms every single day because they promise to make our lives five percent more convenient. When you install a browser extension, you aren't just adding a little button to your toolbar. You are often granting that piece of code the ability to "read and change all your data on the websites you visit." That is the standard permission string in the Chrome Web Store, and it is essentially a digital skeleton key.
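For context, that warning string maps directly to a few lines in the extension's manifest file. Here is a hedged sketch of what a maximally greedy Manifest V3 extension declares; the extension name and script filename are made up, but the `<all_urls>` pattern is exactly what triggers the "read and change all your data on all websites" warning:

```json
{
  "manifest_version": 3,
  "name": "Totally Innocent Coupon Finder",
  "version": "1.0",
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ]
}
```

Two lines of configuration, and `content.js` now runs on every page you open.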
It’s wild because we would never give a random person on the street a key to our house just because they offered to help us organize our mail. But if an extension says it'll save us three dollars on a pair of socks, we click "Add to Chrome" faster than you can say "identity theft." I want to dig into the technical side of this though. How does an extension actually "see" what I’m seeing? If I’m looking at an encrypted ProtonMail inbox or a PGP-decrypted message, isn't that data supposed to be protected from other processes?
You would think so, but the browser architecture is designed for functionality first, security second. Everything you see on a webpage is part of the Document Object Model, or the DOM. Think of the DOM as the skeletal structure of the page. When your browser receives an encrypted email, it uses your private key to turn that gibberish into readable text, and then it injects that text into the DOM so your eyes can process it. The problem is that many extensions operate via what are called "content scripts." These scripts have direct access to the DOM. They can "scrape" the text right out of the elements on the page before you even have a chance to read the first sentence.
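To make that concrete, here is a minimal sketch of the scraping pattern Herman describes. In a real content script this would be called as `harvestText(document.body)`; the function is written here against plain node objects (using the standard DOM convention that `nodeType === 3` means a text node) so the walk itself is easy to follow. `harvestText` is an illustrative name, not a real API.

```javascript
// Sketch: recursively collect every visible text node under a DOM node.
// A content script with page access can do exactly this the moment your
// decrypted email lands in the DOM.
function harvestText(node, out = []) {
  // nodeType 3 is the DOM constant for a text node.
  if (node.nodeType === 3 && node.textContent.trim()) {
    out.push(node.textContent.trim());
  }
  for (const child of node.childNodes || []) {
    harvestText(child, out);
  }
  return out;
}
// A malicious extension would then ship the result off to a server
// its host permissions already allow it to contact.
```

That is the whole trick: no cryptography involved, just a tree walk over text your browser already decrypted for you.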
But wait, Herman—if I’m using a highly secure site, like a bank or a medical portal, doesn't the browser put up some kind of wall? I mean, how can a grammar checker see what I’m typing into a secure password field or a sensitive medical form?
It’s actually simpler than you’d hope. To the browser, the grammar checker is just another script running on the page. If the extension has the "all sites" permission, it injects its own code into the page's environment. It’s like a transparent overlay. If you’re typing into a text box, that text box is an element in the DOM. The extension can attach an "event listener" to that box. Every time you press a key, the extension gets a notification of exactly what character was typed. It doesn’t need to hack the bank’s encryption; it’s just standing inside the room with you while you write the check.
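The event-listener pattern Herman is describing is only a few lines. This sketch keeps the buffering logic separate from the DOM so the mechanics are visible; in a real content script the handler would be registered with something like `document.addEventListener('keydown', logger.onKeydown)`. `makeKeyLogger` is an illustrative helper, not a real extension API.

```javascript
// Minimal sketch of the keystroke-capture pattern.
function makeKeyLogger() {
  const buffer = [];
  return {
    // In a page, this would be the 'keydown' event listener.
    // event.key is the plain-text character or key name pressed.
    onKeydown(event) { buffer.push(event.key); },
    // A malicious extension would periodically exfiltrate this buffer.
    dump() { return buffer.join(''); },
  };
}
```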
So it doesn't matter if the data was encrypted with a trillion-bit key while it was on the server. Once the browser does the work of decrypting it for me, the extension is just standing there with a clipboard taking notes.
Well, not exactly a clipboard. It is more like a literal observer sitting inside the same execution context as the page content. If you are typing your credit card number into a form, a malicious extension can have a "listener" attached to that input field. Every keystroke you make is captured in plain text. It doesn't matter if the website uses HTTPS. HTTPS protects the data between your computer and the server. It does nothing to protect the data between your keyboard and the browser's internal memory.
We saw a really nasty example of this back in 2017 with a popular extension called Copyfish. This was an OCR tool—Optical Character Recognition—that people used to grab text from images. It had tens of thousands of users. A developer's account was compromised through a phishing attack, and the attackers pushed a malicious update to the extension. Because the users had already granted it permission to access their data, the extension started injecting ad-tracking code and redirecting traffic. But it could just as easily have been programmed to look for password fields or bank balances.
That Copyfish case is a perfect example of the "supply chain" risk. Even if you trust the original developer, you can't guarantee that the developer won't get hacked or, even worse, sell the extension to a shady data-brokering company. This happens all the time. An indie developer makes a cool tool, it gets a million users, and then some "marketing firm" offers them fifty thousand dollars for the rights to the extension. The developer takes the money, walks away, and the new owners push an update that turns the extension into a silent data harvester.
It’s the ultimate bait and switch. You think you’re using a tool to manage your tabs, but you’re actually using a tool to catalog your entire digital life for a company in a jurisdiction where privacy laws are just a suggestion. And Daniel mentioned free VPNs in his prompt. That is a huge one. People think they are being "safe" by using a free VPN extension, but often that extension is just a proxy that is logging every single site you visit. If the product is free, you are the product, and in the world of browser extensions, your data is the highest-value currency there is.
The technical mechanism for these free VPN extensions is often just a simple "PAC" file—Proxy Auto-Config—or a basic web request interceptor. They don't provide the system-level encryption of a real VPN client. They just route your browser traffic through their servers. And since they are sitting inside your browser, they can potentially see the headers, the cookies, and the content of the pages you are visiting. It’s the opposite of privacy. It’s like hiring a private investigator to follow you around so you don't get lost, and then being surprised when he sells your diary to the highest bidder.
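A PAC file really is just JavaScript: the browser calls a single function for every request it makes, which is precisely why the proxy operator gets to see every host you visit. A hedged sketch, with a placeholder proxy address:

```javascript
// Proxy Auto-Config sketch. The browser invokes FindProxyForURL for each
// request. "proxy.freevpn.example:8080" is a placeholder, not a real service.
function FindProxyForURL(url, host) {
  // Every url/host pair passes through here, so routing it all to one
  // proxy means the operator can log your entire browsing history.
  return "PROXY proxy.freevpn.example:8080";
}
```

One return statement, and "free VPN" becomes "full traffic log."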
But why is the browser so permissive? If I’m Google or Mozilla, why am I allowing a random coupon finder to see my banking cookies? Isn't there a way to say "hey, you can see the text on Amazon, but stay away from Chase.com"?
There is, but it’s remarkably easy to bypass if the user doesn't know what they're looking at. Most users just click "Accept" on the permission pop-up. From the browser's perspective, you just gave the extension permission to be a part of your browsing experience. It’s a bit like a "power of attorney" for your web traffic. The browser assumes that if you installed it, you want it to work everywhere. And for a lot of useful extensions—like dark mode toggles or translation tools—they actually do need to work everywhere to be useful. That’s the catch-22.
I want to talk about the different browser models because not all browsers treat extensions the same way. We have the big three—Chrome, Firefox, and then the Chromium-based alternatives like Brave and Vivaldi. Chrome obviously has the biggest target on its back. Google’s whole business model is data, so their permission model for extensions has historically been... let's say "generous" to the developers.
That’s a polite way of putting it. Chrome’s model has traditionally relied on "install-time permissions." You see a list of things the extension wants to do, you click okay, and it has those powers forever. Google has been trying to move toward "Manifest Version Three," which is their new framework for extensions. They claim it makes things more secure by limiting what extensions can do, particularly when it comes to modifying network requests. But critics argue it also breaks a lot of privacy tools like ad blockers while still leaving the DOM wide open for data scraping.
And then you have Firefox. I’ve always felt like Firefox was the "nerdy" choice, but in this case, being nerdy pays off. They have a much more manual review process for their Recommended Extensions program.
They do. In fact, there was a case where a malicious password manager extension—which is the ultimate irony—was caught by Firefox's human reviewers. It took Chrome's automated systems nearly half a year longer to flag the same extension on their store. That one hundred and eighty-seven day gap is an eternity in the world of data theft. If a hacker has six months of access to your password manager's DOM, they don't just have your password; they have your entire life.
Is it just a matter of manpower? Does Google just have too many extensions to review, or is it a philosophical difference in how they view the "openness" of the web?
It’s both. Google’s Chrome Web Store has hundreds of thousands of extensions. Reviewing every update manually would require an army. So they rely on automated scanners that look for known malware patterns. But as we know, hackers are great at writing "polymorphic" code—code that changes its appearance to avoid detection. Firefox has a smaller ecosystem, which makes manual review more feasible for their "Recommended" tier. But even then, they can’t catch everything. If a developer pushes a clean version and then remotely activates a malicious payload later, even a human reviewer might miss it during the initial check.
What about Brave? They market themselves as the "privacy browser." Do they actually do anything differently when it comes to extensions, or are they just Chrome with a lion logo and a built-in ad blocker?
Brave is interesting because it is built on the Chromium engine, so it inherits most of Chrome's extension architecture. However, they have a "shields" system that sits at a lower level than extensions. They also try to bake a lot of common functionality—like ad blocking and IPFS support—directly into the browser core. The idea is that if the browser does more out of the box, you don't need to install as many third-party extensions. The fewer extensions you have, the smaller your attack surface. Vivaldi takes a similar approach; they include things like a mail client and a calendar directly in the browser so you don't have to go hunting in the Web Store for tools that might be malicious.
That seems like the most logical path forward. If you can't trust the third-party ecosystem, you have to bring the features "in-house." But most people still want their specialized tools. I use a specific extension for color picking when I’m doing web design. I use one for managing my RSS feeds. Am I just doomed to be vulnerable?
Not necessarily, but you have to change your philosophy from "Why not?" to "Why?" Every extension should be treated as a major security risk. Ask yourself: does this extension actually need to read my data on "all websites"? Many extensions request that permission even if they only need to work on one specific site. For example, an AliExpress price tracker only needs to work on AliExpress. In Chrome and Brave, you can actually go into the extension settings and change "Site Access." You can set it to "On Click" or "On Specific Sites."
Wait, so I can tell my AliExpress tracker that it’s only allowed to wake up when I’m actually on AliExpress? That seems like a massive win that nobody is using.
Most people don't even know that menu exists. If you right-click an extension icon and go to "Manage Extension," you can find the permissions section. By setting it to "On Click," the extension is essentially dormant. It has no access to your DOM or your data until you manually click the icon to "activate" it for that session. It’s not a silver bullet—if you click it while you have your bank account open in another tab, a sophisticated malicious extension might still be able to peek—but it drastically reduces the passive background data harvesting.
How does that work in practice? If I have ten tabs open and I click "activate" on an extension in tab number five, can it see tabs one through four?
Generally, no. When you set an extension to "On Click," the browser only grants it permission to the "active" tab. It’s like giving someone a temporary guest pass to one specific room in your house. They can’t wander into the bedroom or the basement. However, if the extension has a background script running, there are still ways it could potentially communicate with the browser's broader memory. It’s much safer than "Allow on all sites," but it’s still not a total sandbox.
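The well-behaved version of this lives in the manifest too. An extension can request the real `activeTab` permission, which grants access only to the tab you are on at the moment you click its icon, instead of blanket host permissions. A minimal sketch, with an invented extension name:

```json
{
  "manifest_version": 3,
  "name": "Polite Color Picker",
  "version": "1.0",
  "permissions": ["activeTab"],
  "action": { "default_title": "Pick a color" }
}
```

If a tool can work this way and instead asks for `<all_urls>`, that mismatch is your red flag.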
It’s like putting your dog on a leash instead of letting him run wild in the neighborhood. He can still bite someone if you bring them close, but he's not out there digging up the neighbor's garden while you're asleep. But let's go back to the "at rest" versus "on screen" thing Daniel mentioned. This is really the heart of the PGP problem. If I use a web-based PGP tool, the extension has to see the private key at some point to do the decryption. If the extension itself is compromised, my private key is gone.
This is why "hardcore" privacy advocates usually suggest doing your encryption and decryption entirely outside of the browser. Use a standalone app like GPG Suite or Kleopatra. Decrypt the message in a separate window, read it, and then close it. The browser should just be a "dumb pipe" for the encrypted text. But that is a huge friction point for the average user. Most people want to click a "decrypt" button inside their Gmail interface. And the moment you do that, you are trusting the browser's memory and every extension with access to that memory.
It’s the convenience trap. We’ve talked about this before with hardware wallets for crypto. The moment you connect your "cold" storage to a "hot" browser extension like MetaMask, you’ve introduced a bridge that a hacker can cross. It doesn't matter how secure the chip in your hardware wallet is if a malicious extension can trick the browser into showing you a fake transaction confirmation screen.
Right, and that's exactly the point. The "UI Redressing" or "Clickjacking" attacks are very real. An extension can overlay a transparent layer over a website. You think you're clicking "Save" on an email, but you're actually clicking "Authorize" on a malicious script. Because the extension lives "above" the webpage in the browser's hierarchy, it can manipulate the visual reality of what you are seeing.
That is genuinely terrifying. It’s like living in a house where the windows are actually television screens controlled by a stranger. You think you’re looking at your backyard, but you’re actually looking at a pre-recorded loop while someone steals your car. So, if we’re moving toward a world where more and more of our work happens in the browser—I mean, we’ve got Google Docs, Slack, banking, medical records—the browser is effectively our entire operating system now.
It is. We used to worry about viruses on Windows or Mac. Now, the browser is the OS, and extensions are the "apps." But unlike apps on an iPhone, which are heavily sandboxed and have to ask for permission to access your photos or your location, browser extensions have historically had a very "flat" permission model. Once they are in, they are in.
Let's look at some more "high-profile" cases of this. We talked about Copyfish. What about "The Great Suspender"? That was a huge one a few years back. It had millions of users because it helped save memory by putting inactive tabs to sleep.
That is a classic "acquisition" horror story. The original developer sold it in 2020. The new owners immediately added a script that could execute arbitrary code from a remote server. This is the "holy grail" for a hacker. If you can force a million browsers to phone home and ask for instructions, you have a botnet that is more powerful than any supercomputer. You can use those browsers to perform DDoS attacks, click on ads to generate fraudulent revenue, or, as we've been discussing, scrape personal information.
And because the "suspender" extension needed to manage your tabs, it naturally needed permission to access all your sites. Users didn't blink an eye when the update came through. They just thought, "Oh, it's just getting better at managing my tabs." It took months for the community to realize that the extension was now phoning home to a mysterious domain in a way that looked very suspicious.
And this brings up an interesting point about "obfuscation." Malicious developers are very good at hiding their code. They don't just write a script that says "send_passwords_to_hacker_dot_com." They hide the malicious logic inside thousands of lines of legitimate-looking code, or they "lazy load" the malicious part so it only triggers after the extension has been installed for a week. This bypasses the automated scans that Google uses to vet extensions.
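The "lazy load" trick can be as simple as a timestamp check. This is an illustrative sketch—the seven-day delay and the function name are made up—showing why a scan at review or install time sees nothing suspicious:

```javascript
// Sketch of a time-delayed payload gate. During store review and for the
// first week after install, this returns false and the extension behaves
// cleanly; only later does the malicious branch activate.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function shouldActivatePayload(installTimeMs, nowMs, delayDays = 7) {
  return nowMs - installTimeMs >= delayDays * MS_PER_DAY;
}
```

Real malware dresses this up with obfuscation and remote kill switches, but the logic is the same: look boring until nobody is watching.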
So, if the automated scans are failing, and the human reviewers at Firefox are overwhelmed, what is the actual solution? Is it just "don't use extensions"? That feels like telling someone the only way to be safe in a car is to never drive.
The solution is "profile separation." This is something I think every listener should implement today. You should have at least two browser profiles—one for your "sensitive" life and one for your "junk" life. Your sensitive profile should have zero extensions. None. Not even an ad blocker if you can help it—though maybe a very trusted one like uBlock Origin if you must. This is where you do your banking, your health insurance, and your private email.
And then you have your "junk" profile where you can go wild with the flight scrapers, the coupon finders, and the "Which Harry Potter character are you?" extensions.
Precisely. By separating them, you are using the browser's built-in profile isolation to protect your sensitive data. Even if a malicious extension in your "junk" profile is scraping everything in sight, it cannot "reach over" into your "sensitive" profile and see what's happening in your bank account. It's the digital equivalent of having a clean room in a laboratory. You keep the contaminants over there, and you keep the important work over here.
That is such a simple and effective fix, but I bet less than one percent of people actually do it. Most people just have one window with fifty tabs open, ranging from "International Wire Transfer" to "Funny Cat Videos."
And that is exactly what the attackers are counting on. They are counting on your laziness and your desire for a unified experience. Another tip is to use "PWA" versions of apps. Progressive Web Apps, like the ones for Spotify or some email clients, run in their own stripped-down window, and depending on the browser, that can mean less exposure than a tab sitting in your extension-laden main profile.
I also want to touch on the "Least Privilege" principle. We mentioned this briefly, but it’s worth expanding on. If an extension asks for "read and change all your data," but it’s just a "dark mode" toggle for Wikipedia, you should be very suspicious. A well-written extension should only ask for the permissions it absolutely needs.
There is a great tool called "Extension Monitor" and some others that help you audit what you have installed. But honestly, the best audit is a "scorched earth" approach. Go into your extension list right now and delete everything you haven't used in the last week. You’ll be shocked at the "ghost" extensions that are still sitting there, potentially updating themselves in the background and waiting for their moment to strike.
It’s like cleaning out your fridge. You find that jar of mustard from three years ago and wonder why you were keeping it. Except in this case, the mustard might be recording your keystrokes. What about open-source extensions? If I can see the code on GitHub, does that make it safe?
It makes it safer, but it’s not a guarantee unless you are actually compiling the extension from source yourself. Just because the code on GitHub is clean doesn't mean the version uploaded to the Chrome Web Store is the same code. There have been cases where developers maintained a clean open-source repository while pushing a "poisoned" version to the store to monetize their users.
Man, it really is a jungle out there. We’ve talked about the "at rest" and "in transit" encryption being the "fortress," but the browser extensions are like the Trojan Horse that we just wheel right through the front gate. Daniel also asked about the "Sophistication of encryption and privacy we can bake into platforms." It feels like we are reaching a point of diminishing returns. We can make the encryption ten times stronger, but if the "screen" is the vulnerability, the math doesn't matter anymore.
It’s the "Five Dollar Wrench" problem from the famous xkcd comic "Security." You can have million-dollar encryption, but if someone can hit you with a five-dollar wrench until you give them the password, the encryption is useless. In this case, the browser extension is the digital wrench. It doesn't need to "break" the encryption. It just waits for you to do the hard work of decrypting it for your own eyes, and then it takes a peek over your shoulder.
So, looking forward, do you think the browser manufacturers are going to step up? Google's Manifest V3 is controversial, but is it a step in the right direction for security?
It’s a trade-off. It restricts the "webRequest" API, which was a major vector for malicious extensions to redirect traffic. So yes, it makes it harder for some types of malware to operate. But at the same time, it doesn't really solve the DOM scraping problem. As long as extensions need to "interact" with the page to be useful, they will always have the ability to "see" the page.
And with AI becoming more integrated into extensions—I mean, everyone wants an "AI Assistant" that can read their emails and help them draft replies—the amount of data we are handing over is growing exponentially. We aren't just giving them our credit card numbers anymore; we are giving them the entire context of our professional and personal lives.
That is the next frontier of this threat. If an AI extension is "summarizing" your private PGP emails for you, that AI model—and the company that owns it—now has the plain-text content of your entire conversation history. We are essentially voluntarily feeding our most private data into giant corporate "brains" in exchange for a three-sentence summary. It’s a massive privacy regression disguised as a technological leap.
Think about the "Grammarly" model for a second. To fix your grammar, it has to send your text to its servers. If you’re writing a top-secret legal brief or a love letter, Grammarly’s servers now have that text. They might be the most trustworthy company in the world, but if they get hacked, your data is out there. It’s a chain of trust where every link is a potential point of failure.
And that's exactly why I'm so paranoid about "smart" features. Every time a tool offers to "help" you by processing your data, it's creating a new copy of that data in a new location. In the security world, we call this "data sprawl." The more copies of your data exist, the harder it is to keep it all encrypted and secure. Extensions are the biggest drivers of data sprawl in the modern web.
We’ve covered a lot of ground here, but I want to make sure we give people a clear action plan. Because this stuff can feel overwhelming, and when people get overwhelmed, they usually just give up and do nothing.
Step one: Audit. Go to your extensions page right now and delete anything you don't recognize or don't use daily. Step two: Profile separation. Create a "Secure" profile with zero extensions for banking and sensitive work. Step three: Permission management. For the extensions you do keep, right-click them and limit their site access to "On Click" or "On Specific Sites" whenever possible. Step four: Be skeptical. If a "free" tool is asking for broad permissions, it’s not free. You are paying with your data.
And maybe, just maybe, use a standalone app for your PGP if you are actually worried about state-level actors or high-stakes privacy. Don't let your browser be the "everything" app if you can avoid it. It’s just too big of a target.
The browser is a window to the world, but remember that windows work both ways. People can see in just as easily as you can see out.
That is a deep thought for a donkey, Herman. I think we’ve given Daniel plenty to chew on. This really highlights the fact that digital security isn't just about the math; it’s about the "plumbing" of the systems we use every day. You can have the best lock in the world, but if your pipes are leaking, your house is still going to get ruined.
And if you're looking for a way to run your own secure environments without the overhead of traditional servers, check out Modal. They provide the serverless GPU infrastructure that powers a lot of contemporary AI development, including the pipeline that generates this very show.
Big thanks to Modal for providing the GPU credits that keep our gears turning. And of course, thanks to our producer, Hilbert Flumingtop, for keeping us on track and making sure we don't wander too far into the weeds of DOM architecture.
If you found this useful, or if you're now currently panicking and deleting all your extensions, we'd love to hear from you. You can find us at myweirdprompts dot com for all our previous episodes and links to subscribe.
We're also on Telegram—just search for "My Weird Prompts" to get notified when we drop a new episode. It’s a great way to stay in the loop without needing a browser extension to tell you what's new.
This has been My Weird Prompts. Stay safe out there, and watch what you click.
See ya.