I stick to extensions that Mozilla has manually vetted as part of the Firefox recommended extensions program.
> Firefox is committed to helping protect you against third-party software that may inadvertently compromise your data – or worse – breach your privacy with malicious intent. Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.
Yeah, IT pros and tech-aware "power" users can always take these measures, but the very availability of poorly or maliciously coded extensions and apps in popular app stores makes it a problem: normies will get swayed by the swanky features the software promises and will click past all misgivings and warnings. Social engineering attacks are impossible to prevent using technical means alone. Either a critical mass of ordinary people needs to become more safety/privacy conscious, or general purpose computing devices will become more and more niche, as the very industry that creates these problems in the first place through poor review will also sell the solution of universal thin clients and locked-down devices, of course with the very happy cooperation of govts everywhere.
The problem is most codebases are huge - millions of lines when you include all the libraries etc.
Often they're compiled from TypeScript etc., making manual review almost impossible.
And if you demand the developer send in the raw uncompiled source, you have the difficulty of Google/Mozilla having to figure out how to compile an arbitrary project, which could use custom compilers or compilation steps.
Remember that someone malicious won't hide their malicious code in main.ts... it's gonna be deep inside a chain of libraries (which they might control too, or might have vendored).
> I stick to extensions that Mozilla has manually vetted as part of the Firefox recommended extensions program.
If you're feeling extra-paranoid, the XPI file can be unpacked (it's just a ZIP) to check over the code for anything suspicious or unreasonably complex, particularly if the browser extension is supposed to be something simple like "move the up/down vote arrows further apart on HN". :P
While that doesn't solve the overall ecosystem issue, every little bit helps. You'll know it's time to run away if extensions become closed-source blobs.
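For what it's worth, that unpack-and-skim step is scriptable. A rough sketch in Python (the suspicious-pattern list is just my own arbitrary starting point, not any official heuristic):

```python
import re
import zipfile

# Patterns that often warrant a closer look in a supposedly "simple" extension.
# This list is arbitrary and illustrative, not an official review heuristic.
SUSPICIOUS = [
    r"\beval\s*\(",             # dynamic code execution
    r"\batob\s*\(",             # base64 decoding, often used to hide strings
    r"XMLHttpRequest|fetch\(",  # network calls in an extension that may not need them
]

def scan_xpi(path):
    """Unpack an XPI (it's just a ZIP) and flag files matching suspicious patterns."""
    hits = []
    with zipfile.ZipFile(path) as xpi:
        for name in xpi.namelist():
            if not name.endswith((".js", ".json", ".html")):
                continue
            text = xpi.read(name).decode("utf-8", errors="replace")
            for pattern in SUSPICIOUS:
                if re.search(pattern, text):
                    hits.append((name, pattern))
    return hits
```

Anything it flags still needs human eyes - `fetch(` shows up in plenty of legitimate extensions - but it narrows down what to read first.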
Funny enough, the article mentions this extension was manually reviewed:
> A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
What I saw in the Mozilla extensions store was anything from minified code (what is this? it might have been useful in the late 90's on the web, but it surely is not necessary in an extension, which doesn't download its code from anywhere), to just full-on data-stealing code (reported, and Mozilla removed it after 2 weeks or so).
I don't trust the review process one bit if they allow minified code in the store. For the same reason, "manual" review doesn't fill me with any extra warm confidence feeling. I can look at minified code manually myself, but it's just gibberish, and suspicious code is much harder to discern.
Also, I just stopped using third party extensions, except for 2 (violentmonkey, ublock), so I no longer do reviews. I had a script that would extract the XPI into a git repository before update, do a commit and show me a diff.
A friendly extension store for security-conscious users would make it easy to review the source code of an extension before hitting install or update. This is like the most security-sensitive code that exists in the browser.
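The git workflow described above is easy to approximate. A self-contained sketch that diffs two versions of an XPI directly with `difflib` instead of committing each one to a repository (the file layout here is made up):

```python
import difflib
import zipfile

def diff_xpi(old_path, new_path):
    """Show a unified diff between two versions of an extension's code files.

    A stand-in for the git-based workflow: instead of committing each
    unpacked XPI, compare the contents of the two ZIP archives directly.
    """
    def read_files(path):
        with zipfile.ZipFile(path) as z:
            return {n: z.read(n).decode("utf-8", errors="replace")
                    for n in z.namelist() if n.endswith((".js", ".json"))}

    old, new = read_files(old_path), read_files(new_path)
    out = []
    for name in sorted(set(old) | set(new)):
        out.extend(difflib.unified_diff(
            old.get(name, "").splitlines(),
            new.get(name, "").splitlines(),
            fromfile=f"old/{name}", tofile=f"new/{name}", lineterm=""))
    return "\n".join(out)
```

Running it on each update before letting it install is basically a poor man's re-review: a new network call to an unfamiliar domain stands out immediately in the diff.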
The question is, does Mozilla rigorously review every single update of every featured extension? Or did they just vet it once, and a malicious developer may now introduce data collection or similar "features" through a minor update of the extension and keep enjoying the "recommended" badge by Mozilla?
This may also be the reason for the extension being "Featured" on the Chrome Web Store: Google vetted it once, and didn't think about it for each update.
That link doesn't answer the question though. It states that the extension is reviewed before receiving the recommended status. It does not state that updates are reviewed.
They do, and it takes longer for updates to Recommended extensions to be reviewed as a result.
This is what the Firefox add-ons team sent to me when one of my extensions was invited to the Recommended program:
> If you’re interested in Control Panel for Twitter becoming a Firefox Recommended Extension there are a couple of conditions to consider:
> 1) Mozilla staff security experts manually review every new submission of all Recommended extensions; this ensures all Recommended extensions remain compliant with AMO’s privacy and security standards. Due to this rigorous monitoring you can expect slightly longer review wait times for new version submissions (up to two weeks in some cases, though it’s usually just a few days).
> 2) Developers agree to actively maintain their Recommended extension (i.e. make timely bug fixes and/or generally tend to its ongoing maintenance). Basically we don't want to include abandoned or otherwise decaying content, so if the day arrives you intend to no longer maintain Control Panel for Twitter, we simply ask you to communicate that to us so we can plan for its removal from the program.
> I know that Google hates to pay human beings, but this is an area that needs human eyes on code, not automated scans.
I think we need both human review and for somebody to create an antivirus engine for code that's on par with the heuristics of good AV programs.
You could probably do even better than that since you could actually execute the code, whole or piecewise, with debugging, tracing, coverage testing, fuzzing and so on.
They look really legitimate on the outside, to the point that there's a fair chance they're not aware what their extension is doing. Possibly they're "victim" of this as well.
If that looks *really legitimate* to you, then you might be easily scammed. I'm not saying they're not legitimate, but nothing that you shared is a strong signal of legitimacy.
It would take perhaps a few hundred dollars a month to maintain a business that looked exactly like this, and maybe a couple thousand to buy one that somebody else had aged ahead of time. You wouldn't have to have any actual operations. Just continuously filed corporate papers, a simple brochure website, and a couple virtual office accounts in places so dense that people don't know the virtual address sites by heart.
Old advice, but be careful believing what you encounter on the internet!
Don't be silly. If you wanted to sue these guys you'd have a better shot at dragging an actual person in front of a judge than for 99% of the other crap that's on the Chrome Web Store and doesn't provide you with more than an e-mail address.
> Old advice, but be careful believing what you encounter on the internet!
Don't be rude. "Real person" here might live in any country of the world.
And also, why an extension for a VPN? I live in a country where almost everybody uses a VPN just to watch YouTube and read Twitter, and none of my friends use strange extensions. There is open source software for that - from real VPNs like WireGuard to proxy software like nekoray/v2raytun. A browser extension is the last thing I would install to be private.
> What, there's an issue because I'm not being underhanded about it like [that] guy?
Wow you’ve put something into words here I never consciously realized is an unwritten rule. Sounds silly but yea you’re 100% right; that seems to be exactly the game we play.
> you'll have a better shot at dragging an actual person in front of a judge than for 99% of the other crap that's on the chrome web store
Based on what? The same instinct that told you having an address and phone number makes an entity legitimate? The chance the people behind this company live in the US is incredibly low. And even if they do live in the US what exactly would they be getting charged with and who would care enough to charge them?
You run a business from home but do not want to reveal your personal address to the world.
You are from a country that Stripe doesn’t support but need to make use of their unique capabilities like Stripe Connect; then you might sign up for Stripe Atlas to incorporate in the USA so you can do business directly with Stripe. Your US business then needs a US physical address, i.e. a virtual office.
That you don’t need an office if your company works remotely? It's kind of overkill to have a whole office for a company with 3 people when everyone works remotely.
> Urban VPN is operated by Urban Cyber Security Inc., which is affiliated with BiScience (B.I Science (2009) Ltd.), a data broker company.
> This company has been on researchers' radar before. Security researchers Wladimir Palant and John Tuckner at Secure Annex have previously documented BiScience's data collection practices. Their research established that:
> BiScience collects clickstream data (browsing history) from millions of users
> Data is tied to persistent device identifiers, enabling re-identification
> The company provides an SDK to third-party extension developers to collect and sell user data
> BiScience sells this data through products like AdClarity and Clickstream OS
> The identical AI harvesting functionality appears in seven other extensions from the same publisher, across both Chrome and Edge:
Hmm.
> They look really legitimate on the outside
Hmm, what, no.
We have a data collection company, thriving financially on lack of privacy protections, indiscriminate collection and collating of data, connected to eight data-siphoning "Violate Privacy Network" apps.
And those apps are free... Which is seriously default sketchy if you can't otherwise identify some obviously noble incentives to offer free services/candy to strangers.
Once is happenstance, twice is coincidence, three (or eight) times is enemy action.
The only thing that could possibly make this look any worse is discovering a connection to Facebook.
Judging from their website, all links eventually point to either the VPN extension download website, or a signup link. I'm not surprised if some nation state supported APT is behind this shit.
Somewhat ironically, this article has significant amounts of AI writing in it. (I've done a lot of AI writing in my own sites, and have been learning how to smother "the voice". This article doesn't do a good job of smothering.)
I am surprised, because Google's review team rejects half of my extensions and apps.
Sometimes things don't make sense to me, like how "Uber Driver app access background location and there is no way to change that from settings" - https://developer.apple.com/forums/thread/783227
If Google cared at all about their users, they'd tell WhatsApp not to require the Contacts permission just to add names to numbers when you don't share your Contacts with the app.
Or they'd tell WhatsApp to allow granting microphone permissions for one single call, instead of requesting permanent microphone permissions. All apps that I know of respect the flow of "Ask every time", all but Meta's app.
That's all opinionated, and the latter is part of the OS, not WhatsApp. Not liking how an app works does not compare to an app exfiltrating data without your consent.
I wish there was another button on those contact permission boxes which would tell the app you've granted permissions. But when they try to read your contacts, send them randomly generated junk. Fake phone numbers. Fake names.
Or even better, mix in some real names and phone numbers but change all the other details. I want data brokers to think I live in 8 different countries. I want my email address to show up for 50 different identities. Good luck sorting that out.
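As a sketch of what that poisoning would look like (every name, number, and country code below is randomly generated or arbitrary - this illustrates the idea, not any real API for feeding data back to an app):

```python
import random

# Arbitrary pools of plausible-looking names; all output is fake.
FIRST = ["Alex", "Sam", "Jordan", "Casey", "Riley"]
LAST = ["Smith", "Garcia", "Chen", "Okafor", "Novak"]

def fake_contacts(n, seed=None):
    """Generate n junk contacts to hand a nosy app instead of the real list.

    Country codes and number ranges are arbitrary examples, chosen only to
    look superficially plausible to a data broker.
    """
    rng = random.Random(seed)
    contacts = []
    for _ in range(n):
        name = f"{rng.choice(FIRST)} {rng.choice(LAST)}"
        number = "+{} {}".format(rng.choice([1, 44, 49, 81]),
                                 rng.randint(2_000_000_000, 9_999_999_999))
        contacts.append({"name": name, "phone": number})
    return contacts
```

Mixing a handful of real-looking entries with scrambled details, as suggested above, would just mean seeding this pool with mutated copies of genuine records.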
I think what's going on there is that "While using" includes when a navigation app is running in the background, which is visible to the user (via e.g. a blue status bar pill). "Always" allows access even when it's not clear to the user that an app is running.
This might be a case of app permissions just being poorly delineated. E.g. I've seen Android apps require "location data" access just because they want to connect over bluetooth or manage WiFi or something (not entirely sure which one it was specifically) because that is actually the same permission and the wording in the permission modal is misleading.
The permissions model for browser extensions has always been backwards. You grant full access at install time, then cross your fingers that nothing changes in an update.
What we actually need is runtime permissions that fire when the extension tries to do something suspicious - like exfiltrating data to domains that aren't related to its stated function. iOS does this reasonably well for apps. Extensions should too.
The "Recommended" badge helps but it's a bandaid. If an extension needs "read and change all data on all websites" to work, maybe it shouldn't work.
A big problem is also that you can pretty much only grant permission for one specific site or for all sites, and which of those you get very much depends on which of the two options the extension uses.
For example there's no need for the "inject custom JS or CSS into websites" extensions to need permission to read and write data on every single website you visit. If you only want to use them to make a few specific sites more accessible to you that doesn't mean you're okay with them touching your online banking. Especially when most of these already let you define specific URLs or patterns each rule/script should apply to.
I understand that there are still vectors for data exfiltration when the same extension has permissions on two different sites and that "code injection as a service" is inherently risky (although cross-origin policies can already lock this down somewhat) but in 2025 I'd hope we could have a more granular permission model for browser extensions that actually supports sandboxing.
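For what it's worth, Manifest V3 already has the vocabulary for this at the manifest level - an extension can declare narrow `host_permissions` up front and put the broad pattern under `optional_host_permissions`, which the user only grants at runtime if they choose to. The extension name and hosts below are made up:

```json
{
  "manifest_version": 3,
  "name": "Site CSS Tweaks",
  "version": "1.0",
  "host_permissions": ["https://example.com/*"],
  "optional_host_permissions": ["https://*/*"]
}
```

The problem is that nothing forces developers to structure permissions this way, so most just request everything up front.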
Some people are incapable of internal thought. They have to verbalise/write down their thoughts so they can hear/read them back, and that's how they make progress. In a way, these people's brains do work like LLMs.
A certain type of person loves nothing more than to spill their guts to anyone who will listen. They don’t see their conversational partners as other equally aware entities—they are just a sounding board for whatever is in this person's head. So LLMs are incredibly appealing to these folks. LLMs never get tired or zone out or make snarky responses. Add in chatbots’ obsequious enabling, and these folks are instantly hooked.
This is exactly why we need more transparency in analytics tools. When building products that handle user data, the "free" model almost always means you're the product.
The scary part is these extensions had Google's "Featured" badge. Manual review clearly isn't enough when companies can update code post-approval. We need continuous monitoring, not just one-time vetting.
For anyone building privacy-focused tools: making your data collection transparent and your business model clear upfront is the only way to build trust. Users are getting savvier about this.
ISPs are so heavily regulated that they will give any federal or government agency free access to future and past internet connection information that is directly tied to your real identity.
Meanwhile, reputable VPN providers like Mullvad offer their service without KYC and leave the feds empty-handed when they knock on their doors.
For the same reason you trust your ISP? It handles all your internet traffic; and depending on where you live, probably has government-mandated back doors, or is willing to cooperate with arbitrary requests from law-enforcement agencies.
That's why TLS exists, after all. All Internet traffic is wiretapped.
And that's why I, personally, rent a VPS, run "ssh -D 9010 myvps" in a background, and selectively point my browser at it via proxy.pac (other apps get socksified as needed; although some stubbornly resist it, sigh).
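For anyone curious, the proxy.pac side of that setup can be tiny. A sketch (the port matches the `ssh -D 9010` above; the host list is a placeholder, and real PAC files often use helpers like `dnsDomainIs`, avoided here so the function is self-contained):

```javascript
// Route chosen hosts through the local SOCKS tunnel opened by
// `ssh -D 9010 myvps`; everything else goes out directly.
function FindProxyForURL(url, host) {
  var tunneled = ["example.com", "news.ycombinator.com"]; // placeholder hosts
  for (var i = 0; i < tunneled.length; i++) {
    if (host === tunneled[i] || host.endsWith("." + tunneled[i])) {
      return "SOCKS5 127.0.0.1:9010";
    }
  }
  return "DIRECT";
}
```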
> I don't understand why so many people are using [Cloudflare].
> "Let us handle all your internet traffic.. you can trust us.. []"
TLS does not help, when most Internet traffic is passed through a single entity, which by default will use an edge TLS certificate and re-encrypt all data passing through, so will have decrypted plain text visibility to all data transmitted.
Yeah, and in your contract with ISP you explicitly agree to file any lawsuit against them in small claims court only. Although you can probably go and complain to FCC about them?
The use case is people that are urged to view something that is blocked (torrent / adult / gambling). They want it now, and they don't want to get involved with some shady company that slaps on a 2 year contract and keeps extending indefinitely. These people instead find "free vpn" in the web store and decide to give it a try.
VPNs are just one example. How many chrome extensions do you have that you don't use all the time, like adblockers, cookie consent form handlers or dark mode?
Only if you've added a signing certificate the VPN controls to your CA chain. But at that point they don't have to do anything as complicated as you described.
TLS means “there’s a certificate”. Yeah, if a VPN/proxy can forge a certificate that the user’s browser would trust, it’s an issue.
But considering those are browser extensions, I think they can just inspect any traffic they want on the client side (if they can get such broad permissions approved, which is probably not too hard).
As someone who has witnessed BiScience tracking in the past, I am not surprised to hear that they might be involved in all this. They came up when researchers investigated the Cyberhaven compromise [1][2]. Though the correlation might not all be there, it's kind of disappointing.
Google needs to act on removing these extensions/doing more thorough code reviews. Reputability is everything, and they can be actually valuable (e.g. LastPass, my own extension Ward)
There has to be a better system. Maybe a public extension safety directory?
I’m not sure there’s much more juice to squeeze here via automated or semi-automated means. They could perhaps be doing these kind of human-in-the-loop reviews themselves for all extensions that hit a certain install count, but that’s not a popular technique at Google.
adblockers on chromium-based browsers were severely crippled by Manifest V3. they're fine with extensions (and apparently malware) as long as users can't effectively block their tracking/ads.
Why does that matter if he's not seeing ads. A severely crippled adblocker means that you would see ads during regular usage.
Additionally, Brave, a Chromium-based browser, has adblocking built into the browser itself, meaning it is not affected by WebExtension changes and does not require trusting an additional 3rd party.
That's the reason they found it: the code was in the extension. Before Manifest V3, extensions could just load external scripts and there was no way you could tell what they were actually doing.
Even if the extension isn’t malicious, it creates a new attack vector that can affect users. If whatever URL the script is remotely loaded from is compromised, now all users of that extension are vulnerable.
Most browser extensions don’t need to insert script tags that point to arbitrary URLs on the internet. You can inject scripts that are bundled with the extension (you don’t even need to use an actual script tag). This is one part of manifest v3 that I think was actually a good change - ad blockers don’t do this so I don’t think Google had an ulterior motive for this particular limitation.
That is correct. You cannot inject external scripts. You can fetch from a remote and inject through the content script, though, but the content and service worker code is known at review time.
So you can still do everything you could before, but it’s not as hidden anymore
He may have understood it, but the feelings of anger about it are so overwhelming he had to post anyway, even if it didn't perfectly flow with the conversation.
I'm glad the extension system isn't broken (e.g. extensions being hacked). This is just scammy extensions to begin with. I've been scared of extensions since they were first offered (I did like using Greasemonkey to customize everything back in the 2000's/2010's), but I can't resist Privacy Badger and uBlock Origin since they are open source (but even then it's still a risk).
So much of what's aimed at nontechnical consumers these days is full of dishonesty and abuse. Microsoft kinda turned Windows into something like this, you need OneDrive "for your protection", new telemetry and ads with every update, etc.
In much of the physical world thankfully there's laws and pretty-effective enforcement against people clubbing you on the head and taking your stuff, retail stores selling fake products and empty boxes, etc.
But the tech world is this ever-boiling global cauldron of intangible software processes and code - hard to get a handle on what to even regulate. Wish people would just be decent to each other, and that that would be culturally valued over materialism and moneymaking by any possible means. Perhaps it'll make a comeback.
This was a nearly poetic way to put it. Thank you for ascribing words to a problem that equally frustrates me.
I spend a lot of time trying to think of concrete ways to improve the situation, and would love to hear people's ideas. Instinctively I tend to agree it largely comes down to treating your users like human beings.
The situation won’t be improved for as long as an incentive structure exists that drives the degradation of the user experience.
Get as off-grid as you possibly can. Try to make your everyday use of technology as deterministic as possible. The free market punishes anyone who “respects their users”. Your best bet is some type of tech co-op funded partially by a billionaire who decided to be nice one day.
We're not totally unempowered here, as folks who know how to tech. We can build open source alternatives that are as easy to use and install as the <epithet>-ware we are trying to combat.
Part of the problem has been that there's a mountain to climb vis a vis that extra ten miles to take something that 'works for me' and turn it into 'gramps can install this and it doesn't trigger his alopecia'.
Rather, that was the problem. If you're looking for a use case for LLMs, look no further. We do actually have the capacity to build user-friendly stuff at a fraction of the time cost that we used to.
We can make the world a better place if we actually give a shit. Make things out in the open, for free, that benefit people who aren't in tech. Chip away at the monopolies by offering a competitive service because it's the right thing to do and history will vindicate you instead of trying to squeeze a buck out of each and every thing.
I'm not saying "don't do a thing for money". You need to do that. We all need to do that. But instead of your next binge watch or fiftieth foray into Zandronum on brutal difficulty, maybe badger your llm to do all the UX/UI tweaks you could never be assed to do for that app you made that one time, so real people can use it. I'm dead certain that there are folks reading this now who have VPN or privacy solutions they've cooked up that don't steal all your data and aren't going to cost you an arm and a leg. At the very least, someone reading this has a network plugin that can sniff for exfiltrated data to known compromised networks (including data brokers) - it's probably just finicky to install, highly technical, and delicate outside of your machine. Tell claude to package that shit so larry luddite can install it and reap the benefits without learning what a bash is or how to emacs.
And still, there is plenty of software that you can't run on anything but Windows. That's a major blocker at this point and projects like 'mono' and 'wine', while extremely impressive, are still not good enough to run that same software on Linux.
I'm not a spy so I don't know, but surely in most scenarios it's a lot easier to just ask someone for some data than it is hack/steal it. 25 years of social media has shown that people really don't care about what they do with their data.
Not really? In 1984 you were made an active participant of the oppression. The thought police and 5 minutes hate all required your active, enthusiastic participation.
Brave New World was apathy: the system was comfortable, Soma was freely available and there was a whole system to give disruptive elements comfortable but non disruptive engagement.
The protagonist in Brave New World spends a lot of time resenting the system but really he just resents his deformity, wanted what it denied him in society, and had no real higher criticisms of it beyond what he felt he couldn't have.
1984 has coercive elements lacking from Brave New World, but the lack of any political awareness or desire to change things among the proles was critical to the mechanisms of oppression. They were generally content with their lot, and some of the ways of ensuring that have parallels to Brave New World. Violence and hate were used more than sex and drugs but still very much as opiates of the masses: encourage and satisfy base urges to quell any desire to rebel. And sex was used to some extent: although sex was officially for procreation only, prostitution was quietly encouraged among the proles.
You might even imagine 1984's society evolving into Brave New World's as the mechanisms of oppression are gradually refined. Indeed, Aldous Huxley himself suggested as much in a letter to Orwell [1].
Huh? Of course they would: It's way less work than defeating TLS/SSL encryption or hacking into a bunch of different servers.
Bonus points if the government agency can leave most of the work to an ostensibly separate private company, while maintaining a "mutual understanding" of government favors for access.
Why wouldn't they? It isn't that you need to, just that obviously you would. You engage with the extension owners by sending an email from a director of a data company instead of as a captain of some military operation. The hit rate is going to be much higher with one of the strategies.
It would have been no less surprising to me had it been a US company, but it certainly fits the cultural stereotype of callousness that particular country has been openly displaying in recent years.
Why is a security researcher using a free VPN? The standard wisdom is "if it's free, you're the product". So you're going to proxy all your sensitive traffic through a free thing? It's not great to trust paid services with your data, never mind free stuff.
Sometimes knowing tech makes us think we're somehow better and can bypass high level wisdom.
They are not. They found it by searching for extensions that had the capability to exfiltrate data.
> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.
Some people have mentioned that this is a U.S.-incorporated company (Delaware). I recommend reading Moneyland by Oliver Bullough if you want to know more about the U.S. role as the new shell company haven.
What is the economic value of all these AI chat logs? I can see it being useful for building advertising profiles. But I wonder if it's also just sold as training data to people trying to build their own models?
Pretty easy to match up those logs with browser fingerprinting to identify the actual user. Then you have "do you want to purchase what Mr. Foo Bar is prompting the LLM?"
Thanks, the last fetched page on archive.org is from 2025-01-26 [1], removed after this date and before 2025-02-13. 155,477 users at the moment; 1-star reviews were mostly about it not working. It's interesting that the developers didn't care to remove the button directing to the Firefox add-on page for at least several months after the removal. Maybe it was some kind of PR compromise; they probably thought that listing it with a link to a broken page was better than not listing it at all.
A review page [2] mentions that this add-on is a peer-to-peer VPN; not having its own dedicated servers already makes it suspicious.
Do we know for how much that type of content sells? Not that I'm interested in entering the market, but the economics of that kind of thing are always fascinating. How much are buyers willing to pay for AI conversations? I would expect the value to be pretty low
I doubt it's the actual conversations that are valuable, but the aggregated insights.
Think: is my brand getting mentioned more in AI chats? Are people associating positive or negative feelings towards it? Are more people asking about this topic lately?
Let's assume that people are discussing medical conditions in these conversations - I think that insurance companies would be pretty interested to get this kind of data in their hands.
Nice write up. It would be great if the authors could follow up with a detailed technical walk through of how to use the various tooling to figure out what an extension is really doing.
Could one just feed the extension and a good prompt to claude to do this? Seems like automation CAN sniff this kind of stuff out pretty easily.
I think this is most likely what happened. The update/review process for extensions is broken. Apparently you can add any malicious functionality after you’re in and also keep any badges and recommendations.
> Probably not. All side effects need to go through the js side. So you can alway see where http calls are made
That can be circumvented by bundling the conversations into one POST to an API endpoint, along with a few hundred calls to several dummy endpoints to muddy the waters. Bonus points if you can make it pass for a normal update script.
It'll still show up in the end, but at this point your main goal is to delay the discovery as much as you can.
As soon as you hijack the fetch function (which cannot be done with WebAssembly alone), it's going to look suspicious, and someone who looks at this carefully enough will flag it.
What would the fallout look like if too many people start to have horror stories about how much their lives are destroyed by incriminating or downright nasty or wrong AI chat history? It'll suddenly become a tool where you can't be honest. If it's not already.
Let's say we don't trust ublock. At the very least it is still blocking ad networks which do reduce internet performance and are vectors of exploitation, so it is still adding value whether you trust it or not.
Under the hypothetical that we don't trust ublock, it would be foolish to grant it full access to all data on all websites. It would not be adding value.
> A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
Trusting Google with your privacy is like putting the fox in charge of the henhouse.
Note that in the profile of a model on OpenRouter, under Data Policy, there is a "Prompt Training" statement. Some models clearly state that prompt training is true, even for paid models.
The only extensions I have installed are dark reader and ublock origin. Would be nice if I could disable auto updating for them somehow and run local pinned versions...
From my experience, Google does not do a thorough app review. Reviewers get maybe a few minutes to review and move on due to the volume of apps awaiting review.
Note that this is a pretty blatant GDPR violation and you should report this to your local data protection agency if you are an EU resident and care about this (especially if you've used this extension). Their privacy policy claims the data collection is consent-based and that the app settings also let you revoke this consent. According to the article, the latter isn't the case and the user is never informed of the extent of the collection and the risk of sensitive or specially protected personal information (e.g. sexual orientation) being part of the data they're collecting. Their privacy policy states the collected data is filtered to remove this kind of information but that's irrelevant because processing necessarily happens after collection and the GDPR already applies at the start of that pipeline.
If Urban VPN is indeed closely affiliated with the data broker, a GDPR fine might also affect that company too given how these fines work. There is a high bar for the kind of misconduct that would result in a fine but it seems plausible that they're being knowingly and deliberately deceptive and engaging in widespread data collection that is intentionally invasive and covert. That would be a textbook example for the kind of behavior the GDPR is meant to target with fines.
The same likely applies to the other extensions mentioned in the article. Yes, "if the product is free, you are the product" but that is exactly why the GDPR exists. The problem isn't that they're harvesting user data but that they're being intentionally deceptive and misleading in their statements about this, claim they are using consent as the legal basis without having obtained it[0], and they're explicitly contradicting themselves in their claims ("we're not collecting sensitive information that would need special consideration but if we do we make sure to find it and remove it before sharing your information but don't worry because it's mostly used in aggregate except when it isn't"). Just because you expect some bruising when picking up martial arts as a hobby doesn't mean your sparring partner gets to pummel your face in when you're already knocked out.
[0]: Because "consent" seems to be a hard concept for some people to grasp: it's literally analogous to what you'd want to establish before having sex with someone (though to be fair: the laws are much more lenient about unclear consent for sex because it's less reasonable to expect it to be documented with a paper trail like you can easily do for software). I'll try to keep it SFW but my place of work is not your place of work so think carefully if you want to copy this into your next Powerpoint presentation.
Does your prospective sexual partner have any reason to strongly believe that they can't refuse your advances because doing so would limit their access to something else (e.g. you took them on a date in your car and they can't afford a taxi/uber and public transport isn't available so they rely on you to get back home, aka "the implication")? Then they can't give you voluntary consent because you're (intentionally or not) pressuring them into it. The same goes if you make it much harder for them to refuse than to agree (I can't think of a sex analogy for this because this seems obvious in direct human interactions but somehow some people still think hiding "reject all non-essential" is an option you are allowed to hide between two more steps when the "accept all" button is right there even if the law explicitly prohibits these shenanigans).
Is your prospective sexual partner underage or do they appear extremely naive (e.g. you suspect they've never had any sex ed and don't know what having sex might entail or the risks involved like pregnancy, STIs or, depending on the acts, potential injuries)? Then they probably can't give you informed consent because they don't fully understand what they're consenting to. For data processing this would be failure to disclose the nature of the collection/processing/storage that's about to happen. And no, throwing the entire 100 page privacy policy at them with a consent dialog at the start hardly counts the same way throwing a biology textbook at a minor doesn't make them able to consent.
Is your prospective sexual partner giving you mixed signals but seems to be generally okay with the idea of "taking things further"? Then you're still missing specific consent and better take things one step at a time checking in on them if they're still comfortable with the direction you're taking things before you decide to raw dog their butt (even if they might turn out to be into that). Or in software terms, it's probably better to limit the things you seek consent for to what's currently happening for the user (e.g. a checkbox on a contact form that informs them what you actually intend to do with that data specifically) rather than try to get it all in one big consent modal at the start - this also comes with the advantage that you can directly demonstrate when and how the specific consent relevant to that data was obtained when later having to justify how that data was used in case something goes wrong.
Is your now-active sexual partner in a position where they can no longer tell you to stop (e.g. because they're tied up and ball-gagged)? Then the consent you did obtain isn't revokable (and thus again invalid) because they need to be able to opt out (this is what "safe words" are for and why your dentist tells you to raise your hand where they can see it if you need them to stop during a procedure - given that it's hard to talk with someone's hands in your mouth). In software this means withdrawing consent (or "opting out") should be as easy as it was to give it in the first place - an easy solution is having a "privacy settings" screen easily accessible in the same place as the privacy policy and other mandatory information that at the very least covers everything you stuffed in that consent dialog I told you not to use, as well as anything you tucked away in other forms downstream. This also gives you a nice place to link to at every opportunity to keep your user at ease and relaxed to make the journey more enjoyable for both of you.
What sort of argument is that? Just because I need to eat (and let's be real, the developers/owners behind this app are not struggling to get food on the table) doesn't excuse me doing unethical/illegal things (and this behaviour is almost certainly illegal, in the EU at least).
The guy that holds up people for money in the alley is a human too, people forget, and needs to pay for food and a place to live. Of course they do too.
I stick to extensions that Mozilla has manually vetted as part of the Firefox recommended extensions program.
> Firefox is committed to helping protect you against third-party software that may inadvertently compromise your data – or worse – breach your privacy with malicious intent. Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.
https://support.mozilla.org/en-US/kb/recommended-extensions-...
I know that Google hates to pay human beings, but this is an area that needs human eyes on code, not just automated scans.
Yeah, IT pros and tech-aware "power" users can always take these measures, but the very availability of poorly or maliciously coded extensions and apps in popular app stores makes it a problem: normies will get swayed by the swanky features the software promises and will click past all misgivings and warnings. Social engineering attacks are impossible to prevent using technical means alone. Either a critical mass of ordinary people needs to become more safety/privacy conscious, or general-purpose computing devices will become more and more niche, as the very industry which creates these problems in the first place through poor review will also sell the solution: universal thin clients and locked-down devices, of course with the very happy cooperation of governments everywhere.
The problem is most codebases are huge - millions of lines when you include all the libraries etc.
Often they're compiled from TypeScript etc., making manual review almost impossible.
And if you demand the developer send in the raw uncompiled source, you have the difficulty of Google/Mozilla having to figure out how to compile an arbitrary project, which could use custom compilers or compilation steps.
Remember that someone malicious won't hide their malicious code in main.ts... it's gonna be deep inside a chain of libraries (which they might control too, or might have vendored).
> I stick to extensions that Mozilla has manually vetted as part of the Firefox recommended extensions program.
If you're feeling extra-paranoid, the XPI file can be unpacked (it's just a ZIP) to check over the code for anything suspicious or unreasonably complex, particularly if the browser extension is supposed to be something simple like "move the up/down vote arrows further apart on HN". :P
While that doesn't solve the overall ecosystem issue, every little bit helps. You'll know it's time to run away if extensions become closed-source blobs.
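To make the "unpack and skim" step concrete, here's a minimal sketch using Python's stdlib zipfile module. The pattern list is purely an illustrative assumption - a starting point for manual triage, not an authoritative set of malware indicators:

```python
import zipfile

# An XPI is just a ZIP archive, so the stdlib zipfile module can open it.
# These patterns are illustrative assumptions only; hits just mean
# "worth reading by hand", not "malicious".
SUSPICIOUS = ("eval(", "XMLHttpRequest", "fetch(", "atob(")

def scan_xpi(path_or_file):
    """Return (filename, pattern) pairs for quick manual triage."""
    hits = []
    with zipfile.ZipFile(path_or_file) as z:
        for name in z.namelist():
            # Only scan reviewable text files, skip images/styles etc.
            if not name.endswith((".js", ".json", ".html")):
                continue
            text = z.read(name).decode("utf-8", errors="replace")
            hits.extend((name, p) for p in SUSPICIOUS if p in text)
    return hits
```

If a tiny "move the vote arrows" extension lights up with network calls, that's exactly the kind of mismatch between stated purpose and code that deserves a closer look.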
You can also, more conveniently, plug an extension's URL into this viewer:
https://robwu.nl/crxviewer/
Funnily enough, the article mentions this extension was manually reviewed:
> A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
At some point I vetted the extensions for myself.
What I saw in the Mozilla extensions store ranged from minified code (why? it might have been useful on the web in the late 90s, but it surely is not necessary as part of an extension, which doesn't download its code from anywhere) to full-on data-stealing code (reported, and Mozilla removed it after 2 weeks or so).
I don't trust the review process one bit if they allow minified code in the store. For the same reason, "manual" review doesn't fill me with any extra warm confidence feeling. I can look at minified code manually myself, but it's just gibberish, and suspicious code is much harder to discern.
Also, I just stopped using third party extensions, except for 2 (violentmonkey, ublock), so I no longer do reviews. I had a script that would extract the XPI into a git repository before update, do a commit and show me a diff.
A friendly extension store for security-conscious users would make it easy to review the source code of an extension before hitting install or update. This is just about the most security-sensitive code that exists in the browser.
The question is, does Mozilla rigorously review every single update of every featured extension? Or did they just vet it once, and a malicious developer may now introduce data collection or similar "features" though a minor update of the extension and keep enjoying the "recommended" badge by Mozilla?
This may also be the reason for the extension being "Featured" on the Chrome Web Store: Google vetted it once, and didn't think about it for each update.
This is just spreading FUD where an answer could have been provided.
> Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.
https://support.mozilla.org/en-US/kb/recommended-extensions-...
That link doesn't answer the question though. It states that the extension is reviewed before receiving the recommended status. It does not state that updates are reviewed.
They do, and it takes longer for updates to Recommended extensions to be reviewed as a result.
This is what the Firefox add-ons team sent to me when one of my extensions was invited to the Recommended program:
> If you’re interested in Control Panel for Twitter becoming a Firefox Recommended Extension there are a couple of conditions to consider:
> 1) Mozilla staff security experts manually review every new submission of all Recommended extensions; this ensures all Recommended extensions remain compliant with AMO’s privacy and security standards. Due to this rigorous monitoring you can expect slightly longer review wait times for new version submissions (up to two weeks in some cases, though it’s usually just a few days).
> 2) Developers agree to actively maintain their Recommended extension (i.e. make timely bug fixes and/or generally tend to its ongoing maintenance). Basically we don't want to include abandoned or otherwise decaying content, so if the day arrives you intend to no longer maintain Control Panel for Twitter, we simply ask you to communicate that to us so we can plan for its removal from the program.
That's great! They should put that on the website.
> I know that Google hates to pay human beings, but this is an area that needs human eyes on code, not automated scans.
I think we need both human review and for somebody to create an antivirus engine for code that's on par with the heuristics of good AV programs.
You could probably do even better than that since you could actually execute the code, whole or piecewise, with debugging, tracing, coverage testing, fuzzing and so on.
The article states that Google has done the same for this extension as part of providing its "Featured" badge.
The same applies to code editor extensions!
The company behind this appears to be "real" and incorporated in Delaware.
> Urban Cyber Security INC
https://opencorporates.com/companies/us_de/5136044
https://www.urbancybersec.com/about-us/
I found two addresses:
> 1007 North Orange Street 4th floor Wilmington, DE 19801 US
> 510 5th Ave 3rd floor New York, NY 10036 United States
and even a phone number: +1 917-690-8380
https://www.manhattan-nyc.com/businesses/urban-cyber-securit...
They look really legitimate on the outside, to the point that there's a fair chance they're not aware what their extension is doing. Possibly they're a "victim" of this as well.
> They look really legitimate on the outside
If that looks *really legitimate* to you, then you might be easily scammed. I'm not saying they're not legitimate, but nothing that you shared is a strong signal of legitimacy.
It would take perhaps a few hundred dollars a month to maintain a business that looked exactly like this, and maybe a couple thousand to buy one that somebody else had aged ahead of time. You wouldn't have to have any actual operations. Just continuously filed corporate papers, a simple brochure website, and a couple of virtual office accounts in places so dense that people don't know the virtual address sites by heart.
Old advice, but be careful believing what you encounter on the internet!
Don't be silly. If you wanted to sue these guys, you'd have a better shot at dragging an actual person in front of a judge than for 99% of the other crap on the Chrome Web Store that doesn't provide you with more than an e-mail address.
> Old advice, but be careful believing what you encounter on the internet!
Try to not be terminally cringe either?
Don't be rude. "Real person" here might live in any country of the world.
And also, why extension for vpn? I live in a country where almost everybody uses a VPN just to watch YouTube and read Twitter, and none of my friends use strange extensions. There is open source software for that - from real VPNs like WireGuard to proxy software like nekoray/v2raytun. A browser extension is the last thing I would install to be private.
> Don't be rude.
What, there's an issue because I'm not being underhanded about it like that swatcoder guy?
> And also, why extension for vpn?
Why are you asking me that?
>> Don't be rude.
> What, there's an issue because I'm not being underhanded about it like [that] guy?
Wow you’ve put something into words here I never consciously realized is an unwritten rule. Sounds silly but yea you’re 100% right; that seems to be exactly the game we play.
For better or for worse.
> being underhanded about it like that (USER) guy?
HN guidelines: Assume good faith.
> you'll have a better shot at dragging an actual person in front of a judge than for 99% of the other crap that's on the chrome web store
Based on what? The same instinct that told you having an address and phone number makes an entity legitimate? The chance the people behind this company live in the US is incredibly low. And even if they do live in the US what exactly would they be getting charged with and who would care enough to charge them?
https://www.manhattanvirtualoffice.com/
The NY address is a virtual office.
https://themillspace.com/wilmington/
The DE address is a virtual office plus coworking facility.
Wow the virtual office concept is so beyond shady. I wonder if there are any legitimate uses of it?
Many:
You run a business from home but do not want to reveal your personal address to the world.
You are from a country that Stripe doesn't support but need to make use of their unique capabilities like Stripe Connect; then you might sign up for Stripe Atlas to incorporate in the USA so you can do business directly with Stripe. Your US business then needs a US physical address, i.e. a virtual office.
Etc
Virtual offices have been around forever and aren't really an indication of being shady necessarily.
That you don't need an office if your company works remotely? It seems overkill to keep a whole office for a company of 3 people when everyone works remotely.
Amazing.
> Urban VPN is operated by Urban Cyber Security Inc., which is affiliated with BiScience (B.I Science (2009) Ltd.), a data broker company.
> This company has been on researchers' radar before. Security researchers Wladimir Palant and John Tuckner at Secure Annex have previously documented BiScience's data collection practices. Their research established that:
> BiScience collects clickstream data (browsing history) from millions of users
> Data is tied to persistent device identifiers, enabling re-identification
> The company provides an SDK to third-party extension developers to collect and sell user data
> BiScience sells this data through products like AdClarity and Clickstream OS
> The identical AI harvesting functionality appears in seven other extensions from the same publisher, across both Chrome and Edge:
Hmm.
> They look really legitimate on the outside
Hmm, what, no.
We have a data collection company, thriving financially on a lack of privacy protections and the indiscriminate collection and collating of data, connected to eight data-siphoning "Violate Privacy Network" apps.
And those apps are free... Which is seriously default sketchy if you can't otherwise identify some obviously noble incentives to offer free services/candy to strangers.
Once is happenstance, twice is coincidence, three (or eight) times is enemy action.
The only thing that could possibly make this look any worse is discovering a connection to Facebook.
Israeli company. No doubt some Mossad front.
You can get a mailing address and phone number for like $15/mo. You can incorporate a US business for only a couple hundred dollars.
Is the agent address real?
1000 N. WEST ST. STE. 1501, WILMINGTON, New Castle, DE, 19801
It almost matches this law firm's address, but not quite.
https://www.skjlaw.com/contact-us/
Brandywine Building 1000 N. West Street, Suite 1501 Wilmington DE 19801
Being a real business doesn't necessarily mean they can be trusted. Real companies do shady stuff all the time.
This also works in reverse: shady companies do real business. While the reason might be different the end result is the same.
> Urban VPN is operated by Urban Cyber Security Inc., which is affiliated with BiScience (B.I Science (2009) Ltd.), a data broker company.
BiScience is an Israeli company.
Israel is the new Russia, I guess.
Judging from their website, all links eventually point to either the VPN extension download website, or a signup link. I'm not surprised if some nation state supported APT is behind this shit.
Somewhat ironically, this article has significant amounts of AI writing in it. (I've done a lot of AI writing in my own sites, and have been learning how to smother "the voice". This article doesn't do a good job of smothering.)
I am surprised, because the Google review team rejects half of my extensions and apps.
Sometimes things don't make sense to me, like how "Uber Driver app access background location and there is no way to change that from settings" - https://developer.apple.com/forums/thread/783227
If Google would care at all for their users, they'd tell WhatsApp to not require the use of the Contacts permission only to add names to numbers when you don't share the Contacts with the App.
Or they'd tell WhatsApp to allow granting microphone permissions for one single call, instead of requesting permanent microphone permissions. All apps that I know of respect the flow of "Ask every time", all but Meta's app.
Google just doesn't care.
That's all opinionated, and the latter is part of the OS, not WhatsApp. Not liking how an app works does not compare to an app exfiltrating data without your consent.
They are not comparing it to the data issue. The original issue led to further conversation. It's a valid concern and they make a good point.
I wish there was another button on those contact permission boxes which would tell the app you've granted permissions. But when they try to read your contacts, send them randomly generated junk. Fake phone numbers. Fake names.
Or even better, mix in some real names and phone numbers but change all the other details. I want data brokers to think I live in 8 different countries. I want my email address to show up for 50 different identities. Good luck sorting that out.
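For illustration only, generating that kind of plausible-but-junk contact data is trivial. The name pools and the +1 phone format below are arbitrary assumptions for the sketch, not anything a real "decoy permissions" feature prescribes:

```python
import random

def junk_contacts(n, seed=None):
    """Generate plausible-looking fake contacts to feed a scraper.

    Name pools and the +1 phone format are arbitrary choices for
    this sketch; a real implementation would localize both.
    """
    rng = random.Random(seed)
    first = ["Alex", "Sam", "Jordan", "Casey", "Robin", "Dana"]
    last = ["Smith", "Garcia", "Chen", "Patel", "Novak", "Okafor"]
    return [
        {
            "name": f"{rng.choice(first)} {rng.choice(last)}",
            "phone": "+1" + "".join(rng.choice("0123456789") for _ in range(10)),
        }
        for _ in range(n)
    ]
```

The generation is the easy part; the missing piece is OS support for serving this decoy address book to an app instead of the real one.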
I think what's going on there is that "While using" includes when a navigation app is running in the background, which is visible to the user (via e.g. a blue status bar pill). "Always" allows access even when it's not clear to the user that an app is running.
The developer documentation is actually pretty clear about this: https://developer.apple.com/documentation/bundleresources/ch...
This might be a case of app permissions just being poorly delineated. E.g. I've seen Android apps require "location data" access just because they want to connect over bluetooth or manage WiFi or something (not entirely sure which one it was specifically) because that is actually the same permission and the wording in the permission modal is misleading.
The permissions model for browser extensions has always been backwards. You grant full access at install time, then cross your fingers that nothing changes in an update.
What we actually need is runtime permissions that fire when the extension tries to do something suspicious - like exfiltrating data to domains that aren't related to its stated function. iOS does this reasonably well for apps. Extensions should too.
The "Recommended" badge helps but it's a bandaid. If an extension needs "read and change all data on all websites" to work, maybe it shouldn't work.
A big problem is also that you can pretty much only grant permission for one specific site or all sites and this very much depends on which of those two options the extension uses.
For example there's no need for the "inject custom JS or CSS into websites" extensions to need permission to read and write data on every single website you visit. If you only want to use them to make a few specific sites more accessible to you that doesn't mean you're okay with them touching your online banking. Especially when most of these already let you define specific URLs or patterns each rule/script should apply to.
I understand that there are still vectors for data exfiltration when the same extension has permissions on two different sites and that "code injection as a service" is inherently risky (although cross-origin policies can already lock this down somewhat) but in 2025 I'd hope we could have a more granular permission model for browser extensions that actually supports sandboxing.
> A few weeks ago, I was wrestling with a major life decision. Like I've grown used to doing, I opened Claude
Is this where we’re at with AI?
People used to cast lots to make major life decisions.
Putting a token predictor in the mix — especially one incapable of any actual understanding — seems like a natural evolution.
Absolved of burden of navigating our noisy, incomplete and dissonant thoughts, we can surrender ourselves to the oracle and just obey.
If this is surprising to you then your circle is fairly unusual.
For example HBR recently reported the number 1 use for ChatGPT is "Therapy/companionship"
https://archive.is/Y76c5
Some people are incapable of internal thought. They have to verbalise/write down their thoughts, so they can hear/read them back, and that's how they make progress. In a way, these people's brains do work like LLMs.
Delegating life decisions to AI is obviously quite stupid but it can really help lay out and question your thoughts even if it's obviously biased.
A certain type of person loves nothing more than to spill their guts to anyone who will listen. They don’t see their conversational partners as other equally aware entities—they are just a sounding board for whatever is in this person's head. So LLMs are incredibly appealing to these folks. LLMs never get tired or zone out or make snarky responses. Add in chatbots’ obsequious enabling, and these folks are instantly hooked.
Do you just mean external vs internal processing/thinking?
This is exactly why we need more transparency in analytics tools. When building products that handle user data, the "free" model almost always means you're the product.
The scary part is these extensions had Google's "Featured" badge. Manual review clearly isn't enough when companies can update code post-approval. We need continuous monitoring, not just one-time vetting.
For anyone building privacy-focused tools: making your data collection transparent and your business model clear upfront is the only way to build trust. Users are getting savvier about this.
I don't understand why so many people are using / trusting VPNs
"Let us handle all your internet traffic.. you can trust us.. we're free!"
No thank you.
ISPs are so heavily regulated that they will give any federal or government agency free access to future and past internet connection information that is directly tied to your real identity.
Meanwhile, reputable VPN providers like Mullvad offer their service without KYC and leave the feds empty-handed when they knock on their doors.
https://mullvad.net/en/blog/mullvad-vpn-was-subject-to-a-sea...
For the same reason you trust your ISP? It handles all your internet traffic; and depending on where you live, probably has government-mandated back doors, or is willing to cooperate with arbitrary requests from law-enforcement agencies.
That's why TLS exists, after all. All Internet traffic is wiretapped.
Because I pay the ISP, it is heavily regulated, and they actually make a lot of money from being an ISP?
I'd be significantly more suspicious by default of ISPs that charge no money.
> That's why TLS exists, after all.
That protects you if you're using standard methods to connect. Installed software gets to bypass it.
And that's why I, personally, rent a VPS, run "ssh -D 9010 myvps" in the background, and selectively point my browser at it via proxy.pac (other apps get socksified as needed, although some stubbornly resist it, sigh).
But it's cumbersome.
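A minimal proxy.pac for that setup might look like the following. The domain is a placeholder assumption; the SOCKS port matches the `ssh -D 9010` flag from the comment above:

```javascript
// Hypothetical PAC file: tunnel only selected hosts through the local
// SOCKS proxy opened by `ssh -D 9010 myvps`; everything else goes
// out directly. ".example.org" is a placeholder for the sites you
// actually want routed through the VPS.
function FindProxyForURL(url, host) {
  if (dnsDomainIs(host, ".example.org"))
    return "SOCKS5 127.0.0.1:9010";
  return "DIRECT";
}
```

Browsers call `FindProxyForURL` for every request, so the routing decision stays per-site rather than all-or-nothing like a system-wide VPN.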
You should run VPN on your gateway instead.
> I don't understand why so many people are using [Cloudflare].
> "Let us handle all your internet traffic.. you can trust us.. []"
TLS does not help when most Internet traffic is passed through a single entity, which by default will use an edge TLS certificate and re-encrypt all data passing through, and so will have decrypted plain-text visibility into all data transmitted.
I have a contract with my ISP, I can know who runs the company and I can sue the company if they violate anything they promised.
Yeah, and in your contract with the ISP you explicitly agree to file any lawsuit against them in small claims court only. Although you can probably go and complain to the FCC about them?
TLS doesn't hide IP addresses.
A lot of people from poor countries where they can't access a lot of websites/services and also can't pay for a VPN use these "free" VPNs
but other than that I would never trust anything other than Mullvad/IVPN/ProtonVPN
The use case is people that are urged to view something that is blocked (torrent / adult / gambling). They want it now, and they don't want to get involved with some shady company that slaps on a 2 year contract and keeps extending indefinitely. These people instead find "free vpn" in the web store and decide to give it a try.
VPNs are just one example. How many chrome extensions do you have that you don't use all the time, like adblockers, cookie consent form handlers or dark mode?
Yeah free VPN is totally a problem, but there's TLS so at least those users aren't getting their bank account information stolen.
TLS works when the app is installed somewhere else, but not in the browser itself. The browser actually handles TLS termination.
Does TLS mean certificate pinning? Can't a VPN alter DNS queries to return a proxied version of your bank's website, using a forged certificate?
Only if you've added a signing certificate the VPN controls to your CA chain. But at that point they don't have to do anything as complicated as you described.
TLS means “there’s a certificate”. Yeah, if a VPN/proxy can forge a certificate that the user’s browser would trust, it’s an issue.
But considering those are browser extensions, I think they can just inspect any traffic they want on the client side (if they can get such broad permissions approved, which is probably not too hard).
As someone who has witnessed BiScience tracking in the past, I am not surprised to hear that they might be involved in all this. They came up when researchers investigated the Cyberhaven compromise [1][2]. Though the correlation might not all be there, it's kind of disappointing.
[1] https://secureannex.com/blog/cyberhaven-extension-compromise.... [2] https://secureannex.com/blog/sclpfybn-moneitization-scheme/ (referenced in the article)
Google needs to act on removing these extensions/doing more thorough code reviews. Reputability is everything, and they can be actually valuable (e.g. LastPass, my own extension Ward)
There has to be a better system. Maybe a public extension safety directory?
I’m not sure there’s much more juice to squeeze here via automated or semi-automated means. They could perhaps be doing these kind of human-in-the-loop reviews themselves for all extensions that hit a certain install count, but that’s not a popular technique at Google.
Do you think Google wants to have the extensions system, given that this is how people block ads?
Adblockers on Chromium-based browsers were severely crippled by Manifest V3. They're fine with extensions (and apparently malware) as long as users can't effectively block their tracking/ads.
Adblockers are still working fine though? I’m on chrome with ublock and I’m not seeing any ads.
you're not using ublock, you're using ublock lite. it cannot do dynamic filtering, script blocking, or url parameter removal, among other limitations.
Why does that matter if he's not seeing ads? A severely crippled adblocker would mean seeing ads during regular usage.
Additionally, Brave, a Chromium-based browser, has ad blocking built into the browser itself, meaning it is not affected by WebExtension changes and does not require trusting an additional third party.
Tracking is also very important. Blocking scripts is very useful
I wouldn’t be surprised if it goes away - it’s very “old Google”. We’re moving more towards walled gardens.
Google is doing code review on extensions?
I’m not sure, but whenever I cut a new release I upload my extension code and it goes through a review period before they publish.
Is this even a problem that code review could find? Once they have your conversation data what happens then isn't part of the plug-in.
I wish Congress spent as much time fighting about issues like this vs trying to break up Google. This is far more impact.
Articles like this do a decent job of bringing awareness, but we all know Google will do absolutely nothing
I thought manifest v3 was supposed to make chrome extensions secure?
It's the reason they found it: the code was in the extension. Before Manifest V3, extensions could just load external scripts and there's no way you could tell what they were actually doing.
> extensions could just load external scripts and there's no way you could tell what they were actually doing.
I do think security researchers would be able to figure out what scripts are downloaded and run.
Regardless, none of this seems to matter to end users whether the script is in the extension or external.
Even if the extension isn’t malicious, it creates a new attack vector that can affect users. If whatever URL the script is remotely loaded from is compromised, now all users of that extension are vulnerable.
Nothing stopping server-side logic: if request.ip != myvictim, serve no malicious payload.
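The selective-serving trick described above can be sketched in a few lines of Node-style JavaScript. Everything here is illustrative — `TARGET_IPS`, the example addresses, and the payload strings are made up for the sketch, not taken from any real extension:

```javascript
// Sketch: a server decides per-request whether to serve the malicious
// payload. Anyone reviewing the extension from a non-targeted address
// (e.g. a researcher or Google's review infrastructure) only ever sees
// the benign response.
const TARGET_IPS = new Set(["203.0.113.7"]); // hypothetical victim address

function selectPayload(clientIp) {
  // Targets get the real payload; everyone else gets an innocuous script.
  return TARGET_IPS.has(clientIp)
    ? "maliciousPayload();"
    : "console.log('nothing to see here');";
}
```

This is exactly why "the code passed review" means little for any extension that fetches code or instructions at runtime: the server can show reviewers one thing and victims another.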
Wait, does that mean Manifest v3 is so neutered that it can't load a `<script>` tag into the page if an extension needed to?
If so, I feel like something that limited is hardly even a browser extension interface in the traditional sense.
Most browser extensions don’t need to insert script tags that point to arbitrary URLs on the internet. You can inject scripts that are bundled with the extension (you don’t even need to use an actual script tag). This is one part of manifest v3 that I think was actually a good change - ad blockers don’t do this so I don’t think Google had an ulterior motive for this particular limitation.
That is correct. You cannot inject external scripts. You can fetch from a remote server and inject through the content script, though, but the content and service worker code is known at review time.
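For reference, this is roughly what that looks like in an MV3 manifest — the `js` entries under `content_scripts` must be file paths bundled inside the extension package, not remote URLs, which is why reviewers see the injected code at submission time (filenames and match patterns here are illustrative):

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["https://example.com/*"],
      "js": ["content.js"]
    }
  ]
}
```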
So you can still do everything you could before, but it’s not as hidden anymore
Let me ask you this way: How do you think they make money?
I believe you may be missing the sarcasm of the post you are responding to.
He may have understood it, but the feelings of anger about it are so overwhelming he had to post anyway, even if it didn't perfectly flow with the conversation.
I’m here to inform you that you perhaps missed the second-order sarcasm of the post you responded to. Hopefully the chain ends here.
I am afraid you may have missed a third order of sarcasm. It's sometimes called Incepticasm.
I'm glad the extension system isn't broken (e.g. extensions being hacked). This is just scammy extensions to begin with. I've been scared of extensions since they were first offered (I did like using Greasemonkey to customize everything back in the 2000s/2010s), but I can't resist Privacy Badger and uBlock Origin since they are open source (but even then it's still a risk).
So much of what's aimed at nontechnical consumers these days is full of dishonesty and abuse. Microsoft kinda turned Windows into something like this, you need OneDrive "for your protection", new telemetry and ads with every update, etc.
In much of the physical world thankfully there's laws and pretty-effective enforcement against people clubbing you on the head and taking your stuff, retail stores selling fake products and empty boxes, etc.
But the tech world is this ever-boiling global cauldron of intangible software processes and code - hard to get a handle on what to even regulate. Wish people would just be decent to each other, and that that would be culturally valued over materialism and moneymaking by any possible means. Perhaps it'll make a comeback.
This was a nearly poetic way to put it. Thank you for ascribing words to a problem that equally frustrates me.
I spend a lot of time trying to think of concrete ways to improve the situation, and would love to hear people's ideas. Instinctively I tend to agree it largely comes down to treating your users like human beings.
The situation won’t be improved for as long as an incentive structure exists that drives the degradation of the user experience.
Get as off-grid as you possibly can. Try to make your everyday use of technology as deterministic as possible. The free market punishes anyone who “respects their users”. Your best bet is some type of tech co-op funded partially by a billionaire who decided to be nice one day.
We're not totally unempowered here, as folks who know how to tech. We can build open source alternatives that are as easy to use and install as the <epithet>-ware we are trying to combat.
Part of the problem has been that there's a mountain to climb vis a vis that extra ten miles to take something that 'works for me' and turn it into 'gramps can install this and it doesn't trigger his alopecia'.
Rather, that was the problem. If you're looking for a use case for LLMs, look no further. We do actually have the capacity to build user-friendly stuff at a fraction of the time cost that we used to.
We can make the world a better place if we actually give a shit. Make things out in the open, for free, that benefit people who aren't in tech. Chip away at the monopolies by offering a competitive service because it's the right thing to do and history will vindicate you instead of trying to squeeze a buck out of each and every thing.
I'm not saying "don't do a thing for money". You need to do that. We all need to do that. But instead of your next binge watch or fiftieth foray into Zandronum on brutal difficulty, maybe badger your llm to do all the UX/UI tweaks you could never be assed to do for that app you made that one time, so real people can use it. I'm dead certain that there are folks reading this now who have VPN or privacy solutions they've cooked up that don't steal all your data and aren't going to cost you an arm and a leg. At the very least, someone reading this has a network plugin that can sniff for exfiltrated data to known compromised networks (including data brokers) - it's probably just finicky to install, highly technical, and delicate outside of your machine. Tell claude to package that shit so larry luddite can install it and reap the benefits without learning what a bash is or how to emacs.
And still, there is plenty of software that you can't run on anything but Windows. That's a major blocker at this point and projects like 'mono' and 'wine', while extremely impressive, are still not good enough to run that same software on Linux.
[flagged]
I would figure state actors don’t need to go through the trouble of a browser extension. But, yeah.
I'm not a spy so I don't know, but surely in most scenarios it's a lot easier to just ask someone for some data than it is hack/steal it. 25 years of social media has shown that people really don't care about what they do with their data.
Wasn't there a comment on this phenomenon along the lines "we were so afraid of 1984 but what we really got was Brave New World"?
The apathy of the oppressed is a core theme of 1984.
Not really? In 1984 you were made an active participant in the oppression. The Thought Police and the Two Minutes Hate all required your active, enthusiastic participation.
Brave New World was apathy: the system was comfortable, Soma was freely available, and there was a whole system to give disruptive elements comfortable but non-disruptive engagement.
The protagonist in Brave New World spends a lot of time resenting the system but really he just resents his deformity, wanted what it denied him in society, and had no real higher criticisms of it beyond what he felt he couldn't have.
1984 has coercive elements lacking from Brave New World, but the lack of any political awareness or desire to change things among the proles was critical to the mechanisms of oppression. They were generally content with their lot, and some of the ways of ensuring that have parallels to Brave New World. Violence and hate were used more than sex and drugs but still very much as opiates of the masses: encourage and satisfy base urges to quell any desire to rebel. And sex was used to some extent: although sex was officially for procreation only, prostitution was quietly encouraged among the proles.
You might even imagine 1984's society evolving into Brave New World's as the mechanisms of oppression are gradually refined. Indeed, Aldous Huxley himself suggested as much in a letter to Orwell [1].
[1] https://gizmodo.com/read-aldous-huxleys-review-of-1984-he-se...
Huh? Of course they would: It's way less work than defeating TLS/SSL encryption or hacking into a bunch of different servers.
Bonus points if the government agency can leave most of the work to an ostensibly separate private company, while maintaining a "mutual understanding" of government favors for access.
Why wouldn't they? It isn't that you need to, just that obviously you would. You engage with the extension owners by sending an email from a director of a data company instead of as a captain of some military operation. The hit rate is going to be much higher with one of the strategies.
Download Valley strikes again!
How did I know this was an israeli company just by how unethical they are at scale?
It would have been no less surprising to me had it been a US company, but it certainly fits the cultural stereotype of callousness that particular country has been openly displaying in recent years.
And what are the odds that mossad are getting access to this data?
Why is a security researcher using a free VPN? The standard wisdom is "if it's free, you're the product." So you're going to proxy all your sensitive traffic through a free thing? It's not great to trust paid services with your data, never mind free stuff.
Sometimes knowing tech makes us think we're somehow better and can bypass high level wisdom.
They are not. They found it by searching for extensions that had the capability to exfiltrate data.
> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.
Some people have mentioned that this is a U.S.-incorporated company (Delaware). I recommend reading Moneyland by Oliver Bullough if you want to know more about the U.S.'s role as the new shell-company haven.
The island states have been dethroned.
What is the economic value of all these AI chat logs? I can see it being useful for developing advertising profiles. But I wonder if it's also just sold as training data for people trying to build their own models?
Not just advertising but market research. Loads of people want to know exactly what kinds of questions people are asking these chatbots.
Pretty easy to match up those logs with browser fingerprinting to identify the actual user. Then you have "do you want to purchase what Mr. Foo Bar is prompting the LLM?"
lol, this Urban VPN addon was available for Firefox too but got removed at some point. https://old.reddit.com/r/firefox/comments/1jb4ura/what_happe...
Thanks, the last fetched page on archive.org is from 2025-01-26 [1]; it was removed after this date and before 2025-02-13. It had 155,477 users at the time, and the one-star reviews were mostly about it not working. It's interesting that the developers didn't bother to remove the button directing to the Firefox add-on page for at least several months after the removal. Maybe it was some kind of PR compromise: they probably thought that listing it with a link to a broken page was better than not listing it at all.
A review page [2] mentions that this add-on is a peer-to-peer VPN rather than one with its own dedicated servers, which already makes it suspicious.
[1] https://web.archive.org/web/20250126133131/https://addons.mo...
[2] https://www.vpnmentor.com/reviews/urban-vpn/
Only those users that were stupid enough to "converse" with their chatbot.
Do we know for how much that type of content sells? Not that I'm interested in entering the market, but the economics of that kind of thing are always fascinating. How much are buyers willing to pay for AI conversations? I would expect the value to be pretty low
I doubt it's the actual conversations that are valuable so much as the aggregated insights.
Think: is my brand getting mentioned more in AI chats? Are people associating positive or negative feelings towards it? Are more people asking about this topic lately?
Let's assume that people are discussing medical conditions in these conversations - I think that insurance companies would be pretty interested to get this kind of data in their hands.
Nice write up. It would be great if the authors could follow up with a detailed technical walk through of how to use the various tooling to figure out what an extension is really doing.
Could one just feed the extension and a good prompt to claude to do this? Seems like automation CAN sniff this kind of stuff out pretty easily.
> This means a human at Google reviewed Urban VPN Proxy and concluded it met their standards.
Or that the review happened before the code harvested all the LLM conversations and never got reviewed after it was updated.
I think this is most likely what happened. The update/review process for extensions is broken. Apparently you can add any malicious functionality after you’re in and also keep any badges and recommendations.
Is the use of WebAssembly going to make spotting these malicious extensions harder?
Probably not. All side effects need to go through the JS side, so you can always see where HTTP calls are made.
> Probably not. All side effects need to go through the JS side, so you can always see where HTTP calls are made.
That can be circumvented by bundling the conversations into one POST to an API endpoint, along with a few hundred calls to several dummy endpoints to muddy the waters. Bonus points if you can make it look like a normal update script.
It'll still show up in the end, but at this point your main goal is to delay the discovery as much as you can.
As soon as you hijack the fetch function (which cannot be done with WebAssembly alone), it's going to look suspicious, and someone who looks at this carefully enough will flag it.
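Wrapping a fetch implementation is both what a hijacking extension does and what an auditor can do to log every outgoing request. Here is a minimal, self-contained sketch — `instrumentFetch` and the stub are hypothetical names for illustration, and real review tooling is considerably more involved:

```javascript
// Sketch: wrap a fetch implementation so every call destination is
// recorded before the request is delegated. An auditor can use the same
// trick to log where an extension phones home.
function instrumentFetch(realFetch, log) {
  return function (url, options) {
    log.push(String(url)); // record the destination first
    return realFetch(url, options);
  };
}

// Example with a stubbed fetch so nothing actually hits the network:
const calls = [];
const fetchStub = async () => ({ ok: true });
const wrappedFetch = instrumentFetch(fetchStub, calls);
wrappedFetch("https://example.com/api");
```

Because any WebAssembly module still has to reach the network through a JS import like this, the call sites remain visible — which is why hijacking `fetch` tends to stand out to anyone who looks carefully.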
Wasn't the whole coercion Google did around Manifest V3 in the name of security?
How is it possible to have extensions this egregiously malicious in the new system?
What would the fallout look like if too many people start to have horror stories about how much their lives were destroyed by incriminating or downright nasty or wrong AI chat history? It'll suddenly become a tool where you can't be honest. If it's not already.
Would using native AI apps only prevent this? I think so right?
Which "AI" has a native app?
Or you mean the web sites packed with a copy of chromium?
Correct. The article is about Chrome and MS Edge browser extensions.
Oh, a free of cost vpn extension that requires access to all sites and data is somehow spyware, color me surprised.
With those extensions the user's data and internet are the product, most if not all are also selling residential IP access for scrapers, bots, etc.
Good thing Google is protecting users by taking down such harmful extensions as ublock origin instead.
uBlock requires access to all sites and data. Maybe they are trustworthy, but who really knows?
Let's say we don't trust ublock. At the very least it is still blocking ad networks which do reduce internet performance and are vectors of exploitation, so it is still adding value whether you trust it or not.
Under the hypothetical that we don't trust ublock, it would be foolish to grant it full access to all data on all websites. It would not be adding value.
Yeah — they’d be selling enhanced versions of that data to every site they blocked, and then some. I very much doubt they are.
I mean, I don't trust uBlock, for what it's worth. I just disable JavaScript by default, which has pretty much the same effect.
> And then an uncomfortable thought: what if someone was reading all of this?
> The thought didn't let go. As a security researcher, I have the tools to answer that question.
What huh, no you don't! As a security researcher you should know better!
> Exactly the kind of tool someone installs when they want to protect themselves online.
No. When you want to increase your security, you install fewer tools.
Each tool increases your exposure. Why is the security industry full of people who don't get this?
Is this criminally prosecutable?
Can someone just AI all the privacy policies please and tell us who else is pranking?
If the business model isn't obvious, you are the product
"And then an uncomfortable thought: what if someone was reading all of this?"
If you really are a security researcher then that's not true. You already know all this.
> A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
Trusting Google with your privacy is like putting the fox in charge of the henhouse.
Am I just paranoid, or is OpenRouter the next ticking bomb headed for a privacy explosion? What is their business model anyway?
>What is their business model anyway?
They take a 5.5% fee whenever you buy credits. There's also a discount for opting-in to share your prompts for training.
Note that in a model's profile on OpenRouter, under Data Policy, there is a "Prompt Training" field. Some models clearly state that prompt training is enabled, even for paid models.
If you want a VPN you can trust, deploy your own with AlgoVPN: https://github.com/trailofbits/algo
I prefer WG-Easy (https://github.com/wg-easy/wg-easy), which uses a Docker container, not ansible.
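For anyone curious what "deploy your own" looks like, spinning up wg-easy is roughly a single `docker run`. This is a hedged sketch from memory — check the project's README for the current image name, ports, and environment variables, as they may have changed:

```shell
docker run -d \
  --name=wg-easy \
  -e WG_HOST=<your-server-public-ip> \
  -v ~/.wg-easy:/etc/wireguard \
  -p 51820:51820/udp \
  -p 51821:51821/tcp \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  --restart unless-stopped \
  ghcr.io/wg-easy/wg-easy
```

The UDP port carries the WireGuard tunnel itself; the TCP port serves the admin web UI, which you should firewall off from the public internet.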
Is this the same Google that is preventing us from installing unapproved software on our phones?
With hardcoded flags like “sendClaudeMessages” and “sendChatgptMessages”, they weren’t even trying to hide it.
I treat extensions like they're all capable of privileged local code execution. My selection is very vetted and very small.
The only extensions I have installed are dark reader and ublock origin. Would be nice if I could disable auto updating for them somehow and run local pinned versions...
Get the source code and manually pack your own unsigned web-ext’s.
Add-ons Manager -> (click the add-on in question) -> change "Allow automatic updates" to "Off"
(for firefox/derivatives anyways...)
Same here, uBlock Origin and EFF's Privacy Badger are the only extensions I trust enough to install.
Ditto, plus 1pass / BitWarden.
8 million users on sketchy VPN extensions.
70 thousand users on what I would actually call "privacy" extensions.
Bit of a misleading title then.
From my experience, Google does not do a thorough app review. Reviewers get maybe a few minutes to review and move on due to the volume of apps awaiting review.
I imagine this would be a great use case for AI helping out?
I'm thinking of installing the extension in a sandbox and then use a local agent to have endless fake conversations with it
“There’s too much human harmful code to review and too few human reviewers.”
“I know, let’s have an AI do all the work for us instead. Let’s take a coffee break.”
No way that could backfire... Prompt injection is a solved problem right?
The footer animation of koi.ai is so cool.
> A free VPN promising privacy and security.
If you are not paying for the product, you are the product.
ctrl-f israel: 1 result found
4*
2*
There were these two people.
And um, a boy and a girl.
...
Anyway, the thing was that one day they started acting kinda funny. Kinda, weird.
They started being seen exchanging tokens of affection.
And it was rumoured they were engaging in...
Note that this is a pretty blatant GDPR violation, and you should report this to the local data protection agency if you are an EU resident and care about this (especially if you've used this extension). Their privacy policy claims the data collection is consent-based and that the app settings also let you revoke this consent. According to the article, the latter isn't the case, and the user is never informed of the extent of the collection or of the risk of sensitive or specially protected personal information (e.g. sexual orientation) being part of the data they're collecting. Their privacy policy states the collected data is filtered to remove this kind of information, but that's irrelevant because processing necessarily happens after collection, and the GDPR already applies at the start of that pipeline.
If Urban VPN is indeed closely affiliated with the data broker, a GDPR fine might affect that company too, given how these fines work. There is a high bar for the kind of misconduct that would result in a fine, but it seems plausible that they're being knowingly and deliberately deceptive and engaging in widespread data collection that is intentionally invasive and covert. That would be a textbook example of the kind of behavior the GDPR is meant to target with fines.
The same likely applies to the other extensions mentioned in the article. Yes, "if the product is free, you are the product," but that is exactly why the GDPR exists. The problem isn't that they're harvesting user data but that they're being intentionally deceptive and misleading in their statements about this, claim they are using consent as the legal basis without having obtained it[0], and are explicitly contradicting themselves in their claims ("we're not collecting sensitive information that would need special consideration, but if we do we make sure to find it and remove it before sharing your information, but don't worry because it's mostly used in aggregate except when it isn't"). Just because you expect some bruising when picking up martial arts as a hobby doesn't mean your sparring partner gets to pummel your face in when you're already knocked out.
[0]: Because "consent" seems to be a hard concept for some people to grasp: it's literally analogous to what you'd want to establish before having sex with someone (though to be fair: the laws are much more lenient about unclear consent for sex because it's less reasonable to expect it to be documented with a paper trail like you can easily do for software). I'll try to keep it SFW but my place of work is not your place of work so think carefully if you want to copy this into your next Powerpoint presentation.
Does your prospective sexual partner have any reason to strongly believe that they can't refuse your advances because doing so would limit their access to something else (e.g. you took them on a date in your car and they can't afford a taxi/uber and public transport isn't available so they rely on you to get back home, aka "the implication")? Then they can't give you voluntary consent because you're (intentionally or not) pressuring them into it. The same goes if you make it much harder for them to refuse than to agree (I can't think of a sex analogy for this because this seems obvious in direct human interactions but somehow some people still think hiding "reject all non-essential" is an option you are allowed to hide between two more steps when the "accept all" button is right there even if the law explicitly prohibits these shenanigans).
Is your prospective sexual partner underage or do they appear extremely naive (e.g. you suspect they've never had any sex ed and don't know what having sex might entail or the risks involved like pregnancy, STIs or, depending on the acts, potential injuries)? Then they probably can't give you informed consent because they don't fully understand what they're consenting to. For data processing this would be failure to disclose the nature of the collection/processing/storage that's about to happen. And no, throwing the entire 100 page privacy policy at them with a consent dialog at the start hardly counts the same way throwing a biology textbook at a minor doesn't make them able to consent.
Is your prospective sexual partner giving you mixed signals but seems to be generally okay with the idea of "taking things further"? Then you're still missing specific consent and better take things one step at a time checking in on them if they're still comfortable with the direction you're taking things before you decide to raw dog their butt (even if they might turn out to be into that). Or in software terms, it's probably better to limit the things you seek consent for to what's currently happening for the user (e.g. a checkbox on a contact form that informs them what you actually intend to do with that data specifically) rather than try to get it all in one big consent modal at the start - this also comes with the advantage that you can directly demonstrate when and how the specific consent relevant to that data was obtained when later having to justify how that data was used in case something goes wrong.
Is your now-active sexual partner in a position where they can no longer tell you to stop (e.g. because they're tied up and ball-gagged)? Then the consent you did obtain isn't revokable (and thus again invalid) because they need to be able to opt out (this is what "safe words" are for and why your dentist tells you to raise your hand where they can see it if you need them to stop during a procedure - given that it's hard to talk with someone's hands in your mouth). In software this means withdrawing consent (or "opting out") should be as easy as it was to give it in the first place - an easy solution is having a "privacy settings" screen easily accessible in the same place as the privacy policy and other mandatory information that at the very least covers everything you stuffed in that consent dialog I told you not to use, as well as anything you tucked away in other forms downstream. This also gives you a nice place to link to at every opportunity to keep your user at ease and relaxed to make the journey more enjoyable for both of you.
TLDR: AI company uses AI to write blog post about abusive AI chrome extension
(Yes it really is AI-written / AI-assisted. If your AI detectors don’t go off when you read it you need to be retrained.)
Deleted.
What sort of argument is that? Just because I need to eat (and let's be real, the developers/owners behind this app are not struggling to put food on the table) doesn't excuse doing unethical/illegal things (and this behaviour is almost certainly illegal, in the EU at least).
There is a “contradictions” section that clearly explains why this is a scam of the highest order.
There are honest ways to make a living. In this case honest is “being transparent” about the way data is handled instead of using newspeak.
The guy that holds up people for money in the alley is a human too, people forget, and needs to pay for food and a place to live. Of course they do too.
It's ridiculous how many comments are being removed.