Companies of the 20th century certainly weren't more ethical. (Though a few select tech companies seem to be intent on proving the opposite.)
But it's not really a fascism thing. While fascism does love the oppression of women, and the current crop of fascists have a notable connection to the Epstein case, this is a lot more boring.
Sam Altman's not a fascist; he's a wet noodle who sucks up to the Trump administration for money. He's not even good at it. The way his company handled CSAM, along with the accusations from his sister, does cast a shadow over Altman, but all other evidence suggests he's just a moron acting recklessly: not identifying the problem ahead of time, and responding poorly once it surfaced.
In the case of Meta, we know who Zuckerberg is. The company got its start as, in crude terms, a sex-pest website: the original "Facemash" site was forcibly taken down by Harvard. This is not some new consequence of the turn to fascism; Zuckerberg has always been like this, and the actions taken against him were clearly not enough to keep the company culture from following his precedent.
Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.
I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.
I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.
I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.
Unfortunately this backlash is also the perfect cover for authoritarian government action - they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.
Sounds about right. If you know someone who uses these smart glasses, it's important not to tolerate them whatsoever. Don't speak with them or interact with them. I wouldn't even recommend being in their presence.
A mostly-solitary sporting event (or one where you know all the other participants and can get their consent to record beforehand) seems like a reasonable use of these sorts of glasses. I wouldn’t personally give consent just as a sort of privacy reflex, but it really depends on your social circle.
There's also nothing stopping us from stigmatizing the use of smartphones in public. Even a slight discouragement of it would be progress. It doesn't have to be all or nothing.
Most people don't run around holding out their smartphone directly in front of them. It has to be pointed at the subject, and tends to be obvious.
Smart glasses, however, are always aimed at whatever the wearer is looking at. They may or may not be recording (note the reports of people hiding the LED indicators), and at a fair distance could easily be mistaken for a normal pair.
The general populace is much more likely to notice the former recording rather than the latter.
I've seen people keep their phone in their shirt pocket. The only reason it tends to be obvious is that most people aren't trying to be covert. Those aren't the ones you should be worried about.
Because person wearing glasses usually can move and video surveillance cameras usually can't?
If that's not it then spell it out for me, please.
Also, why would i be deceptive in this discussion? I feel like I missed some ideological conflict.
Imagine someone pulling up a smartphone and then recording everything that happens around them. Contrast that with someone wearing smart glasses and doing that exact same thing.
On a separate note (and this is a genuine question): are you by any chance aware of the term non-consensual intimate imagery / NCII?
I am beginning to suspect that the average HN goer isn’t aware of the scope and scale of the Trust and Safety problem.
They don't care. Or they refuse to realize that tech isn't the solution to it, but an amplifier of its scale.
Can tell you that my urge to take photos/record drastically dips around other people. Particularly if it were meant for any sort of commercial exploitation. Stephenson called people wired for max indiscriminate data collection/processing "gargoyles". Personally I prefer glassholes.
At everything on the opposite side of the screen, typically. There is a recording light for Meta glasses, but not one for iPhones, for example: the "recording" indicators are all user-side there.
When I'm on public transport, people generally face their phones in such a way that they'd only be filming your feet or the floor... They don't hold them up at head height in such a way that other people would be recorded. Maybe it's just a cultural thing
A Kenyan workers' organisation alleges Meta's decision was caused by the staff speaking out.
Meta says it's because Sama did not meet its standards, a criticism Sama rejects ...
Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.
Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.
I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.
EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable. But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy. Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this and who should decide? Personally I don't know.
What makes you think the outsourcing firm didn't raise these concerns in email or meetings? You think these people wanted to lose jobs and income? That's irrational.
Why reflexively defend a massive tech corporation caught repeatedly violating the law?
More like a bright future as someone's fall guy. The ignorance of thinking that a large tech giant like Facebook would give a crap about any of those concerns makes this person too politically inept to make it anywhere.
There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.
What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.
Huh. There you go again, thinking everyone else is an idiot. Capture of video data of users by Meta is never acceptable. It would not be acceptable for any phone, and it is not acceptable for any glass, ever.
Saving the data for any purpose other than allowing users to access it is bad enough; allowing Meta employees or contractors to view personal videos is on a whole new level.
I don't know why people buy smart glasses. Maybe they buy them for video capture. If so, the videos go to Meta's servers and Meta might do things with them. They might be criticized for not reviewing them in certain cases. That's one reason why I wouldn't buy Meta smart glasses.
The main issue here is Facebook employees viewing users' private video streams (including of user nudity) without the users' knowledge.
The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? No, we have no evidence there are any extenuating circumstances here.
> > Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.
> Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.
It's total fantasy. I've worked in big tech. Casually uploading and providing company/contractor access to non-redacted intimate photos or pictures of the insides of people's homes vaguely "for the purpose of improving the customer experience" would not pass even a surface-level privacy or data-protection review anywhere I've ever worked. Do Meta even read what they are saying?
Well, you gotta give out blackmail material to the scam centers somehow. Otherwise they don't actually have leverage! Oh right... We don't want that happening.
I’ve worked in trust and safety - for me this is stupid, but well below the threshold of impossible.
Hell, I know of a major firm that decided QA was not needed for their trust and safety process.
Another common issue will be SEA Arabic speakers tasked with labelling Middle Eastern Arabic content, because accents and cultural dialects are not a thing.
I’ve had people at FAANG firms cry on my shoulder, because they couldn’t get access to engineering resources at their own firms.
There was the famous case of Meta executives overriding T&S policy and telling them what content was newsworthy during the Boston bombing. In a separate incident, they told their team that cartel violence was not newsworthy when friends in London complained about it.
When you say this is fantasy, what do you mean precisely?
What I mean is: I'm not sure what they base their statement that it's "a common practice among other companies" on. Unlikely they are talking about their peer companies. I suppose if you read the sentence literally, there surely exist one or more "other companies" in the broad universe of "other companies" that routinely do this kind of stuff. But I wouldn't think anywhere serious.
I once read the manual of one of those small floor-cleaning robots (Ecovacs Deebot U2 Pro), and it basically said that by using it you were giving them the right to take pictures and send them to a remote server (to analyze issues or something like that).
What you should have read correctly was the Facebook terms of service. I still get strange responses when I tell people that I don't use WhatsApp. All Meta's properties are tainted such that I won't use them.
Are you conflating telemetry with literally live-streaming your life to Meta? Because that's what makes the statement weird.
edit 2: OK, I see what you mean. But I'm wondering if it should be possible to consent to this via T&C. Basically the same issue as with many online services, turned up to 11, sure. And it involves OTHER people, who have not consented.
Stuff like this used to be outrage fuel even when it was more of a social experiment, e.g. the documentary "We live in public" or the "Big brother" TV show. By now, I'm sure there have been millions of influencers doing similar things, but it's very much not considered normal?
Streaming to an unknown number of employees might be considered different from streaming to the public, sure.
But the core question here is whether there's informed consent, and, IMO also, if it should be possible to consent to this when the other party is a company like Meta and the pretext is not deliberately seeking attention (like influencers and streamers do).
So the people wearing these glasses have already agreed that Meta can monitor them. They also probably trust Meta when it says "when the glasses are off, nothing is recording", for better or worse. With that perspective in mind, it's not far-fetched to assume these same people will willingly be naked in front of recording devices they believe to be off.
Of course, anyone who has opened a newspaper in the last 10 years or so would know better, but I can definitely see some people not giving a fuck about it.
There are "content creators" who intentionally record people without any sort of consent. At least when they point cameras, one can notice the cameras and take action. With these sorts of glasses, no one in view has consented, nor have they agreed to any sort of terms & conditions.
I never understood the appeal of upskirt pictures. But I think that taking videos of non-consenting participants/victims is the current version of the upskirt photo craze.
One of the bigger commercial niches for smart glasses is filming POV porn, so it is hardly surprising that sort of content ended up in the moderation queue. The project should have planned to account for that use case.
And I do appreciate how awkward it is for Meta to admit that use case exists. Even in the Oculus Go days there were a bunch of polite euphemisms internally to avoid mentioning "our device has to ship with a browser so people can watch porn on it"
This is my question too. I get moderating things that people are posting. Being not familiar with the device and how it works, I'd assume that all footage is posted to the user's cloud account even if not publicly posted. This being cloud storage, Meta is "moderating" the footage to ensure CSAM or other restricted footage type is not being stored on their (Meta's) platform. That's my very generous take on it, not that I believe it
The article itself is ambiguous on this point: "At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI."
That could be moderation, or it could be labelling new examples for training/validation
I believe the tricky privacy and security issues around smart glasses (and other "personal" tech) can be navigated successfully enough by a thoughtful, diligent, responsive company.
Which is why I'd never touch a personal tech device from Meta.
Their entire DNA is written to exploit their users for profit. In my judgement, they literally cannot and will never consider those issues as anything other than something to obscure to keep people unaware of the depth of the exploitation.
The thing that really gets me is that internally there are 4 levels of data: 1 being public domain shit (the sky is blue), up to 4, which is private user data or anything that would be sensitive if leaked or shared.
I was told that by default all user data is level 4, as in: if you do anything with it without proper approval, you're insta-fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.
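For anyone who hasn't seen a scheme like this, here's a minimal sketch of what a default-deny, audit-logged access check might look like. Every name here, from the level enum to approved_grants, is hypothetical; this illustrates the pattern described above, not Meta's actual tooling:

    from enum import IntEnum
    import logging

    class DataLevel(IntEnum):
        PUBLIC = 1         # "the sky is blue"
        INTERNAL = 2
        SENSITIVE = 3
        USER_PRIVATE = 4   # the default for all user data

    audit = logging.getLogger("access_audit")

    def read_resource(employee, resource):
        # Every access attempt is logged, allowed or not.
        audit.info("%s requested %s (level %s)",
                   employee.id, resource.id, resource.level)
        # Default-deny: level-4 data requires an explicit grant tied
        # to a concrete business need; anything else is flagged.
        if (resource.level == DataLevel.USER_PRIVATE
                and resource.id not in employee.approved_grants):
            audit.warning("DENIED: %s -> %s", employee.id, resource.id)
            raise PermissionError("no approved business need")
        return resource.data

The detail that matters is that the log line fires before the permission check, so even a failed attempt leaves a trail for the security team.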
The part where I worked, visual research, had to jump through a year's worth of legal hoops to get permission to record videos in public. We had to build an anonymisation pipeline and a bulletproof audit trail, and delete as much data as possible, with auto-delete if something went wrong.
We had rigid rules about where that data could be stored and _who_ could access it. We were not allowed to share "wild" footage (i.e. data that might have the hint of anyone who hadn't signed a contract) for annotation, because it would be given to a third party. The public datasets we released all had traceable people and locations, all with legal waivers signed.
Then I hear they just started fucking hosing private data to annotators to _train_ on? Without any fucking basic controls at all? Just shows that whenever Zuck or monetization want something, the rules don't apply.
I look forward to that entire industry collapsing in on itself.
> I was told that by default all user data is level 4, as in: if you do anything with it without proper approval, you're insta-fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.
Given the size and nature of Meta's business, I would assume they would have better systems in place. SWEs should only have access to PII with explicit consent from users/customers e.g. support tickets.
Especially someone going through boot camp. Do they have access to de-anonymized user data during training?
Shit, at my last company I had to jump through so many hoops to access user data even with consent from the customer.
I have always wondered about this, especially post-Cambridge Analytica, when Meta imposed really stringent requirements for API use even for personal things, while it was blatantly obvious that internally it was a different story.
It seems the issue is not the glasses users, but the people that the glasses users were having sex with. Did Meta get their consent before redistributing this content?
At this scale, this sounds like some insider-joke contract, made up only to run some hustle on the side: capitalizing, with stock options, on the possibility of ad-hoc news-trading bots glitching out on the keyword (here, "x.com/sama" signals).
If you want to read more about how unsavory aspects of AI-training are off-loaded onto poor workers in third-world countries, would recommend Karen Hao's "Empire of AI". These workers are paid pennies an hour for unstable jobs that expose them to some horrific material.
About the "they asked us to view it and then fired us for it" angle: having worked in their RL division (I don't work at Meta anymore), this story is quite weird for two reasons:
1. Meta, AFAIR, paid/compensated people (contractors, or people recruited via ads) to have them submit their data. There are strict privacy protocols and reviews in place to distinguish data use in these cases vs. the general public. This is not to say the process is perfect, but if these users are the general public, I would be very shocked.
2. Hiring contractors to submit data is a more controlled environment vs. recruiting the general public via ads to submit data, and the former has better-understood privacy disclosures than the latter. In practice, asking contractors to wear glasses and "move around their surroundings naturally and do things" squares with the privacy practice of "the data you are submitting, we can view and use all of it for purpose X and nothing but X". But that framing is much, much harder with ad-recruited people, i.e. general users who willingly submit data. My suspicion is they are running ad-based recruiting of the general public, and while those users may have signed a privacy statement, it is very surprising that they did not tighten the privacy practices around the use of the data and who has access.
Yeah, I think it's more of a British English thing. It can also mean things like "in a fight". Like: "those two guys had a big row outside the pub the other night"
I think Meta, like all companies, doesn’t want its subcontractors creating bad press for them.
So it doesn't surprise me that Meta didn't renew/cancelled a contract that is a net negative for them. Arguing over the reason seems fruitless, as no reason is needed per the terms of the contract (I assume, since breach of contract wasn't brought up by the sub).
Why do they even need workers to classify naked content? They could filter some content prior to passing it to workers. They already have models to moderate explicit content.
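As a rough illustration of what that pre-filtering could look like (the model, threshold, and queues below are all assumptions made for the sketch; nothing is known here about Meta's actual pipeline):

    # Route only ambiguous clips to human labellers; auto-handle
    # anything an explicit-content model flags with high confidence.
    NSFW_THRESHOLD = 0.95  # illustrative, not a known production value

    def route_for_review(clip, nsfw_model, auto_queue, human_queue):
        score = nsfw_model.predict(clip)  # P(clip is explicit)
        if score >= NSFW_THRESHOLD:
            # Confidently explicit: exclude from the labelling pool
            # without any human ever viewing it.
            auto_queue.put((clip.id, "auto-excluded", score))
        else:
            # Uncertain cases still fall through to people, which is
            # why a filter shrinks the job but doesn't eliminate it.
            human_queue.put((clip.id, score))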
The most important real use case of devices like this is as accessibility tech. Blind people everywhere are talking about devices like this.
It's the same with phones. I know blind people who have been harassed for holding their phones up to things as though they are taking pictures, but in fact they're using the camera on their phone to render signage legible to them, or having their phone (or a person on the other end) read it.
Banning this in a way that doesn't in practice cause problems for visually impaired people would be difficult. It might also be difficult to do in a way that doesn't harm, for instance, accountability for cops who are acting in public.
The impulse to "ban" is sometimes a bit naive imo.
Why? What's the difference between that and one of the many, many concealed camera options that you don't even notice? Just that it's noticeable? I don't think that's a good enough reason for yet-more-regulation. You're already being recorded everywhere you go in public by the authorities, and often by people standing right next to you unnoticed, so just act accordingly.
Because they will be popular and lots of people will buy them and use them all the time, leading to much more generalized surveillance than the concealed options that only a tiny tiny fraction of people would buy or use (and that we should also regulate)
> What's the difference between that and one of the many, many concealed camera options that you don't even notice?
The latter is literally illegal, at least in my country and I hope in any civilized country. If your point is that there's no difference between glasses and other forms of creep cams and the glasses should be illegal too, I concur!
The owner of the private space generally has authority to deny this already, there's no need for an additional law.
In the US at least, any private homeowner/renter can deny entry to their property, barring legal warrants and exceptional circumstances. A business can have a policy, and is generally legally protected as long as the policy is 1) equally applied, and 2) does not violate ADA... A court would have to weigh in if glasses are allowed or not for ADA... but I suspect there's already a case where a movie theater banned such glasses and they would probably(?) win, since such individuals could be expected to have non-recording glasses.
Facebook may have to rename itself into NaughtyBook or SpyBook or Pr0nBook. They really want people to help them spy on other people here - including their sex life. Expect new sexy videos in 3 ... 2 ...
> and was a common practice among other companies.
Meta isn't lying; you should assume other companies are doing it too. Tesla did it with their cameras, so assume any company has access to your camera; I would even assume CCTV cameras too. It's why, for anything sensitive, you should try to use open-source stacks. You might lose some of the features, but it's a needed compromise.
So I've never had a smart speaker in my house (Alexa, Apple, Google). I've just never been comfortable with the idea of having an always-on, cloud-connected microphone in my house. Not because I thought these companies would deliberately start listening and recording in my house, but because they will likely be careless with that data, and it'll open the door for law enforcement to request it. Consider the Google Wi-Fi scraping case from StreetView.
Or they might start scanning for "problematic" behavior, a bit like the Apple CSAM fingerprinting initiative.
So not one part of me would ever buy Meta glasses (or the Snap glasses before that). You simply don't have sufficient control over the recordings and big tech companies can't be trusted, as we've witnessed from outsourced workers sharing explicit images. And I bet that's just the tip of the iceberg.
I honestly don't understand why anyone would get these and trust Meta to manage the risks.
That is to say nothing of the new technological use cases that could develop from the already-existing technology. They just haven't been thought of or developed yet.
Things like audio scanning your living space using those Alexa smart speakers with ultrasonics to get an image of not only everything in your space, but where you are in that space as well.
That technological use case only came out within the last five or so years, maybe closer to eight. Either way, I could see it coming before it became a thing, just because ultrasound imaging of your unborn child is a thing and ultrasound imaging of the sea floor is a thing, so why wouldn't ultrasound imaging of your living space be a thing for a company that wants to know what you buy?
I never ever had Alexa; I only ever had a Google Home, because I got it for free with GPM, but I almost never used it because I hated the idea of it always listening.
I already regret Wi-Fi, because they've now figured out how to look through walls with it.
Meta said the contracting firm "did not meet (Meta's) standards". I am sure that is true. Meta's "standard" is not to reveal the illegal, immoral, unethical things Meta does. No matter what the harm.
Maybe a company with those standards should not get our business. Oops, no wait, maybe they mean the Friedman Doctrine standards? In that case they are entitled to do any and every thing to make a profit. No matter what the harm.
[edit: add last two sentences]
I used to work for Meta. I quit largely because of intense frustrations with the company. Meta has made a lot of mistakes, overlooked a lot of harms, and made a lot of short-sighted, selfish choices. Many things about the world are worse than they could be because of choices Meta has made.
So when I say that they really do have a zero-tolerance policy for anyone using their internal systems to violate user privacy, it's not because I'm eager to defend them. It's just true (at least, it was when I was there). There are internal systems dedicated to making sure you have access to what you need to do your job, and absolutely nothing else. All content you interact with through internal tools is monitored and logged. If you get caught trying to use whatever access your job gives you for anything other than doing your job, security immediately escorts you out of the building. This is drilled into new hires early and often. For everything Meta gets wrong, they really do take this seriously.
These contractors were hired to view this data. Your defense of Meta here doesn't make sense. Meta fired them for speaking out about the data Meta collects, not because they saw the data they were hired to look at.
The problem is that your comment and the one you're responding to can both be true: Just because the rules are heavily enforced does not mean the right rules are in place, starting with the fact that Meta is collecting this data to begin with.
There's no allegation that these workers abused their access. The allegation is that their routine work reviewing footage included private content. The revelation is that USERS are using meta glasses non-consensually.
Many things about the world are worse than they could be because of choices Meta has made.
If Facebook were designed with a different set of incentives that prioritized the user, fostered positive engagement, and better respected individuals' privacy and data sovereignty - setting a better standard for the whole industry - I feel there wouldn't be all this fuss today about banning social media accounts.
It's likely they wouldn't be as profitable, though, and their mandate to shareholders is to make the number go up.
You're still on the Kool-Aid, as many replies here accurately point out. Saying it's not because you're eager to defend them is lying to yourself, because you're smart enough to think of most of these replies yourself. Primarily the fact that these are contractors whose entire job is to watch smart-glasses footage: the point you're bringing up, even if we take it at face value, is completely irrelevant to this post.
If you truly want to atone for your sins, you have a long way to go.
Anecdotal of course, but I heard that this wasn't at all the case circa 2006 and that (then) FB employees would routinely read private messages and such. Obviously it wasn't a big company yet and probably didn't have those policies yet... (clearly the policies are there for a reason...)
As someone who worked for a contractor which had Meta as a client, I disagree.
All advertiser support agents were given super-read on all profiles & pages, and I never once observed a CSR being questioned on their use of this access in any way.
@jaidhyani I hate to burst your bubble, but there are major privacy violations here.
https://news.ycombinator.com/item?id=47226756
@jaidhyani I hate to burst your bubble, but there are major privacy violations here.
https://news.ycombinator.com/item?id=47225130
Yea but no. Meta is a defense contractor that hires out to 3rd parties exactly to do this. So you guys don't get to do that, but a lot of other people are doing it. I hope that helped you sleep at night while you were there. But yeah, it all gets bought and sold at the end of the day.
The irony is Meta wants to implement verification to protect kids. Meanwhile, it's doing everything it can to exploit them at every single level, for profit and for the love of the game. Billions of dollars and the world's most advanced computers, all dedicated to it.
Meta and their employees have spent years breaking the public’s trust over and over again. Why should we trust anything they say?
> At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI.
They just got fired for "piercing the veil". They committed the sin of bringing attention to the invasion of privacy.
Were/are video recordings from the glasses advertised as being E2E encrypted?
Mostly, I'm just surprised that anybody would be naive enough to take a camera provided by Facebook into a sexual encounter and expect anything else.
If you don’t disable the glasses they could continue to share content. The article describes the glasses being left on a dresser and then sharing content of people without their consent, which could easily parallel into showing a sexual encounter or other privacy-sensitive scenarios.
Yeah, why the hell is Meta watching people's videos either? Why PAY a company to invade our privacy and watch our videos? It's flipping BIZARRE.
Isn’t that obvious from the article? They’re labeling content for training AIs, something which is happening all over the world constantly.
Yep gotta bake in that personal data into generative models so it can be reproduced later for profit.
Why generative? Or has it been decided that only generative models are “AI”?
What kind of model "reproduce"s things later for profit that is not generative?
Surveillance models.
Unfortunately in today’s world where organizations are larger than many a country’s GDP, they really only have to face responsibility towards shareholders and maximizing profits is the thing they usually care about.
That's not what the Friedman Doctrine is, technically. It is that management should obey moral, ethical, and legal frameworks in the operation of the business for the benefit of its investors; and specifically NOT take actions which are outside of that narrow scope.
Does that include trying to influence moral, ethical, and legal frameworks to the benefit of the investors as well? Because if it does, it is kind of a moot point.
Is it illegal or immoral? Having Meta review this material has to be approved by users and has their consent.
There was an example in the article where a user’s glasses kept recording the user’s wife after he took them off. That’s bad but on the user, not Facebook.
Seems similar to a situation where someone takes nudes of someone without their consent and then sends them off to a lab to be printed. The lab isn’t doing anything illegal or unethical printing them when they ask the user “are these legal” and the user replies “yes.” Unless you want to stop photo printers from ever printing nudes, I think the responsibility is on the user, not the firm.
Is there explicit approval? Or is it buried in the legal agreements?
Legal agreements are explicit.
Lol
Meta cancels the contract with the outsourcing company they contracted to classify smart glasses content after employees at the company whistleblow about serious privacy issues with the content they were paid to classify.
"Fun" bonus fact: This isn't the first time Sama (the outsourcing company) has had these problems.
OpenAI had them classify CSAM, so Sama fired them as a client back in 2022. https://time.com/6247678/openai-chatgpt-kenya-workers/
We're 4 years on, 3 years since that report broke. Not a single thing has improved about how tech companies operate.
How else do you want companies to remove and prevent CSAM? It seems like you must have some human involvement to train and monitor.
It’s a terrible job, I wouldn’t want to do it, but someone needs to. Perhaps one day, AI will be accurate enough to not need it, but even then you need someone to process complaints and waivers (like someone’s home photos being inaccurately flagged).
> How else do you want companies to remove and prevent CSAM?
Different situation.
Facebook has to do CSAM moderation because it's a publishing platform. People will post CSAM on facebook, so they must do moderation.
And "just don't have Facebook" isn't a solution, because every publication of any sort has to deal with this problem; any newspaper accepting mail has this problem (albeit a much more scaled-down version). People were nailing obscene things to bulletin boards for all of recorded history.
---
In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.
The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.
---
This is not to say that genuine content flagging firms have no reason to curate such data & build tools to automatically flag content before human moderators have to. (But then they also shouldn't be outsourcing this and traumatizing contract workers for $2-3 an hour)
But OpenAI is not such a firm. It's a general AI company.
OpenAI runs ChatGPT where users submit text and photos and OpenAI generates and sends text and photos back. So users could be submitting CSAM. And yes, OpenAI could be generating CSAM. It's not limited to being a pull operation. What am I missing?
> traumatizing contract workers for $2-3 an hour)
Is there an hourly rate at which this should be acceptable?
Triage doctors: what do they make? Take the people who have to review the worst humanity has to offer and pay them that. And while we're at it, ambulance personnel should get a huge pay bump. Take it from nurses' pay.
CSAM exists on social media because the platforms are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect laws, it needs to be shut down.
The correct way to organize social media is in a federated way. Each server only holds on average a few hundred or few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media will be 100x suppressed because banning people is way easier on small servers.
Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
> Server moderators should be legally responsible for content on their server.
And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.
Child abusers are twisted people, and I really don’t care much what happens to them, but making it impossible for them to use the internet means sterilizing the whole thing.
You are effectively saying that physical life doesn't function. People get banned or removed from all sorts of informal and formal groups all the time for completely illegitimate reasons. That's just human politics, embedded so deeply in our psychology that it will never go away. They simply move to different groups - and similarly, online, they can move to a different federated server.
But that's not possible in today's social media oligopoly. An invisible algorithm will ban you, there is no way back, and there are few alternatives. Big Social Media is way worse from a sanitizing perspective than some federated social media.
Also, if you've gone from zero to one of the biggest corporations in the country, and have billions to throw at the 'metaverse', I find it hard to believe that removing CSAM is where you struggle.
Isn't this more about disincentivizing the posting of it in the first place by increasing the chances of getting banned? Once you have to remove it, it's too late.
Yep. If you cannot both safely and legally provide the thing you are selling, you are no longer a legitimate company; you are a criminal enterprise profiting off of exploitation.
If car manufacturers cannot bring car related deaths to zero, they too should no longer be legitimate companies.
A better comparison would be that if a car company can’t meet preexisting crash/safety standards, they need to shut down.
These are pretty clear laws established by a democratic government with a pretty good record for rule of law.
Sure, then they can go demand such standards for social media platforms, including an expected amount per N posts, just as car companies are not expected to have fatality rates of zero.
The fact is that simple scale means that there will always be something, no matter how abhorrent. Small scale doesn't change this, it just concentrates it.
Do car companies sell cars without airbags or seat belts? What about cars that haven't been crash-tested? What do you think happens to them if they don't do this?
Would you drive a car optimized for profit that didn't have those safety features? How about on a highway? Daily?
We're talking about CSAM right? Which all platforms remove proactively, build models to remove and essentially always respond to when informed.
Demanding some perfect immediate magic response there is the equivalent of asking car manufacturers to prevent all deaths.
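For context, "remove proactively" mostly means hash-list matching on upload. Here's a heavily simplified sketch; real deployments use perceptual hashes like PhotoDNA or PDQ sourced from clearinghouses such as NCMEC, so re-encoded or resized copies still match, and the exact-hash version below exists only to keep the sketch self-contained:

    import hashlib

    # Hypothetical list of digests of known abuse imagery.
    KNOWN_HASHES = {"<hex digest of a known item>"}  # placeholder

    def is_known_csam(image_bytes: bytes) -> bool:
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in KNOWN_HASHES

    def on_upload(image_bytes: bytes) -> str:
        if is_known_csam(image_bytes):
            # Block, preserve evidence, and file the legally
            # required report to the relevant authority.
            return "blocked_and_reported"
        return "accepted"

Novel material that appears in no hash list is what the models and human reviewers are for.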
Do they remove it and respond really though?
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
Here it's said that it's the user's fault. I disagree, completely. Staying on topic: many of these companies have laid off the employees who tried to prevent things like this:
https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html
https://www.zdnet.com/article/us-ai-safety-institute-will-be...
https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok...
The list of not even trying anymore goes on and on. Mechahitler was also fun
When Ford DNGAF with the Pinto, and GM with the Corvair (like tech companies do not GAF today), they deservedly got this same level of contempt/demand for oversight. A dude named Ralph Nader went on a huge crusade about it. And they got a ton more oversight, safety requirements, etc. put on them.
So yes, yes, let's do like we did with cars.
I voted for Ralph Nader a few times, until he stopped appearing on ballots for whatever reason. For this reason, and many others. I don't remember any negative press about him, either. Maybe he got out when mudslinging became de facto in elections.
I am not sold on the federated thing to solve CSAM or similar issues.
Actually, companies should be bullied about privacy and copyright so they are unable to share any content at scale with third parties. Then they have to solve it on their own and are forced to realize their business model is shit.
>CSAM on social media will be 100x suppressed because banning people is way easier on small servers.
No it isn't. Small servers often don't have paid security or moderation, are run anonymously, and have no profit motive that can even be used to incentivize them against hosting illegal content.
That's visible when it comes to porn. There are a million bootleg porn sites hosted on the internet that show off illegal content. The only site that was ever forced to curate its content was Pornhub, because they're sufficiently large and work in a jurisdiction that has laws and can hold them accountable. From a content-moderation standpoint, going after a million web forums is an absolute pain in the ass compared to going after Facebook.
> Server moderators should be legally responsible for content on their server.
So if you want to send someone to jail, just talk your way into joining their server, upload some illegal content, and report them for it?
> Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
Why would someone join a server with active moderation if they wanted to share CSAM with their social media friends?
They would seek out one of those servers that was set up specifically for those groups, where it was known to be a safe space.
This is what many people don't get about federated networks: The people in those little servers DGAF if you block them. They want to be surrounded by their likeminded friends away from the rules of some bigger service like Facebook or Twitter. Federated social media is the perfect platform for them because they can find someone who set up a server in some other country with their own idea of rules and join that, not be subject to the regulations of mainstream social media.
Right, and you have other users on the fediverse that notice that server leaking and, if the content is bad enough, report the service to an authority. Having all of the pedophiles and other creeps on a tiny subset of servers, isolated islands of them; well, that ought to make enforcement easier.
It also makes it relatively easy to avoid, as server admins share blocklists. I know a dozen servers offhand that I'd block if I ran another fediverse server.
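The mechanics are about as simple as they sound, too. A toy sketch (the domains and the merge logic are invented for illustration; real fediverse software has its own formats for shared deny-lists):

    # Merge several admins' shared deny-lists and check inbound
    # federation requests against the combined set.
    peer_blocklists = [
        {"badserver.example", "creeps.example"},   # from admin A
        {"creeps.example", "spamfarm.example"},    # from admin B
    ]
    blocked = set().union(*peer_blocklists)

    def accept_federation(domain: str) -> bool:
        # Refuse to exchange posts with any domain on the merged list.
        return domain not in blocked

    print(accept_federation("creeps.example"))   # False
    print(accept_federation("mastodon.social"))  # True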
Fosstodon fediverse server doesn't have this issue, for example.
I replied this way because the way you wrote it, it sounds like an indictment of a system that's designed to avoid advertisers getting user profiles, over all else.
The problem is the people who participate in this, and not "the network."
The one thing I can add to this conversation is that I think the government simply does not care, either. It's mainly only when there's mass public outrage, or when someone is a political target, that it gets dealt with at the law enforcement level.
Anecdotally, when I was a young adult I was a volunteer moderator for a large forum. We got reports of CSAM several times a month and had a process for escalating and reporting it to the FBI IC3 - we retained a lot of information about the users that posted it.
One of the administrators of the website mentioned to me that over the years since the inception of the forum, they'd reported almost a thousand incidents of CSAM distribution - and the FBI followed up with them to get information less than 10 times in total.
That seems reasonable though. The FBI isn’t interested in busting one perv in a closet, they want the ones making the stuff.
> Banning people is way easier on small servers
Big “citation needed” here. My bet is that Meta have far better moderation systems than any other social media company on the planet.
These workers prepare data for AI. I don't think the need for them will go away anytime soon.
Westerners are too expensive and unwilling to do it. AI is a business model that requires poverty and extreme inequality to function. Yes, other businesses do that too, but they don't claim to be a solution to everything while actually having very particular human requirements.
Couldn't you just use multiple classifiers? Like an "is a minor" classifier coupled with an "is sexual content" classifier?
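In principle, yes; here's a minimal sketch of that composition, with both classifiers as hypothetical stand-ins rather than any real model or API:

    from dataclasses import dataclass

    @dataclass
    class Scores:
        minor: float    # P(subject appears to be a minor)
        sexual: float   # P(content is sexual)

    def needs_escalation(s: Scores, thresh: float = 0.8) -> bool:
        # Flag only when BOTH classifiers fire; either signal alone
        # (a child at the beach, adult pornography) is not CSAM.
        return s.minor >= thresh and s.sexual >= thresh

    print(needs_escalation(Scores(minor=0.9, sexual=0.95)))  # True
    print(needs_escalation(Scores(minor=0.9, sexual=0.05)))  # False

The catch is that two imperfect classifiers composed this way still produce false positives and false negatives at platform scale, which is one reason human reviewers don't disappear.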
Isn't it more that tech companies are higher profile and more integral to the political and social landscape than older companies? Yet reviewing the current political zeitgeist, they're in lockstep with what some, if not all, would just call fascism.
They are literal defense and offense contractors. They hang out at the Pentagon. They sell political data to sway elections. They give gifts to leaders for favors. It is technofascism.
Companies of the 20th century certainly weren't more ethical. (Though a few select tech companies seem to be intent on proving the opposite.)
But it's not really a fascism thing. While fascism does love the oppression of women, and the current crop of fascists have a notable connection to the Epstein case, this is a lot more boring.
Sam Altman's not a fascist; he's a wet noodle who sucks up to the Trump administration for money. He's not even good at it. The way his company handled CSAM does reflect poorly on Altman, as do the accusations from his sister, but all other evidence suggests he's just a moron acting recklessly: not identifying the problem ahead of time, and responding poorly.
In the case of Meta, we know who Zuckerberg is. The company got its start as, in crude terms, a sex-pest website: the original "Facemash" site was forcibly taken down by Harvard. This is not some new consequence of the turn to fascism; Zuckerberg has always been like this, and the actions taken against him were clearly not enough to stop the company culture from following his precedent.
Yes and no.
Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.
I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.
I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.
I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.
Unfortunately this backlash is also the perfect cover for authoritarian government action - they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.
Sounds about right. If you know someone who uses these smart glasses, it's important not to tolerate them whatsoever. Don't speak with them or interact with them. I wouldn't even recommend being in their presence.
There's a name for these people: glassholes.
> I wouldn't even recommend being in their presence.
Great! Now do people with smart TVs and people with smart phones
Don’t we already hate the invasive ad tech industry?
Aren’t there already posts and articles on how to ensure that TVs don’t farm information from us?
I want to get the Oakley Meta ones so I can record bike rides easier, should I not be tolerated?
A mostly-solitary sporting event (or one where you know all the other participants and can get their consent to record beforehand) seems like a reasonable use of these sorts of glasses. I wouldn’t personally give consent just as a sort of privacy reflex, but it really depends on your social circle.
No. Fuck off
Also make sure to avoid people with smartphones and places with video surveillance.
Don't let perfect be the enemy of good.
There's also nothing stopping us from stigmatizing the use of smartphones in public. Even a slight discouragement of it would be progress. It doesn't have to be all or nothing.
Is this an honest argument? Surely you can think of how glasses might be ... in a different league than the two items you mention?
Unless you are using these during sex I consider a microphone to be 10x more privacy intruding than a camera.
Security cameras afaik usually don't record audio, but all phones can. And they don't even need to be pointed in any specific direction.
Not meaningfully. Anyone holding a smartphone might be recording you. You’d better avoid them if you don’t want to be recorded.
Most people don't run around holding out their smartphone directly in front of them. It has to be pointed at the subject, and tends to be obvious.
Smart glasses, however, are always aimed at whatever the wearer is looking at. They may or may not be recording (note the reports of people hiding the LED indicators), and at a fair distance could easily be mistaken for a normal pair.
The general populace is much more likely to notice the former recording rather than the latter.
I've seen people keep their phone in their shirt pocket. The only reason it tends to be obvious is that most people aren't trying to be covert. Those aren't the ones you should be worried about.
This makes a valid point. People record strangers all the time, whether openly or trying to be sneaky.
Just because you don’t notice it doesn’t mean it doesn’t happen.
However, this is still a different thing than smart glasses which can further be segmented into who designed the smart glasses.
Someone has to hold a smartphone and point it at you.
Because a person wearing glasses usually can move and video surveillance cameras usually can't? If that's not it, then spell it out for me, please. Also, why would I be deceptive in this discussion? I feel like I missed some ideological conflict.
Imagine someone pulling up a smartphone and then recording everything that happens around them. Contrast that with someone wearing smart glasses and doing that exact same thing.
On a separate note (and this is a genuine question): are you by any chance aware of the term non-consensual intimate imagery / NCII?
I am beginning to suspect that the average HN goer isn’t aware of the scope and scale of the Trust and Safety problem.
Have you heard the term non consensual intimate fantasies? I've heard it's an even bigger problem.
Well, you would fortunately be wrong. Fantasies are commonplace and well studied in society, psychology and even in the law.
The issue is when you go from fantasy to actually enacting it, which is usually when you earn the epithet of “Creep”.
Also, why make a throwaway for this line? I take it you haven’t heard of NCII?
They don't care. Or they refuse to realize that tech isn't the solution to it, but an amplifier of its scale.
Can tell you that my urge to take photos/record drastically dips around other people, particularly if it were meant for any sort of commercial exploitation. Stephenson called people wired for maximum indiscriminate data collection/processing "gargoyles". Personally I prefer "glassholes".
https://www.tabletmag.com/sections/news/articles/the-borg-of...
If somebody was pointing a camera at me all the time? I would definitely avoid them.
People do that on my subway all the time.
It's the camera of their smartphone.
Not sure if it's ON though.
They point the camera of their smartphone directly at you?
At everything on the opposite side of the screen, typically. There is a recording light for Meta glasses, but not one for iPhones, for example: the "recording" indicators are all user-side there.
When I'm on public transport, people generally face their phones in such a way that they'd only be filming your feet or the floor... They don't hold them up at head height in such a way that other people would be recorded. Maybe it's just a cultural thing
Examples:
https://www.sciencephoto.com/media/922925/view/three-people-...
https://www.istockphoto.com/nl/foto/happy-woman-using-smart-...
Usually they are pointed at the ground when they're reading off them.
> the content they were paid to classify
Whistleblower protection is key for any working society. Only dictatorships and oligarchies protect criminals while shaming whistleblowers.
I do not care which country the outsourcing company is in. When criminals go global, whistleblower protection should go global too.
Mark Zuckerberg and disrespect for user privacy.
Name a more iconic duo.
Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.
Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.
I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.
EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable. But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy. Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this and who should decide? Personally I don't know.
What makes you think the outsourcing firm didn't raise these concerns in email or meetings? You think these people wanted to lose jobs and income? That's irrational.
Why reflexively defend a massive tech corporation caught repeatedly violating the law?
> Why reflexively defend a massive tech corporation caught repeatedly violating the law?
Because it is the natural expansion of the quote attributed to John Steinbeck:
> Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires
You would help conceal a crime against the people just because it's good business??
Congratulations, you have a bright future in politics and/or tech CEOing.
More like a bright future being someone's fall guy. The ignorance of thinking a large tech giant like Facebook would give a crap about any of those concerns makes this person too politically inept to make it anywhere.
There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.
Beautifully and succinctly put.
Proactively address the issues? Are you kidding me? This is not an issue that just happened to slip by; it is 100% by design. You're fooling no one.
What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.
Huh. There you go again, thinking everyone else is an idiot. Capture of users' video data by Meta is never acceptable. It would not be acceptable for any phone, and it is not acceptable for any glasses, ever.
Saving the data for any purpose other than allowing users to access it is bad enough; allowing Meta employees or contractors to view personal videos is on a whole new level.
I don't know why people buy smart glasses. Maybe they buy them for video capture. If so, the videos go to Meta's servers and Meta might do things with them. They might be criticized for not reviewing them in certain cases. That's one reason why I wouldn't buy Meta smart glasses.
If only we had the technology to record video without sending it to Meta's servers.
The main issue here is Facebook employees viewing users' private video streams (including of user nudity) without the users' knowledge.
The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? We have no evidence of any.
> "We see everything - from living rooms to naked bodies," one worker reportedly said.
> Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.
Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.
> > Meta said this was for the purpose of improving the customer experience, and was a common practice among other companies.
> Am I reading this correctly?! This is probably the weirdest statement I've read on the internet in twenty years.
It's total fantasy. I've worked in big tech. Casually uploading and providing company/contractor access to non-redacted intimate photos or pictures of the insides of people's homes vaguely "for the purpose of improving the customer experience" would not pass even a surface-level privacy or data-protection review anywhere I've ever worked. Do Meta even read what they are saying?
Well, you gotta give out blackmail material to the scam centers somehow. Otherwise they don't actually have leverage! Oh right... we don't want that happening.
I’ve worked in trust and safety - for me this is stupid, but well below the threshold of impossible.
Hell, I know of a major firm that decided QA was not needed for their trust and safety process.
Another common issue will be Southeast Asian (SEA) Arabic speakers tasked with labelling Middle Eastern Arabic content, because accents and cultural dialects are apparently not a thing.
I’ve had people at FAANG firms cry on my shoulder, because they couldn’t get access to engineering resources at their own firms.
There was the famous case of Meta executives overriding T&S policy and telling the team what content was newsworthy during the Boston bombing. In a separate incident, they told their team that cartel violence was not newsworthy when friends in London complained about it.
When you say this is fantasy, what do you mean precisely?
What I mean is: I'm not sure what they base their statement that it's "a common practice among other companies" on. Unlikely they are talking about their peer companies. I suppose if you read the sentence literally, there surely exist one or more "other companies" in the broad universe of "other companies" that routinely do this kind of stuff. But I wouldn't think anywhere serious.
With lawyers like these, …
I once read the manual of one of those small floor cleaning robots (Ecovacs Deebot U2 pro), and it basically said that by using it you were giving them a right to take pictures and send them to a remote server (to analyze issues or something like that)
> What you should have read correctly was the Facebook terms of service.
I'm reminded of Bo Burnham's wonderful "That Funny Feeling" from 2021's "Inside", where one of the absurd examples he offers in the lyrics is:
Meta is a defense contractor. They see the world a little differently from everyone else.
How is this weird? People have been trading away their privacy for the smallest possible gains in convenience for a long time.
Are you conflating telemetry with literally live-streaming your life to Meta? Because that's what makes the statement weird.
edit 2: OK, I see what you mean. But I'm wondering if it should be possible to consent to this via T&C. Basically the same issue as with many online services, turned up to 11, sure. And it involves OTHER people, who have not consented.
Stuff like this used to be outrage fuel even when it was more of a social experiment, e.g. the documentary "We live in public" or the "Big brother" TV show. By now, I'm sure there have been millions of influencers doing similar things, but it's very much not considered normal?
Streaming to an unknown number of employees might be considered different from streaming to the public, sure.
But the core question here is whether there's informed consent, and, IMO also, if it should be possible to consent to this when the other party is a company like Meta and the pretext is not deliberately seeking attention (like influencers and streamers do).
edit, clarified social media comparison
Not sure which is worse here - that Meta are recording video from customers' smart glasses, or that they are firing people who talk about it.
The latter, as they can't even claim to have done so by accident, or say "it was just a bug".
Everything having to do with Meta, starting with its very name, has been evil from the start.
Can I squeeze in just a teeny tiny bit of… why the hell are you wearing an internet camera on you while naked and/or having sex?
… although I really extend that to why are you wearing an internet connected camera that is obviously going to be monitored by Meta.
So the people wearing these glasses have already agreed that Meta can monitor them. They also probably trust Meta when it says "when the glasses are off, nothing is recording", for better or worse. With that perspective in mind, it's not far-fetched to assume these same people will willingly be naked in front of recording devices they believe to be off.
Of course, anyone who has opened a newspaper in the last 10 years or so would know better, but I can definitely see some people not giving a fuck about it.
There are "content creators" who intentionally record people without any sort of consent. At least when they point cameras, one can notice the cameras and take action. With these sorts of glasses, no one in view has consented, nor have they agreed to any sort of terms & conditions.
I never understood the appeal of upskirt pictures. But I think that taking videos of non-consenting participants/victims is the current version of the upskirt photo craze.
The Ray-Ban stays ON during sex!
It still blows my mind that anyone would volunteer to don these smart glasses, it's almost like some alien mindset to me.
I wonder under what circumstances footage from the glasses are uploaded for classification.
Probably this is people asking the glasses something about what they see and the glasses uploading video for classification to generate an answer.
People think it is "just AI" so are not very concerned about privacy.
It would be refreshing for once to see the top comment on such articles be:
“Yes, we all know it, and we keep those apps installed regardless.”
One of the bigger commercial niches for smart glasses is filming POV porn, so it is hardly surprising that sort of content ended up in the moderation queue. The project should have planned to account for that use case.
And I do appreciate how awkward it is for Meta to admit that use case exists. Even in the Oculus Go days there were a bunch of polite euphemisms internally to avoid mentioning "our device has to ship with a browser so people can watch porn on it"
Why is there even a “ moderation queue”? Isn’t this people’s private recordings?
This is my question too. I get moderating things that people are posting. Not being familiar with the device and how it works, I'd assume that all footage is uploaded to the user's cloud account even if not publicly posted. This being cloud storage, Meta is "moderating" the footage to ensure CSAM or other restricted footage types are not being stored on their (Meta's) platform. That's my very generous take on it, not that I believe it.
Yes but also we don't want people live streaming murder and suicide, so there's detection and moderation in place.
I’m betting this is going to some ML / Data labelling pipeline.
Yeah, moderation may instead be labelling in this case. Its likely the same type of firm handles both sorts of work on behalf of FAANG
Sounds plausible.
We could also toss a vibe-coded mess on top of this and probably get closer to the truth.
The article itself is ambiguous on this point: "At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI."
That could be moderation, or it could be labelling new examples for training/validation
Why would anyone trust Meta with their personal data?! After a while it's just natural selection.
I believe the tricky privacy and security issues around smart glasses (and other "personal" tech) can be navigated successfully enough by a thoughtful, diligent, responsive company.
Which is why I'd never touch a personal tech device from Meta.
Their entire DNA is written to exploit their users for profit. In my judgement, they literally cannot and will never consider those issues as anything other than something to obscure to keep people unaware of the depth of the exploitation.
Ex Meta employee here (yes you are right to boo):
The thing that really gets me is that internally there are 4 levels of data: level 1 is public-domain shit (the sky is blue), up to level 4, which is private user data or anything that would be sensitive if leaked or shared.
I was told that by default all user data is level 4, as in: if you do anything without proper approval, you're insta-fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.
In the part where I worked, visual research, we had to jump through a year's worth of legal hoops to get permission to record videos in public. We had to build an anonymisation pipeline and a bulletproof audit trail, delete as much data as possible, and auto-delete everything if something went wrong.
We had rigid rules about where that data could be stored and _who_ could access it. We were not allowed to share "wild" footage (i.e. data that might contain a hint of anyone who hadn't signed a contract) for annotation, because it would be given to a third party. The public datasets we released all had traceable people and locations, with legal waivers signed.
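To give a flavor of the shape (a from-memory illustration of the general pattern, not our actual tooling; every name here is invented):

    import hashlib, logging, os

    audit = logging.getLogger("audit")
    logging.basicConfig(level=logging.INFO)

    def blur_faces(path: str) -> bool:
        # Stand-in for a real face/plate anonymisation model.
        return True

    def ingest_clip(path: str, retention_days: int = 30) -> None:
        clip_id = hashlib.sha256(path.encode()).hexdigest()[:12]
        audit.info("ingest clip=%s", clip_id)              # audit trail
        if not blur_faces(path):
            audit.info("anonymisation failed, deleting clip=%s", clip_id)
            os.remove(path)    # fail closed: raw footage never persists
            return
        audit.info("clip=%s anonymised, retention=%sd", clip_id, retention_days)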
Then I hear they just started fucking hosing private data to annotators to _train_ on? Without any fucking basic controls at all? Just shows that whenever Zuck or monetization wants something, the rules don't apply.
I look forward to that entire industry collapsing in on itself.
> I was told that by default all user data is level 4, as in: if you do anything without proper approval, you're insta-fired. There are many stories about at least one person a month during boot camp accessing user data and getting escorted out of the building within hours.
Given the size and nature of Meta's business, I would assume they would have better systems in place. SWEs should only have access to PII with explicit consent from users/customers e.g. support tickets.
Especially someone going through boot camp. Do they have access to de-anonymized user data during training?
Shit, at my last company I had to jump through so many hoops to access user data even with consent from the customer.
I have always wondered about this, especially post-Cambridge Analytica, when Meta imposed really stringent requirements on API use even for personal things, while it was blatantly obvious that internally it was a different story.
https://archive.ph/ubWba
It seems the issue is not the glasses users, but the people that the glasses users were having sex with. Did meta get their consent before redistributing this content?
Big Tech and the race to the bottom of the ethical pit. We can still go lowerrrr!
Meta ended its contract with Sama
At this scale, this sounds like some insider-joke contract, made up only to hustle on the side: capitalizing with stock options on the chance of ad-hoc news-trading bots glitching out on the keyword, here the "x.com/sama" signal.
If you want to read more about how unsavory aspects of AI-training are off-loaded onto poor workers in third-world countries, would recommend Karen Hao's "Empire of AI". These workers are paid pennies an hour for unstable jobs that expose them to some horrific material.
Which examples did they cover in the book?
This may be the greatest title I've seen on Hacker News in a decade.
About the "they asked us to view it and then fired us for it". Having worked in their RL division(I don't work at meta anymore) this story is quite weird for two reasons:
1. Meta, AFAIR, paid/compensated people (contractors, or people recruited via ads) to have them submit their data. There are strict privacy protocols and reviews in place to distinguish data use in these cases vs. the general public. This is not to say the process is perfect, but if these users are the general public, I would be very shocked.
2. Hiring contractors to submit data is a more controlled environment vs. recruiting the general public via ads to submit data, and the former comes with better-understood privacy disclosures than the latter. In practice, asking contractors to wear glasses and "move around their surroundings naturally and do things" fits a privacy practice of "the data you are submitting, we can view and use all of it for purpose X and nothing but X". BUT that framing is much, much harder with ad-recruited people, i.e. general users who willingly submit data. My suspicion is they are running ad-based recruiting of the general public, and while those users may have signed a privacy statement, it is very surprising that they did not tighten the privacy practices around the use of the data and who has access.
Absolutely no way I'd buy anything from Meta that has a camera built-in.
What does "in row" mean? For us non-English English speakers.
“a noisy argument or fight”, from the Cambridge dictionary. I believe it’s primarily used in British English.
To add to the other replies, when it's an argument, it's pronounced like "how" not like "no".
A row in this context is like a dispute or argument
It's also pronounced r-ow (ow, as in I hurt myself) in this context, instead of r-oh, in case that helps the OP
in an argument
"row" means "an argument"
Yeah, I think it's more of a British English thing. It can also mean things like "in a fight". Like: "those two guys had a big row outside the pub the other night"
I always remembered it from Phantom Tollbooth "a DREADFUL Rauw"
> Meta's glasses have a light in the corner of the frames that is turned on when the built-in camera is recording.
Because nobody knows how to put a dot of nail polish on an LED they don't want seen, right?
There is some detection for obstructing the LED. It's a little more clever than you assume.
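No idea what they actually do, but one plausible mechanism (pure speculation on my part, nothing here is Meta's implementation) is pulsing the LED and checking that a nearby light sensor sees the modulation; nail polish kills the reflection. A toy simulation of that idea:

    def read_light_sensor(led_on: bool, obstructed: bool) -> float:
        # Simulated sensor: an unobstructed LED leaks a little light
        # back into a nearby ambient light sensor while it is on.
        return 0.2 + (0.3 if led_on and not obstructed else 0.0)

    def led_visible(obstructed: bool) -> bool:
        baseline = read_light_sensor(led_on=False, obstructed=obstructed)
        pulsed = read_light_sensor(led_on=True, obstructed=obstructed)
        return (pulsed - baseline) > 0.1  # no pulse seen -> refuse to record

    print(led_visible(obstructed=False))  # True  -> recording allowed
    print(led_visible(obstructed=True))   # False -> recording blocked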
I think Meta, like all companies, doesn’t want its subcontractors creating bad press for them.
So it doesn't surprise me that Meta didn't renew/cancelled a contract that is a net negative for them. Arguing over the reason seems fruitless, as no reason is needed per the terms of the contract (I assume, since breach of contract wasn't brought up by the sub).
A question for the HN folks who work for Meta - Is the pay so good that it makes it worth working for such a morally bankrupt organization?
There are countless large, high paying, morally bankrupt companies out there. It’s no mystery that people continue to work for them.
Why do they even need workers to classify naked content? They could filter some content prior to passing it to workers. They already have models to moderate explicit content.
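A hedged sketch of what that triage could look like; nsfw_score() here is a made-up placeholder for whatever explicit-content model exists, and the thresholds are invented:

    def nsfw_score(frame_id: str) -> float:
        # Placeholder for a real explicit-content classifier.
        return 0.97

    def route(frame_id: str) -> str:
        s = nsfw_score(frame_id)
        if s >= 0.95:
            return "auto-quarantine"  # confidently explicit: no human sees it
        if s <= 0.05:
            return "auto-pass"        # confidently benign
        return "human-review"         # only the ambiguous middle band

    print(route("frame-001"))  # auto-quarantine

Even then, someone has to label the training data for that model in the first place, which may be exactly the job these contractors had.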
Can we boycott meta yet? I am sick of this company.
Unfortunately this news will have no impact: not on customer behavior, not on policy, and not on Meta's behavior.
Not a fan of regulation in general, but would love to see a ban of cameras on glasses used in public spaces.
The most important real use case of devices like this is as accessibility tech. Blind people everywhere are talking about devices like this.
It's the same with phones. I know blind people who have been harassed for holding their phones up to things as though they are taking pictures, but in fact they're using the camera on their phone to render signage legible to them, or having their phone (or a person on the other end) read it.
Banning this in a way that doesn't in practice cause problems for visually impaired people would be difficult. It might also be difficult to do in a way that doesn't harm, for instance, accountability for cops who are acting in public.
The impulse to "ban" is sometimes a bit naive imo.
Why? What's the difference between that and one of the many, many concealed camera options that you don't even notice? Just that it's noticeable? I don't think that's a good enough reason for yet-more-regulation. You're already being recorded everywhere you go in public by the authorities, and often by people standing right next to you unnoticed, so just act accordingly.
“You're already being recorded everywhere you go in public by the authorities”
You are the frog being boiled.
Because they will be popular and lots of people will buy them and use them all the time, leading to much more generalized surveillance than the concealed options that only a tiny tiny fraction of people would buy or use (and that we should also regulate)
> What's the difference between that and one of the many, many concealed camera options that you don't even notice?
The latter is literally illegal, at least in my country and I hope in any civilized country. If your point is that there's no difference between glasses and other forms of creep cams and the glasses should be illegal too, I concur!
The problem is if it becomes socially normalized. If you're using a concealed camera and someone notices, you're a creep/asshole.
Yet more regulation? We have regulation for these glasses already?
Aren’t there countries that make it mandatory to blot out faces of people on videos if they didn’t consent?
If anything they should be banned in private spaces, like if someone wearing them enters someone's home etc.
There is no expectation of privacy in public.
The owner of the private space generally has authority to deny this already, there's no need for an additional law.
In the US at least, any private homeowner/renter can deny entry to their property, barring legal warrants and exceptional circumstances. A business can have a policy, and is generally legally protected as long as the policy is 1) equally applied, and 2) does not violate ADA... A court would have to weigh in if glasses are allowed or not for ADA... but I suspect there's already a case where a movie theater banned such glasses and they would probably(?) win, since such individuals could be expected to have non-recording glasses.
I got a paywall, first time I've seen that on BBC.
Meta is so evil
Evil is the current meta
People have sex with their glasses on?
I'm guessing at least some of these cases are where the glasses are sitting on a nightstand and still recording
Are their partners even consenting to glasses with cameras??
Facebook may have to rename itself into NaughtyBook or SpyBook or Pr0nBook. They really want people to help them spy on other people here - including their sex life. Expect new sexy videos in 3 ... 2 ...
I bet the victims had their socks on too
I don't think smart glasses themselves are a good idea.
Good. Anyone who works for such a company is immoral in my opinion.
Oops! Oh, too late. And another nail in the heart of smart glasses…
> and was a common practice among other companies.
Meta isn't lying; you should assume other companies are doing it too. Tesla did it with their cameras. Assume any company with access to your camera does the same; I would even assume CCTV cameras do. That's why, for anything sensitive, you should try to use open-source stacks. You might lose some of the features, but it's a needed compromise.
So I've never had a smart speaker in my house (Alexa, Apple, Google). I've just never been comfortable with the idea of having an always-on cloud-connected microphone in my house. Not because I thought these companies would deliberately start listening and recording in my house, but because they will likely be careless with that data and it'll open the door for law enforcement to request it. Consider the Google Wi-Fi scraping case from StreetView.
Or they might start scanning for "problematic" behavior, a bit like the Apple CSAM fingerprinting initiative.
So not one part of me would ever buy Meta glasses (or the Snap glasses before that). You simply don't have sufficient control over the recordings and big tech companies can't be trusted, as we've witnessed from outsourced workers sharing explicit images. And I bet that's just the tip of the iceberg.
I honestly don't understand why anyone would get these and trust Meta to manage the risks.
That is to say nothing of the new technological use cases that could develop from already existing technology. They just haven't been thought of or developed yet.
Things like audio scanning your living space using those Alexa smart speakers with ultrasonics to get an image of not only everything in your space, but where you are in that space as well.
That technological use case only came out within the last five or so years, maybe closer to eight. Either way, I could see it coming before it became a thing: ultrasound imaging of your unborn child is a thing, ultrasound imaging of the sea floor is a thing, so why wouldn't ultrasound imaging of your living space be a thing for a company that wants to know what you buy?
I never, ever had Alexa. I only ever had a Google Home, because I got it for free with GPM, but I almost never used it because I hated the idea of it always listening.
I already regret Wi-Fi, because they've now figured out how to look through walls with it.
You were wise enough to avoid this; unfortunately, for most people it's "shiny tech!".
This is what happens when you buy a camera from the "they trust me, dumb fucks" guy and put it on your face.
But aren't the users wearing glasses while nude or having sex dumb fucks though?