I'll confess that I like my Meta Ray Ban glasses: I love using them to listen to podcasts at the pool/beach, while riding my bike, and it's cool to snap a quick picture of my kids without pulling out my phone.
I wish this article (or Meta) were a bit clearer about the specific connection between the device settings and use and when humans get access to the images.
My settings are:
- [OFF] "Share additional data" - Share data about your Meta devices to help improve Meta products.
- [OFF] "Cloud media" - Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage.
I'm not sure whether my settings would prevent my media from being used as described in the article.
Also, it's not clear which data is being used for training:
- random photos / videos taken
- only use of "Meta AI" (e.g., "Hey Meta, can you translate this sign")
As much as I've liked my Meta Ray-Bans, I'm going to need clarity here before I continue using them.
TBH, if it were only use of Meta AI, I'd "get it" but probably turn that feature off (I barely use it as-is).
I don't understand how a parent can be OK with non-consensually uploading pictures of their children's real faces to an ad-driven AI company famous for abusing people's data and manipulating children on its platforms.
It is because they don't understand the scope of the problem. People are inclined to think that other people who have treated them kindly mean well also in the long term.
Probably the majority of the planet share family photos on facebook, messenger, whatsapp or instagram - all meta properties. On the whole nothing much bad happens.
- the vast majority of people are not creeps and are not discreetly filming random people
- the vast majority of people are not interesting, and nobody is filming them
- today, in a public space, everybody already has lots of cameras pointed at them (e.g. anyone with a phone), without a way to know if they're being filmed. So this is not a new 'problem'.
- banning smart glasses doesn't make sense if you're not also banning all devices that can film discreetly (so, smartphones)
- 'creeps' use hidden miniature cameras, not glasses with an obvious camera right there on their very face
These points are incorrect; only the first one holds, and the rest is just fluff.
Try taking a photo of somebody with your phone. It will definitely look like you are snapping a picture; nobody walks around with their phone held straight up. The result is that when you take pics with a phone, it's usually obvious, and when you insult people by not asking, they see it and react negatively.
When you point at people with smart glasses, nobody knows whether you're recording, and that seems to be the point. Or do they beep and blink an LED to make everybody aware? I don't think so.
Also, we live in a society where a smart doorbell from a major manufacturer uploaded pics to the cloud even though that shouldn't have been technically possible without a subscription. Security is a moot point; quadruple that for Facebook/Meta, who are consistent assholes about breaking security and privacy to scoop up any possible data point for further advertising. The slaps on the wrist they receive are just a cost of doing business.
> Try taking a photo of somebody with your phone. Usage will definitely look like you are snapping a picture, nobody walks around with phones straight up.
I urge you to visit any big city and see for yourself how wrong you are. I see it every single day just during my barely 20-25 minute subway commute to work.
And that's the most unremarkable, most uninteresting place and scenario here. Any big park, any even remotely touristy location, any public square, any concert or sports venue, and even an overwhelmingly large proportion of restaurants are like that.
I mean, it is about as subtle as a middle-school student or someone wearing a suit and tie on the subway. I would notice them, but absolutely nobody would mind or care about it.
People holding their phone up get pretty much the exact same treatment: something you notice, but pay no mind to, as entirely unremarkable.
> It's as creepy as Google Glass, yet we don't see the same pushback.
Didn't it come out that the pushback against Google Glass was partly manufactured by PR firms working for competitors? I remember reading something along those lines.
>>But his latest defense puts forward an absurd definition of sexual harassment and effectively accuses women of reporting it to fit in with the cool crowd, while claiming he’s writing in “a spirit of healing.” There’s even a tasteless plug for his latest business venture. It’s one of the most disappointing responses we’ve seen to a sexual harassment complaint, which, after the past few weeks, is a fairly remarkable achievement.
He's scrubbed it from his blog and even Internet Archive, but it was well covered and widely quoted all over:
I think you're on to something! Maybe Meta paid Scoble to embarrass Google Glass, and now Google is paying him to embarrass Meta AI Smart Glasses too! Great work if you can get somebody to finance your serial sexual harassment scandals.
There are bubbles, and you are obviously in one if you don't know any privacy-conscious people under 25. I know 15-year-olds who are extreme privacy freaks; then again, I care about it myself, so it might be easier for me to find those people. I do find that the people I'd think least likely are the ones who are the most extreme.
You make a good point. I know a couple in their late 20s with kids who are pretty apathetic about their own privacy but who refuse to let Google or iCloud sync photos of their kids.
I'm pretty sure they care who takes pictures or videos of them.
Try going on a train and taking pictures of a young woman or man.
The only difference is these are less noticeable.
On the other hand, EVERY young person in my circle (my kids and their friends) is insanely privacy aware. All of that means ... we're not part of the young people anymore?
I’ve banned them from our office, for the same reason that I’d tell someone deliberately aiming their phone camera at the screen all day to knock it off. In an office setting, you have to treat these as industrial espionage tools, either by choice of the wearer or of a remote person controlling them.
Google Glass failed because it made the user look like they were wearing a high-tech computer on their face à la Dragon Ball Z. It looked odd. Meta and Snap learned from this, but it had nothing to do with smartphone cameras not being part of daily life.
The first iPhone came out in 2007; Google Glass came out in 2013.
It’s not a controversial viewpoint that a child can’t consent to their information being uploaded permanently to the internet, even by a parent. This is because, as an adult, I can’t retroactively remove my presence from the internet. Seems silly in trivial cases (school website), but is quite severe in others (bathtub photos).
It’s also not controversial to paint the harmful, profit-seeking actions of companies upon minors as “abusive” (e.g. tobacco firms).
If anything, your knee-jerk response at their rhetoric raises eyebrows: why would you go to bat for a company who by nearly all public measures is fundamentally evil in aim and structure?
If there's something wrong with how we've organized our society, then we need to fix it on a societal level.
Evoking what that comment evokes over uploading pictures of your kid to the internet is not the way to convince people. It takes the thing you want people to care about and exaggerates it in a way that makes your viewpoint trivial to dismiss.
I say this from the place of someone who deactivated their social media accounts over similar concerns. This is not the way to convince people.
Idk, agree to disagree in this case. Sometimes people do need to hear the stark words of those they disagree with to reconsider their weakly, or even deeply, held positions. Especially in this forum, where so many people of what I would figure is “higher intelligence” continue to turn a blind eye to the clearly unethical actions of their employers because $$$. Some of them even convince themselves that what they’re doing is somehow not unethical!
Consider the US in the late 2010s and where we are now. Making the (oversaturating) argument that X is basically Y is how we got here. The people who agree with you directionally nod along (because of course it is), and you alienate the ones who don't.
This is abuser rhetoric that’s become increasingly common in conservative circles, akin to “You’re making me do this to you!”
“Woke” individuals (i.e. people who are well-read and critically observant) have been sounding the alarm about warning signs for years, but their message was often twisted and lampooned, leaving an easy out for less critically-observant individuals to mark it as hysteria: “X is basically Y”.
You can find plenty of moderate “woke” voices dating back to the Bush administration warning about objectively concerning trends, especially with regards to the surveillance state and rights to privacy, which is why this thread exists in the first place.
Oh come on, this has nothing to do with being an abuser. You're doing the online millennial version of calling someone a dork. It's the way an entire generation of "left"ists (with no actual leftist principles) learned to bully the people they have a distaste for. Just call them an abuser, a fascist, etc. etc. until the words mean nothing anymore and actual abusers and fascists can get away with it in broad daylight.
No I stand by my careful choice of the word “abuser”. There’s quite literally an overarching movement of actions and rhetoric from conservatives since 2016 that is best analogized as an abusive partner.
You’re actually doing it again in your very comment, ironically, painting it as my fault that things are the way they are, despite the fact that all I’ve done is try to bring attention to things that I find troubling. Just like an abusive partner: “It’s your fault. You’re the reason I have to be violent with you.”
So yes, I will continue to call out actions and rhetoric that can be analogized to an abusive relationship because I believe it’s one of the core moral failings of the current reactionary movement in the US.
Edit: Also, isn't "the boy who cried fascist" a relatively weak argument when the fascists actually do show up, during the exact political movement the boy warned sounded fascist?
Have you heard of the story of the boy who cried wolf? The wolf succeeds because people start ignoring the boy whenever he claims there's a wolf. So no, it's not a weak argument. It's the whole point of the story you're referencing.
Tasteless to you, factually correct to me. Both correct actually.
Look, you do you with your kids; literally nobody in the world cares how great or messed up they turn out, and the result always matches the process, so it's pretty obvious.
But your freedom to do whatever you want stops when you start infringing on the rights of me and my family. The right to privacy is, where I live and in most sane places, enforceable by law. It's also called not being an asshole, or rougher terms to the same effect.
How exactly can a child consent to having their face analyzed and tracked, both by Facebook and its 10,000 ad partners, including automatic ingestion into government databases, then used in countless AI algorithms which may act against them?
They simply are not of sound mind to understand the consequences of such a transaction.
Those settings are IMO likely not doing what you think they are. Or might be doing strictly, precisely what they say they are.
[OFF] "Share data about your Meta devices to help improve Meta products." doesn't preclude sharing data for other purposes.
[OFF] "Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage." doesn't preclude sending them to Meta's cloud for permanent storage.
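The loophole described above can be made concrete with a small sketch. This is purely hypothetical illustration (not Meta's actual code; all names are invented): a consent check scoped to narrowly named purposes can respect both toggles faithfully while still allowing everything that falls outside their wording.

```python
# Hypothetical purposes a vendor might treat as needing no opt-in at all.
ALLOWED_WITHOUT_CONSENT = {"provide_service", "safety", "legal"}

def may_process(purpose: str, toggles: dict) -> bool:
    """Return True if processing for `purpose` is permitted."""
    # Each toggle gates only the exact purpose named in its label...
    if purpose == "improve_products":
        return toggles.get("share_additional_data", False)
    if purpose == "cloud_temporary_storage":
        return toggles.get("cloud_media", False)
    # ...while any purpose not named by a toggle falls through
    # to a default allow-list the user never sees.
    return purpose in ALLOWED_WITHOUT_CONSENT

toggles = {"share_additional_data": False, "cloud_media": False}
print(may_process("improve_products", toggles))  # toggle respected: False
print(may_process("provide_service", toggles))   # ungated purpose: True
```

Both toggles are off, yet "to provide the service" processing still goes through, which is exactly the gap between what the labels say and what users assume they mean.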
Last year they pushed out an update stating that if any "Meta AI" feature is left on, they can access image data for training.
I turned the AI off and used them as headphones and taking videos while biking. After a couple rides, I couldn’t bring myself to put them on because people started to recognize them and I realized I didn’t want to be associated with them (people are right to assume Meta has access to what they see).
Meta Ray Bans, if kept simple, could have been a great product. They ruined them.
After all that has been revealed to us over the past 15 years, it is really disheartening to see people still thinking that setting a few toggles will prevent these companies from abusing them.
Just continues to prove that if you solve a bit of inconvenience for them, people will let you exploit them and their families.
I'll confess I look at Meta Glasses the same as Google Glasses: A big sign saying "punch me in the face". If you enter some premises I'm in while wearing those, I'm either leaving or they will have to come off your face somehow.
Wearing these glasses is just as obnoxious as walking around putting your phone in people's faces while recording.
If a pair of glasses says "punch me in the face" to you, then you have bigger problems. And after you get recorded acting on what it says, those problems might grow. Tell them what you think, but don't forget "Pretty, I feel pretty, ..." - just in case.
Your thinly veiled threat of using the glasses to record and then publish interactions to harass people is exactly why lots of people have issues with these glasses...
I think the most likely case is: this company is labeling images from meta AI use from people who opted-in to share their data with Meta.
It's certainly possible that it's something much more surprising / sinister, but there is a fairly logical combination of settings that I could see a company could argue lets them use the data for training.
I'm also very certain that few users with these settings would expect the images to be shown to actual people, so I'm not defending Meta.
I know some of the criticism of Meta: many people don't like the way their products are optimized for engagement. I've heard about their weird AI bots interacting on their platform as if they were people. And I know people of all political stripes have had complaints about content moderation and their algorithm.
But all of that is within the bounds of the law and their terms of service.
None of it would remotely approach something like: bypassing the well-advertised features in the glasses that show when the camera is in use and secretly recording things to train AI. It's hard to imagine any company's lawyers approving something like that. (this sounds like what many commenters believe is happening)
FWIW, I suspect this is the relevant section of the Privacy policy:
> "When you use the Meta AI service on your AI Glasses (if available for your device), we use your information, like Media and audio recordings of your voice to provide the service."
Meta has consistently and repeatedly shown an absolute lack of respect for user privacy for basically as long as they’ve existed as a company. I’m honestly not certain there’s anything fully out of the question as far as things they might do, regardless of what their policies might say.
They bought a “privacy” VPN app and used it to harvest data, then abused Apple’s enterprise app deployments to continue to ship the app after it was banned from the app store: https://en.wikipedia.org/wiki/Onavo
You missed the case where the Facebook app ran a local webserver on your smartphone, which Facebook's ad trackers would send data to so they could track you across all websites, breaking GDPR and circumventing browsers' third-party cookie controls?
A simple on/off toggle isn't going to prevent them from using your data. If your data is in their server then it's going to be used one way or another. Whether in an anonymous way or shipped to where there are no privacy laws.
Your "Cloud media" setting is off only until the company arbitrarily turns it on for you. Seems crazy now; it won't ten years from now. They're just boiling the frog all the way.
The core issue here is that "to provide the service" in privacy policies has become a catch-all that can justify almost anything.
I work on web products in the EU and we had to redesign our entire data pipeline for GDPR compliance. The key principle is "data minimization" — you collect only what's strictly necessary and delete it after processing.
Meta's approach seems to be the opposite: collect everything, process in the cloud, and use vague language to keep the door open for secondary uses like labeling and training.
The fact that turning off "Cloud media" might not actually prevent your data from being sent to Meta's servers for inference is a textbook dark pattern. Users see a toggle and assume they have control. In practice, the toggle only controls one specific processing path while others remain active.
Under GDPR, this would likely fail the "informed consent" test — consent must be specific, unambiguous, and freely given. But enforcement is slow and fines are just a cost of doing business at Meta's scale.
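The "data minimization" principle described above can be sketched in a few lines. This is a minimal illustration with hypothetical field names, not any real pipeline: keep only the fields the declared purpose strictly needs, process, then delete rather than retain for secondary uses.

```python
# Fields hypothetically required to translate a sign in a photo.
NEEDED_FOR_TRANSLATION = {"image", "target_language"}

def minimize(record: dict, needed: set) -> dict:
    """Drop every field the declared purpose does not strictly require."""
    return {k: v for k, v in record.items() if k in needed}

record = {
    "image": b"...",
    "target_language": "en",
    "gps": (48.85, 2.35),    # not needed to translate a sign
    "device_id": "abc123",   # not needed either
}

minimal = minimize(record, NEEDED_FOR_TRANSLATION)
print(sorted(minimal))  # only the purpose-scoped fields survive
# process(minimal), then delete it -- no retention for labeling/training
```

The contrast with the collect-everything approach is that location and device identifiers never reach the server in the first place, so they cannot later be repurposed.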
I don't trust Zuck at all, and I'm not naive about any of this. I'm sure the words used above are watertight in a court of law, but I bet you there are shenanigans in places where the light doesn't reach.
You might enjoy these conveniences now, but this is just the pre-enshittification stage. Soon enough, to take advantage of those features, you will have advertisements integrated into your view, and your data will be scraped for whatever it's worth to Meta.