It turns out that much of the footage recorded by Meta Ray-Ban AI smart glasses is being sent to offshore contractors for data labeling, a process that involves human reviewers seeing some incredibly private and intimate moments from users’ lives. This comes from a report by a Swedish news website, Svenska Dagbladet.
These smart glasses have really taken off, selling over seven million pairs in 2025 alone. That’s a huge jump compared to the two million sold in 2023 and 2024 combined. They let you record first-person video and audio, and even analyze the world around you using Meta’s AI model. While that sounds pretty cool, the hardware has definitely kicked off a heated debate, with critics raising concerns about potential facial recognition features and Meta’s past privacy issues.
The reality of how these AI models get trained is often messier than tech companies let on. Many people don’t realize that to make AI smarter, companies employ human contractors to review and annotate footage. It’s a hugely resource-intensive process, and it’s happening with footage from Meta’s AI glasses. Contractors based in Nairobi, Kenya, revealed in the investigation that they’re being asked to review some seriously sensitive data.
This trend is reminiscent of social media content moderation, which has relied on similarly exploitative labor practices for years.
One contractor for a company called Sama shared a pretty shocking detail, saying, “In some videos you can see someone going to the toilet, or getting undressed.” This individual believes users wouldn’t be recording if they knew this was happening. Another data annotator mentioned seeing a video where a man put his glasses on a bedside table, left the room, and then his wife came in and changed her clothes. Other footage included images of people’s bank cards, users watching adult content, and even entire “sex scenes.”
Employees also described feeling pressured to continue this work. One said, “You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work.” They added that questioning the process could lead to losing their job.
Meta’s AI terms of use do mention that the company reserves the right to “review your interactions with AIs,” including conversations and messages, and that this review can be “automated or manual (human).” The document also advises users not to share information they “don’t want the AIs to use and retain, such as information about sensitive topics.” However, given the kind of footage contractors are seeing, it’s clear many users aren’t aware of this crucial advice.
What’s even worse is that owners of Meta’s AI glasses don’t have an option to use the AI features without agreeing to share data with Meta’s remote servers. Once that data is sent, it’s often too late to take it back. As data protection lawyer Kleanthi Sardeli explained, “Once the material has been fed into the models, the user in practice loses control over how it is used.”
After two months of silence, a Meta spokesperson eventually responded to the investigation by pointing to the company’s terms of use and privacy policy, stating, “When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy.”
Published: Mar 5, 2026 02:30 pm