Meta Glasses Send Nude, Bathroom Photos to Kenyan Workers for Review Without Users’ Knowledge
Meta Glasses send video and audio recordings to human reviewers in Nairobi, Kenya, according to a recent investigative report from Svenska Dagbladet, a Swedish outlet. This includes sensitive recordings of naked bodies, bathroom activities, and unblurred bank card numbers. “In a large office complex, long rows of employees sit in front of computer screens,” Svenska Dagbladet reports. Thousands of people work at this center for a Meta subcontractor, Sama, reviewing the images and audio sent from across the world by often unsuspecting users of Meta’s AI glasses.
The investigation included interviews with workers at the data annotation center who have all signed non-disclosure agreements and thus are speaking anonymously to the journalists.
“We see everything – from living rooms to naked bodies,” one worker said. “Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me.”
The terms of service state that information from conversations with “Meta AI and other AI-powered features” will be reviewed by humans and machine processes, depending on the user’s settings. The ToS also states that the AI may store and use information shared with it; according to the reporters, this storage is mandatory, with no way to opt out.
Faces in videos and images that are sent for human review are “blurred automatically,” according to a former Meta employee, but that does not always work, and the human reviewers are sometimes able to see the faces of the people in these often revealing images and videos.
Workers describe how uncomfortable it is to see into the private lives of people who are likely unaware that this data is being shared with, viewed, and heard by strangers across the world.
“When you see these videos, it feels that way. But since it is a job, you have to do it,” one worker said. “You understand that it is someone’s private life you are looking at, but at the same time, you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone.”
A Time Magazine report from 2023 noted that OpenAI was using Sama as a subcontracting agency for data annotators to help make ChatGPT less toxic. The workers on Sama’s ChatGPT project were earning between $1.32 and $2 per hour, while OpenAI was paying Sama $12.50 an hour for the work, based on the contract details reported by Time.
Workers in low-wage countries make up the backbone of the AI infrastructure as companies like Meta and OpenAI expand their offerings.
EssilorLuxottica, an Italian eyewear company, sold 7 million pairs of Meta AI glasses in 2025 – more than three times the previous year’s total. Meta is reportedly working with the company on a deal to double production to 20 million pairs by the end of this year, as there is “unprecedented demand” in the US for the new technology.
IT and security experts say the data collected by Meta is far more valuable than the income it generates from the sale of the glasses, which in some cases retail for as much as $799 for the most advanced edition. The data can be used to learn intimate details about consumers to improve algorithms and ad sales.
In response to the investigative report, a lawsuit has already been filed against Meta in the Northern District of California, San Francisco. The filing states, “This nationwide class action seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company’s AI data collection pipeline. Consumers purchased these Glasses in reliance on Meta’s privacy assurances. They did not, and could not reasonably, understand that their bedrooms, bathrooms, families, bodies, and more would be exposed to strangers around the world. Meta’s conduct violates state consumer protection laws, offends basic notions of privacy, and exemplifies the kind of AI-era surveillance harms that demand accountability.”
While there are many concerns about human reviewers in Kenya viewing footage recorded without the user’s knowledge, there are also concerns about the potential misuse of these glasses by people knowingly recording and collecting information about others without their knowledge or consent. A small LED on the glasses lights up when they are recording, but it is easily missed.
The New York Post reported that two Harvard students fed video from the glasses into facial recognition software and were able to identify people and find their home addresses. Meta is reportedly looking to add facial recognition software directly into the glasses by the end of this year. This combination of technologies has been referred to as “every stalker’s dream.”
This report comes as Meta CEO Mark Zuckerberg faces multiple lawsuits alleging failures to keep children safe on his social media platforms, Instagram and Facebook. Meta has internal research showing the harms of excessive use, yet continues to run campaigns encouraging children to spend more time on its platforms. Sama and Meta workers, along with class action lawyers, allege that Meta is aware of the scale at which its product collects sensitive information without proper informed consent. The company is pressing on and looking to add facial recognition software, which would create significant safety issues, as stalkers and scammers could quickly obtain personal information about strangers.
Meta insists that the default setting for the glasses is not to send information to data annotators for human review and that the user has full control over what happens with this data. The company further states that according to the privacy policy, it is the user’s responsibility to use the glasses in a “safe and respectful manner.”