Meta’s futuristic Ray‑Ban AI smart glasses, once marketed as a breakthrough in wearable artificial intelligence and privacy‑centric design, are at the centre of a global privacy scandal that has ignited lawsuits and regulatory scrutiny. Recent investigations have cast a stark light on how the device actually works, and, more importantly, on why its so‑called AI may not be as autonomous as consumers believe.
The controversy revealed that footage and audio captured by the glasses’ cameras and microphones are routinely sent to human contractors in Nairobi, Kenya, for review and annotation. These workers, employed by subcontractor Sama, assist in training the systems that power the glasses’ artificial intelligence.
Exposure of Intimate Footage and Broken Privacy Claims
According to worker accounts made public in these investigations, the content reviewed by human annotators has included users undressing, using bathrooms, entering bank details, and more. Although Meta says it uses automatic blurring to protect identities, whistleblowers and reviewers say the technology frequently fails, leaving faces and other identifying details exposed to human reviewers.
Given the scale of the device’s market, the issue could affect millions of users around the world: seven million pairs of the Ray‑Ban smart glasses were reportedly sold in 2025 alone.

While Meta has maintained that media stays on the user’s device unless shared with Meta AI, the reality is more complex. Data sent through the AI features is routed to Meta’s servers and potentially reviewed by human workers as part of the training and improvement process. The terms of service do permit this type of review, but critics argue that the relevant clauses are buried deep in dense legal language that most users never read.
Lawsuits and Regulatory Action
In the United States, a class action lawsuit was filed against Meta in the Northern District of California, accusing the company of false advertising and violating consumer protection laws by promoting the glasses as “designed for privacy” and “controlled by you” while failing to disclose the true extent of human review of personal footage.
The complaint asserts that no reasonable consumer would expect deeply personal moments to be reviewed by remote human contractors based on the company’s marketing claims. It seeks damages and an injunction requiring Meta to change both its practices and its advertising.
In Europe, the United Kingdom’s Information Commissioner’s Office (ICO) has also reached out to Meta for clarification on its data handling practices as part of a privacy inquiry sparked by the original investigative reporting in Sweden.
The Myth of AI
At the centre of the Meta smart glasses controversy lies a broader issue in technology: the gap between how companies portray AI and how these systems are actually built. Despite sleek marketing around AI that can translate languages or interpret the world around the wearer, much of the heavy lifting behind these capabilities involves human labour, especially the labelling and interpretation of real‑world data.

Industry experts have long warned that many AI systems are not purely autonomous learning entities but are trained and refined by underpaid human workers; in some cases, companies have even had humans pose as chatbots in order to push “AI” products to market. These workers review, correct, and annotate data to teach AI models how to interpret language and images. The smart glasses scandal has thrust that reality into public view.
Critics say the episode underscores how modern AI is not fully intelligent in the autonomous sense: many systems still depend on humans to provide context, clarification, and supervision. This dependency complicates claims about automated privacy protections and raises ethical questions about consent, surveillance, and the hidden human costs of consumer technology.