From bathroom breaks to sex, Meta’s AI Glasses are watching your most intimate moments.
Meta’s Ray-Ban glasses were sold to the world as an all-in-one AI assistant that could answer a user’s questions, translate foreign languages and film everything in real time.
No doubt the first-person-perspective videos have proliferated across your feed, sometimes of totally banal activities: a concert, a workout.
But what innocent wearers are now discovering is that their footage is not so private after all.
For months, Swedish newspapers Svenska Dagbladet and Göteborgs-Posten have been investigating the glasses and interviewing workers tasked with reviewing footage.
Each recording made with the glasses is streamed directly to an office in Kenya.
Here, rows of workers train AI systems, labelling items, describing them in detail, and monitoring generated answers.
This means all footage recorded with the glasses must be watched by the workers, often in 10-hour shifts, many of them driven to the job by poverty.
Workers filter through clips of bathroom visits, nudity, sex, credit card details and other sensitive moments: footage that, if leaked, could cause an “enormous scandal.”
In one example, the glasses are not even being worn, but instead resting on a table.
Footage of a woman getting undressed is accidentally captured and subsequently kept in Meta’s database indefinitely.
Though the workers in Kenya are uncomfortable, the expectation of work overrides ethics. “If you start asking questions, you are gone,” said one.
“There are cameras everywhere in our office, and you are not allowed to bring your own phones or any device that can record,” said a worker, describing the strict workplace conditions.
The explosive report has triggered action across the globe.
UK data watchdog the Information Commissioner’s Office (ICO) has launched an investigation, and a lawsuit has been filed in the US by plaintiffs Mateo Canu of California and Gina Bartone of New Jersey.
Meta has quickly tried to cover its tracks, claiming that any data from the glasses “is first filtered,” for example by blurring people’s faces.
But workers said this mechanism often failed and people’s faces could be seen in full.
Facial recognition is being sold as the next step for the Big Brother glasses, worsening an already spiralling privacy nightmare.
An internal company memo revealed this feature would be launched “during a dynamic political environment” in which society would be “focused on other concerns.”
It’s clear that privacy is not sacred for Meta and its war to win our minds.