Imagine walking through your campus or a local mall when someone wearing a pair of stylish sunglasses instantly knows your name, your social media profiles, and where you work, all without you ever saying a word. This isn’t a scene from a sci-fi movie; it’s a feature Meta is reportedly preparing to launch.
According to a report from The New York Times, Meta plans to add facial recognition technology to its Ray-Ban smart glasses. The feature, internally nicknamed “Name Tag,” would let wearers identify people in real time using Meta’s AI assistant.

The Privacy Concern
Privacy advocates are raising the alarm. An internal Meta document leaked to the Times suggested the company planned to launch the feature during a period of “political chaos,” betting that civil society groups would be too distracted to push back.
Mario Trujillo of the Electronic Frontier Foundation (EFF) didn’t hold back: “Meta’s conclusion that it can avoid scrutiny by releasing a privacy invasive product during a time of political crisis is craven and morally bankrupt.”
This isn’t Meta’s first brush with facial recognition controversy. The company has already paid nearly $7 billion in settlements over its earlier facial recognition systems and privacy practices, including $5 billion to the FTC and large payouts to states such as Illinois and Texas.
A Practical Guide to Privacy
At Adafruit, we’ve been tracking these developments closely. We’ve documented Wegmans’ facial recognition deployment and built practical ways to fight back, like IR-blasting hats that confuse camera sensors (a minimal sketch of the idea follows below).
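For readers who want to tinker, here’s a minimal CircuitPython sketch of the IR-blasting idea. The wiring is an assumption, not from the original post: a bank of 940 nm IR LEDs switched through a MOSFET whose gate is driven from a hypothetical pin D5 on a Feather-class board. How well it works depends on the target camera’s IR filtering, and your exact pins and driver circuit will vary.

```python
# Minimal CircuitPython IR-strobe sketch (hypothetical wiring).
# Assumption: pin D5 drives the gate of a MOSFET that switches a
# bank of 940 nm IR LEDs sewn into a hat brim.
import time

import board
import digitalio

ir_leds = digitalio.DigitalInOut(board.D5)
ir_leds.direction = digitalio.Direction.OUTPUT

while True:
    # Strobe the LEDs: invisible to people nearby, but bright enough to
    # wash out faces on many IR-sensitive camera sensors.
    ir_leds.value = True
    time.sleep(0.05)
    ir_leds.value = False
    time.sleep(0.05)
```

Swap `board.D5` for whatever pin your board actually uses; the strobe timing is arbitrary and worth experimenting with against the cameras you care about.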

Meta claims the LED indicator light on the glasses notifies bystanders when they are being recorded. However, reporting from 404 Media shows that it is trivially easy to defeat this light with a $15 sticker or a simple hardware hack. Unlike Apple, which wires the MacBook camera LED directly into the camera’s power circuit (making it impossible to use the camera without lighting the LED), Meta’s design allows the light to be bypassed.
The Ethics of Engineering
For students studying AI, computer science, and engineering: this is a major ethical crossroads. As the next generation of developers, you have the power to decide what kind of tech gets built.
As the original Adafruit post points out, demand for your skills is at an all-time high. You don’t have to be the person who wires up mass facial recognition for a company with a questionable track record. You can choose to work on projects that respect user privacy and build trust.

The Bottom Line
Meta’s history with data is complicated. From the “They ‘trust me.’ Dumb f***s” era to accidentally banning Ladyada because an AI misidentified her as a magazine photo, the message is clear: trust is not a product you can keep selling once it’s been broken.
Stay curious, keep building, but always keep an eye on the ethics of the code you write.
