Bruce Schneier opened his post about Meta’s AI glasses this week with: “Surprising no one, Meta’s new AI glasses are a privacy disaster.”
When Schneier can’t even muster the energy to act surprised, we’ve probably moved past needing to frame this as a revelation. But here’s what’s actually new: someone built an app to detect when a pair of these glasses is nearby. That’s the story worth paying attention to.
Meta’s Ray-Ban smart glasses can record video, transmit it in real time, and process it through AI systems. They do this without any obvious indication to the people being filmed. They look like sunglasses. The only tell is a small LED that indicates recording – which can be obscured, or simply ignored by anyone who doesn’t know to look for it.
Pair real-time video with AI facial recognition and you get wearable, ambient, person-identification technology that’s indistinguishable from ordinary eyewear at a conversational distance. Face plus AI lookup plus public records equals a home address. That chain works today.
What made Schneier's post worth reading is that he didn't offer a clean solution. He wrote: “I’m not sure what can be done here. This is a technology that will exist, whether we like it or not.”
That’s unusual for a security professional. Usually the frame is “here’s what regulators should do” or “here’s what you can do.” Schneier looked at this one and said: the technology is here, it’ll proliferate, and I don’t have a reassuring answer. That honesty forces a more useful question: if you can’t prevent this from existing, what’s the actual response?
A developer built an Android app that detects nearby smart glasses by listening for the Bluetooth and Wi-Fi signals that connected wearables emit. It won’t catch every pair, and it can’t tell you who is wearing them or whether they’re recording. But it exists, and it appeared within months of these glasses shipping.
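The core of that detection idea is simple to sketch: collect the names that nearby devices advertise over Bluetooth, and flag any that match patterns associated with smart glasses. The sketch below is illustrative only — the name patterns are hypothetical guesses, not verified identifiers, and a real Android app would feed this filter from an actual scanner such as `BluetoothLeScanner`:

```python
# Sketch of the filtering step in a smart-glasses detector.
# NOTE: the patterns below are hypothetical examples for illustration;
# real glasses may advertise different names, randomize addresses,
# or hide their names entirely, so detection is best-effort.

KNOWN_GLASSES_PATTERNS = ("ray-ban", "meta glasses", "stories")

def looks_like_smart_glasses(advertised_name: str) -> bool:
    """Return True if an advertised device name matches a known pattern."""
    name = advertised_name.lower()
    return any(pattern in name for pattern in KNOWN_GLASSES_PATTERNS)

def flag_nearby(advertised_names: list[str]) -> list[str]:
    """Filter a batch of advertised names down to suspected glasses."""
    return [n for n in advertised_names if looks_like_smart_glasses(n)]

if __name__ == "__main__":
    seen = ["Ray-Ban Meta", "JBL Flip 6", "Pixel Buds"]
    print(flag_nearby(seen))  # only the suspected glasses survive the filter
```

This is also why such an app can never be complete: it only sees what a device chooses to advertise, which is exactly why it "won't catch every pair."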
This is what grassroots counter-tech looks like. Not a regulation, not a policy paper – someone wrote an app and put it on their phone. Think about ad blockers: rough at first, the ecosystem complained, advertisers lobbied against them. None of that stopped them from becoming standard. A smart-glasses detector isn’t there yet. But the impulse is the same: if someone can see me without my consent, I’d like to know.
The EU AI Act has provisions around real-time biometric identification in public spaces. Enforcement against individual wearers using consumer devices is a different challenge. In the US, there’s no federal framework that would cleanly apply.
Schneier was right not to offer false comfort. Ambient AI identification technology exists. It’ll get cheaper, better, and smaller. The glasses will eventually look exactly like normal glasses, because that’s where this goes.
The counter-response is forming. It’s just forming slowly, against technology that’s moving fast.