It’s hard to believe that Google Glass debuted nearly a decade ago. I was one of the early “Glass Explorers,” which is a far kinder name than the other ones that were used to describe people like me at the time. Google Glass both ushered in the smart glasses movement and quickly died off as a consumer product. Just two years after its introduction, save for a niche enterprise model, Google Glass production ceased.
Mainstream iterations of smart glasses from other brands have since followed. Amazon has its Echo Frames, which didn’t wow me. I never even bothered with Snap Spectacles. And Meta’s Ray-Ban Stories, a product I consulted on for Meta, are not something I would have bought on my own.
Obviously, I’m bearish on smart glasses. Or am I?
I actually have high hopes for the form factor, mainly because companies can learn from the mostly failed products of the past. They can see what use cases make sense, partly by already seeing which ones didn’t, and create compelling products that meet broader needs. More importantly, there’s newer technology available now that can be used in more intelligent ways to add features people want or need.
Take this ingenious product idea from entrepreneur Tom Austad, which uses smart glasses not to see better, but to hear better. Five years ago, Austad had trouble hearing his mates in a crowded London pub. Between the background noise, other conversations, and clinking glasses, he couldn’t follow the discussion at his own table. He wondered why the microphones on most hearing aids are placed behind the ear when what you really want to hear is in front of you.
And so his concept of smart glasses with 16 microphones and a built-in camera was born, as was Oculaudio, the company that offers them.
The camera isn’t used for snapping pics or videos. That was the main purpose of the camera on the ill-fated Google Glass and a key reason people didn’t want it: it’s a privacy nightmare for the people around you, who may not even realize they’re being recorded.
Instead, Austad’s spectacles use the camera to identify the face of whoever is speaking. An algorithm run against the camera input steers the microphones toward where that speaker’s voice is coming from. The glasses attenuate background sounds outside the wearer’s field of vision and amplify the conversation in front of them.
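Under the hood, this is classic beamforming: once the camera fixes the speaker’s direction, each microphone channel is delayed so the wavefront arriving from that direction lines up across all 16 mics, then the channels are summed, which reinforces the target voice and averages down everything else. Here’s a minimal delay-and-sum sketch in Python; the function name, the array layout, and the assumption that the camera supplies a unit direction vector are all mine for illustration, not Oculaudio’s actual design.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Steer a mic array toward `direction` (unit vector toward the speaker)
    by delaying each channel so the target wavefront aligns, then summing.

    signals: (n_mics, n_samples) audio, mic_positions: (n_mics, 3) in meters,
    fs: sample rate in Hz, c: speed of sound in m/s.
    """
    # A mic farther along `direction` hears the speaker earlier, so it
    # needs a larger compensating delay (projection of position onto
    # the look direction, converted to samples).
    delays = mic_positions @ direction / c * fs
    delays -= delays.min()  # make all delays non-negative

    n_mics, n = signals.shape
    out = np.zeros(n)
    for sig, d in zip(signals, np.round(delays).astype(int)):
        out[d:] += sig[: n - d]  # shift each channel, then accumulate
    return out / n_mics  # average: aligned speech keeps full amplitude
```

With two mics a tenth of a meter apart on the look axis, an impulse that reaches the nearer mic five samples early comes out of the beamformer as a single aligned impulse at full amplitude; sound from other directions stays smeared and attenuated.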
This is a far more compelling use case than what prior and current smart glasses offer. It adds more value and doesn’t scare people into a privacy frenzy. Put another way, it’s a clever example of how to combine technology and style in a product that’s actually a step up from current solutions. In this case, smart glasses could effectively replace hearing aids.
Along the same lines is a product from XRAI Glass, a startup building smart glasses to let deaf people see what people are saying.
This wearable doesn’t have a camera. And it only does one thing: listen to conversations and display subtitles of those conversations on a small screen inside the frames. Yes, you can do this sort of thing with a phone today, but implementing it in a wearable device is far more natural and easier to use.
And to be honest, I think a future iteration from XRAI Glass could be the real “killer product.” That’s because for now, you have to connect the glasses to a mobile phone for them to work. The frames rely on the phone’s processing power to translate spoken words into viewable text.
But it’s not hard to imagine customized, low-power embedded chips that focus solely on this activity. With those, XRAI’s glasses become a standalone smart wearable that can vastly improve the lives of millions of people.
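Wherever the speech recognition runs, fitting the captions onto a tiny in-frame screen is its own problem: the display holds only a line or two, so text has to be wrapped and scrolled as recognized words stream in. Here’s a toy sketch of that display logic in Python; the function, its parameters, and the line limits are hypothetical illustrations, not XRAI Glass’s actual firmware.

```python
def caption_lines(words, max_chars=24, max_lines=2):
    """Wrap a stream of recognized words into short subtitle lines
    sized for a small in-frame display, keeping only the newest lines."""
    lines, current = [], ""
    for w in words:
        candidate = (current + " " + w).strip()
        if len(candidate) > max_chars and current:
            lines.append(current)  # line is full: push it up the display
            current = w
        else:
            current = candidate
    if current:
        lines.append(current)
    return lines[-max_lines:]  # the tiny screen shows only the newest lines
```

Fed the words of a sentence one at a time, it returns the last couple of display-width lines, which is exactly the scrolling behavior a heads-up subtitle view needs.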
What about smart glasses in the home?
I don’t know of any companies looking to replace smartphones, but the technologies are there when it comes to controlling the smart home. Today we interact with our home devices through voice; physical hardware, such as buttons and switches; or the apps on our phone. And it works quite well. But it could be better.
What if you don’t have or want smart speakers with a digital assistant? Or maybe you’re the kind of person who doesn’t carry a phone everywhere in the house. (I’m looking at you, Stacey!) What are your options?
Imagine a stylish set of specs that don’t have a camera but instead integrate an ultra-wideband (UWB) radio and some discreet LED indicators. We’ve already seen demos that use the UWB chip in a phone to “see” and control the different connected devices in a home.
The same solution could be implemented in glasses form, and it makes perfect sense to me. If I want to control a device in my home, looking at it signals my intent to control it, and a UWB radio can register that intent. To confirm exactly which device I want to interact with, an LED inside the top of the glasses frame could light up; that’s useful when multiple smart devices sit close to each other, because the LED could focus my intent on the proper one.
Add in a low-power Thread radio, and then it’s simply a matter of issuing a voice command or tapping a capacitive side panel on the glasses frame. That touch technology is already several years old and used on many current smart glasses. This solves the problem for people who don’t want a handful of smart speakers in the home, and it doesn’t require carrying a phone throughout the house.
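The look-to-select step is simple geometry: if UWB ranging yields a direction to each registered device, the glasses just pick whichever device sits closest to the wearer’s gaze axis, within some cone, and light the LED for that one. A hypothetical sketch, assuming UWB supplies per-device unit vectors in the glasses’ frame of reference (device names and the cone width are illustrative):

```python
import math

def pick_target(gaze, devices, max_angle_deg=15.0):
    """Return the name of the device the wearer is looking at, or None.

    gaze: unit 3-vector along the glasses' forward axis.
    devices: {name: unit 3-vector toward that device}, e.g. from UWB
    angle-of-arrival. Only devices within `max_angle_deg` of the gaze
    qualify; the closest one wins.
    """
    best, best_angle = None, max_angle_deg
    for name, vec in devices.items():
        cos = sum(g * v for g, v in zip(gaze, vec))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
        if angle < best_angle:
            best, best_angle = name, angle
    return best  # None means no LED: nothing is inside the cone
```

Looking straight at the lamp selects the lamp even if a TV sits 25 degrees off to the side, which is exactly the disambiguation the LED indicator would confirm.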
Look, we’ve had enough of the glasses that offload image and video capture from a phone to the face. The market has spoken: Most people don’t want that.
Companies have an opportunity right now to use newer, advanced technologies to create real value in smart glasses products. They can evaluate the everyday pain points against the latest chips, wireless protocols, and algorithms to move beyond the “novel but not innovative” use cases. As that happens, smart glasses will finally move from being the next big thing to being completely awesome.