Analysis

Ambient computing’s next big problem is intent

Radar used to be large and expensive, as shown on the left. But for Soli, Google shrank radar down to the tiniest chip on the right. Image courtesy of K. Tofel.

This week’s most exciting tech news was probably the inclusion of a radar sensor in the latest Google phone. Project Soli, which lets the phone wake when it detects a face nearby or senses someone reaching for it, may be just a gimmick. But as we embed computing everywhere, the technology behind the sensor is showing up in more and more places, not just phones.

When we put computers in everyday objects and offer multiple modes of interaction, one of the biggest challenges facing those computers (or digital assistants) is to figure out our intent. When I’m typing on my laptop keyboard, the computer clearly knows what I’m trying to do and what device I’m addressing. But think back to the last time you called for Alexa or Google somewhere that had multiple voice assistants. What happened? Did several of them respond? Was it the one you wanted?

Now imagine that in your own home you are surrounded by several computers, each of which could be triggered by a gaze or a gesture. How would those devices — many without dedicated interfaces such as a keyboard or touchscreen — detect your interest and then your intent? This is a unique problem associated with the transition from personal computing to ambient computing.

Personal one-to-one devices won’t become obsolete, but they will be augmented by surrounding computers and multiple ways of interacting with those computers. Google’s Project Soli is just one of many ways companies are experimenting with new interaction models. The 60 GHz radar chip that Soli is built on only works a few feet out, but it can detect movements with millimeter accuracy.

The radar can detect that, for example, a face is nearby, and can then “wake” the computer vision processor that handles face identification. The user experience is that the phone unlocks faster than it otherwise would. On the Pixel 4 handset, Google is also enabling a few gestures to control applications and volume. And unlike the scene in “The Hitchhiker’s Guide to the Galaxy” where a character stays perfectly still in a room to avoid accidentally changing the radio station with an errant movement, the short range means the phone can register intent fairly easily.
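To make the idea concrete, here is a minimal sketch of how that kind of proximity gating might look in software, assuming a radar driver that reports a “reach” event and a separate face-unlock pipeline. The RadarSensor and FaceUnlock names are purely illustrative stand-ins, not Google’s actual Soli APIs.

```python
# Hypothetical sketch: keep the power-hungry vision stack asleep until the
# short-range radar sees intent (a hand or face approaching the phone).
import time


class RadarSensor:
    """Stand-in for a short-range 60 GHz presence/reach detector."""

    def reach_detected(self) -> bool:
        # A real driver would return True when something approaches
        # within the sensor's few-foot range.
        return False


class FaceUnlock:
    """Stand-in for the camera-based face identification pipeline."""

    def wake(self) -> None:
        print("Powering up IR camera and face-ID model...")

    def try_unlock(self) -> bool:
        print("Running face match...")
        return True


def wake_loop(radar: RadarSensor, face: FaceUnlock, poll_s: float = 0.05) -> None:
    """Poll the cheap radar sensor; only wake the expensive face pipeline on a reach."""
    while True:
        if radar.reach_detected():
            face.wake()            # warm up before the user even lifts the phone
            if face.try_unlock():
                break
        time.sleep(poll_s)
```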

Apple has invested in and installed a similar wireless technology, ultra-wideband (UWB), to measure intent in its latest phones. It could be used for fine-grained ranging and location data (the potential for an Apple tracking service is high here) and for securely making payments. Unlike Bluetooth, UWB can verify that a device really is within close range, which makes man-in-the-middle attacks more difficult.

But like Soli, the Apple UWB chip could also be used to provide more context, which could, in turn, make ambient computers smarter. And other devices with UWB chips could communicate with the iPhone. For example, when a person says, “Turn on the light,” the phone signals to the nearest light to turn on.
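As a rough illustration of that “nearest light” resolution, here is how a phone might pick a target device from UWB range estimates. The device list, the range values, and the helper names are made up for this sketch; this is not Apple’s actual API.

```python
# A minimal sketch of resolving "turn on the light" by proximity, assuming
# the phone already has UWB-reported distances (in meters) to nearby devices.
from dataclasses import dataclass


@dataclass
class Device:
    name: str
    kind: str
    range_m: float  # UWB-reported distance from the phone


def nearest(devices: list[Device], kind: str) -> Device | None:
    """Pick the closest device of the requested kind, if any is reporting a range."""
    candidates = [d for d in devices if d.kind == kind]
    return min(candidates, key=lambda d: d.range_m, default=None)


devices = [
    Device("kitchen lamp", "light", 4.2),
    Device("desk lamp", "light", 0.8),
    Device("hallway speaker", "speaker", 2.5),
]

target = nearest(devices, "light")
if target:
    print(f"Turning on {target.name}")  # the desk lamp: the one you're standing next to
```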

Combine this with novel technology such as gaze detection, which I learned about last week in a presentation by Jack Sprouse of Synapse, and it’s possible you could stand near a device, look at it, and control it with little more than a vague command. Such a capability would make even more sense in industrial or enterprise settings, where there are a lot more devices to control.

Texas Instruments has already introduced a 60 GHz radio for industrial automation, using the signals from the radio to help robots navigate factory floors without injuring humans. In the future, with cameras and radar or UWB radios in other frequencies providing fine-grained proximity awareness, a human could command a robot, using words or gestures, in a more intuitive way. For example, instead of using a joystick or letting a robot run a pre-programmed route, a human could step in front of the robot and hold up their palm to tell it to halt.
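A hedged sketch of that kind of decision logic might look like the following, assuming the robot already receives a distance estimate from radar and a gesture label from a vision pipeline. The thresholds, labels, and function name are invented for illustration, not drawn from any vendor’s stack.

```python
# Illustrative proximity-plus-gesture safety logic for a factory robot.
def next_action(human_range_m: float, gesture: str | None) -> str:
    """Decide what the robot should do given human proximity and a recognized gesture."""
    if human_range_m < 0.5:
        return "emergency_stop"     # too close: stop regardless of gesture
    if gesture == "palm_up" and human_range_m < 3.0:
        return "halt"               # explicit stop signal from a nearby human
    if human_range_m < 1.5:
        return "slow"               # someone nearby but no command: be cautious
    return "continue_route"


print(next_action(2.0, "palm_up"))   # halt
print(next_action(4.0, None))        # continue_route
```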

That’s why Soli and Apple’s UWB chip are so neat. Both add fine-grained proximity sensing to the phone, and could help democratize that capability in more places. Proximity is great for finding lost objects, but it’s also great for adding another signal, amid the noise of human interactions, that a computer can use to figure out what we want it to accomplish.

Of course, if we want this to work, we’ll probably have to pick one standard technology and protocol so it can be embedded in everything. That’s the next battle in IoT wireless standards.

 

Want more smart home analysis like this? Subscribe to my newsletter.

Stacey Higginbotham
