Analysis

Google ATAP has a blueprint for ambient computing

This story was first published in the Aug. 19 issue of my weekly newsletter

If you live in a smart home, how do you interact with it? Chances are you do so in one of two ways: you speak to a voice assistant or you tap around in a mobile app. Or perhaps you do some combination of both. And your approach likely works (most of the time). But is it the optimal, “smartest” way? I doubt it, and so does Google, at least based on a recent series of tweets from its Advanced Technologies and Projects (ATAP) group.

The first of these tweets surfaced earlier this month, with Google ATAP suggesting it was researching how to “create spatially aware systems for computing.” While the thread references computing in general, it could also be applied to the cornucopia of connected devices in the home. Google has a vision for the smart home of tomorrow, and it’s based on ambient computing that relies on interpreting human movement and intent.

Image courtesy of Google ATAP

That’s especially true when you consider the three dimensions of human movement around computing devices, which ATAP defines as:

  1. Proximity (distance between you and the device)
  2. Orientation (the angle of your body or head relative to the device)
  3. Pathways (certain trajectories you may have in space)

The computing device in each of these scenarios could be a sensor, lamp, smart display or any number of traditional smart home devices. And based on any, or all, of these dimensions, the connected device could intelligently take some action, seemingly on its own.
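To make this concrete, here is a minimal Python sketch of how a device might represent those three dimensions. The names, units, and thresholds are my own illustration, not anything Google ATAP has published:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Pathway(Enum):
    """Coarse trajectory classes a device might infer from motion data."""
    APPROACHING = auto()  # moving toward the device
    LEAVING = auto()      # moving away from it
    PASSING_BY = auto()   # crossing the device's space without stopping


@dataclass
class SpatialContext:
    """One snapshot of a person's position relative to a single device."""
    proximity_m: float      # distance between person and device, in meters
    orientation_deg: float  # body/head angle relative to the device,
                            # in the range -180..180; 0 means directly facing it
    pathway: Pathway        # inferred trajectory

    def is_facing(self, tolerance_deg: float = 30.0) -> bool:
        return abs(self.orientation_deg) <= tolerance_deg
```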

That would be a completely different type of interaction than the present-day methods of voice and touch. Instead of a smart home device doing something the user physically requests, the device could simply respond to its surroundings in a contextual way.

Take orientation and proximity, for instance. If a smart light or lamp knew I was approaching and facing it, it could automatically turn on when the current light levels were low. A similar situation arises when I approach my front door on the way out of the house. Why would I use a voice command or manually unlock the door if the device knows my intention is to leave? It could simply unlock itself in preparation for my action.
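Building on that sketch, here is how those two examples might look as rules over the spatial context. The device handles (lamp, lux_sensor, lock) and the thresholds are hypothetical, just to show the shape of the logic:

```python
def lamp_rule(ctx: SpatialContext, lamp, lux_sensor) -> None:
    """Turn the lamp on when someone walks up to it and faces it
    while the room is dim."""
    if (ctx.pathway is Pathway.APPROACHING
            and ctx.proximity_m < 2.0
            and ctx.is_facing()
            and lux_sensor.read_lux() < 50):
        lamp.turn_on()


def front_door_rule(ctx: SpatialContext, lock) -> None:
    """Unlock the front door as someone heads toward it on the way out."""
    if ctx.pathway is Pathway.APPROACHING and ctx.proximity_m < 1.5:
        lock.unlock()
```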

If this sounds like a theory of ambient computing, it should, because that’s exactly what it is. It’s more than a theory, though: Google’s ATAP group is defining the measurable parameters to make ambient computing the next invisible interface.

I say the “next” one because back when Apple launched Siri in 2011, I defined voice as the invisible interface. The following year brought what I believed was the second iteration of this interface when Google Now arrived on the scene, surfacing contextual information at the right time and place.

Since then, we’ve seen many attempts at moving the invisible interface forward, but none have taken root. Currently, no product, service, or piece of software can fully recognize our intent in the smart home without additional input such as touch or voice.

But there are promising technologies that have been in the works for several years. And together they will help advance the smart home interface to one that’s akin to ambient computing.

Ultra-wideband (UWB) radios are one such example. These high-frequency radio chips can recognize gestures, movement, and distance between a person and a smart device — data that ties in nicely with what the Google ATAP team defines as the three spatial computing dimensions. For now, though, few devices have UWB capabilities. And those that do typically leverage the radio specifically for the use case of locating another UWB device. That’s why you can see your exact distance from a lost Apple AirTag when it’s nearby. And why your iPhone can literally point you in the direction of the tag.
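For a sense of how UWB measures that distance, here is the basic single-sided two-way ranging calculation. Real chips typically use double-sided ranging and correct for clock drift, but the core idea is simple time-of-flight:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0


def uwb_distance_m(t_round_s: float, t_reply_s: float) -> float:
    """Single-sided two-way ranging: the initiator times the full round
    trip, the responder reports how long it held the packet before
    replying, and half of what remains is the one-way flight time."""
    time_of_flight_s = (t_round_s - t_reply_s) / 2.0
    return SPEED_OF_LIGHT_M_PER_S * time_of_flight_s


# A 40 ns round trip with a 20 ns reply delay puts the tag about 3 m away.
print(uwb_distance_m(40e-9, 20e-9))  # ~2.998
```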

But a recent (and timely) project shows promise when it comes to moving beyond this current use of UWB technology.

Back in December, we featured a prototype solution called Point that uses the same proximity and orientation aspects of the Google ATAP effort. This week, according to The Verge, Point is now called Fluid One, and it’s no longer a prototype.

Image courtesy of Point

Fluid One

Fluid One is now a Kickstarter project that includes UWB beacons and a visual interface that lets you control connected devices when your phone is “looking” at them. While you still need a phone in this case, it’s promising nonetheless.

Obviously, we’ll need “sensing” technologies integrated into future smart home devices to bring us to the next level of ambient computing. Aside from UWB, other RF solutions are being tested, such as mesh Wi-Fi signals or Bluetooth, so the smart home can know our in-house location and, possibly, our intent.

Location is an easier nut to crack than intent. But Google ATAP shared another tweet this month suggesting how human intent might be interpreted.

Image courtesy of Google ATAP

Through research and experimentation, the group has defined “a set of interaction primitives inspired by the natural way we communicate with each other.” Essentially, smart devices can learn our intent from the same cues we give when interacting with non-smart objects and other people. The primary interactions Google ATAP is focusing on (roughly sketched in code after the list) are:

  1. Approaching and leaving a device
  2. Turning towards and away from it
  3. Passing by it
  4. Glancing over at it
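None of this reflects published ATAP algorithms, but building on the SpatialContext sketch above, you could imagine deriving these primitives from successive spatial readings. A crude sketch:

```python
def classify_primitive(prev: SpatialContext, curr: SpatialContext) -> str:
    """Crude mapping from two successive readings to an interaction
    primitive. A real system would use longer time windows and learned
    models rather than hand-picked thresholds like these."""
    moving_closer = curr.proximity_m < prev.proximity_m - 0.1
    moving_away = curr.proximity_m > prev.proximity_m + 0.1
    turned_toward = curr.is_facing() and not prev.is_facing()
    turned_away = prev.is_facing() and not curr.is_facing()

    if moving_closer:
        return "approach"
    if moving_away:
        return "leave"
    if turned_toward:
        # A turn that reverses within a second or so would be a "glance";
        # telling the two apart needs more than two samples.
        return "turn_toward"
    if turned_away:
        return "turn_away"
    if curr.pathway is Pathway.PASSING_BY:
        return "pass_by"
    return "none"
```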

Some of these interactions could easily be interpreted through cameras in devices, but from a personal privacy standpoint, that’s far from ideal. When it comes to preserving privacy, RF sensing again fits in nicely.

Google’s Project Soli, for example, can determine whether you’re looking at or away from a device. Soli can also detect movement as well as gestures, so with some refinement it could interpret all four of the above interactions. But you could easily substitute other radio solutions for Google’s specific Soli implementation. The future doesn’t have to, and shouldn’t, depend on a single company’s solution or technology.

There is still one piece of the puzzle to be solved before ambient computing truly becomes the next smart home interface, though. How will these more intelligent devices actually know what to do as we passively interact with them?

I think the Matter standard is the missing piece. A network of Matter devices, each aware of the current state and capabilities of every other Matter device, would get us most of the way there, especially since one device could recognize a person’s intent and then signal that intent to the others.
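To be clear, Matter today defines device state and commands, not an “intent” message, so the event below is purely my own illustration of how such a signal might be fanned out across the home:

```python
import json
import time


def broadcast_intent(person_id: str, primitive: str, target_device: str,
                     publish) -> None:
    """Fan an inferred intent out to the rest of the home. `publish`
    stands in for whatever transport the fabric provides; the topic
    and event schema here are hypothetical, not part of Matter."""
    event = {
        "person": person_id,
        "primitive": primitive,   # e.g. "approach", "leave", "glance"
        "target": target_device,  # the device the person is engaging with
        "timestamp": time.time(),
    }
    publish("home/intent", json.dumps(event))
```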

But Matter and intelligent device sensing alone won’t get us to the finish line. There will have to be some type of initial, likely manual, configuration before the smart home can anticipate our needs and act on them correctly. Think of this like the basic automation rules you might set up in your smart home today: they would give the smart home of the future a baseline from which to deliver on ambient computing.
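That baseline could be as simple as a user-configured table mapping interaction primitives to actions, along the lines of today’s automations. The device names and actions here are placeholders:

```python
# Hypothetical rules a user might configure once, giving the home a
# baseline it can later refine as it learns actual routines.
BASELINE_RULES = [
    {"primitive": "approach", "target": "hallway_lamp",
     "condition": "lux < 50", "action": "turn_on"},
    {"primitive": "leave", "target": "hallway_lamp",
     "condition": None, "action": "turn_off"},
    {"primitive": "approach", "target": "front_door_lock",
     "condition": None, "action": "unlock"},
]
```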

Eventually, our devices will need to learn our individual routines and preferences and adjust their responses accordingly. That means more machine learning at the edge, another parallel project that’s been in the works for the last few years.

So there are still many efforts that need to bear fruit before ambient computing becomes the smart home interface. However, all of the technological pieces, along with the well-defined awareness rules and intents from Google ATAP, could make this a reality. I, for one, can’t wait!

Kevin C. Tofel
