Analysis

Surprisingly, it’s Amazon, not Google, bringing more context to the smart home

I’ve long considered context to be one of the most important concepts when it comes to interacting with data. Before the first Android Wear watch debuted, I noted that Google Now could provide valuable contextual information at a glance, for example. Stacey and I have also talked about context in the smart home for a few years on the IoT Podcast. And now we’re finally moving forward when it comes to context in the smart home. Only the company that’s leading the way to deliver it isn’t the one you might expect.

It’s Amazon, at least as of today, which is a bit surprising. I thought for sure that all of the search and data analysis that Google has in-house would mean it would lead the way here. Nope. Based on the recently announced Amazon Echo devices and services, Amazon is inching closer to making our smart homes more like the fictional Tony Stark’s digital assistant, Jarvis.

Need an example of what I mean by context? Take this review of the JBL Smart Display, which is powered by Google, as Exhibit A:

Recipes are one of the most popular uses for a display-equipped smart speaker, but the way this works with the Link View is somewhat awkward: You must say the “Hey Google” or “OK Google” wake word for each and every step of the process (you can also touch a next step button on the display, but that will be awkward if you’ve been using your hands to cut or mix ingredients). For the Pasta Primavera recipe I tried, that meant saying “Hey Google, next step” 17 times. Amazon’s Echo Show doesn’t require the wake word in this way. Once you’ve started the steps in the recipe, you just say, “next step.”

I have the similar Lenovo Smart Display and have never used it for recipes. And I don’t know if this reviewer had enabled the “continued conversation” feature of Google Home.

Regardless, if I’m using a device for a recipe with 17 steps, the device should know — read: understand through context — that while I’m following the recipe steps, saying “next step” should just work natively. There’s no need to say a wake word before every step. Put another way: if I were teaching my son Tyler how to install a smart switch step-by-step, I wouldn’t say, “OK Tyler…” before each step, because we’re in the context of a multi-step process.
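To make the idea concrete, here’s a minimal sketch of a session-scoped context: while a recipe session is active, a bare “next step” is accepted without any wake word. This is purely illustrative — not Google’s or Amazon’s actual API — and the class and method names are ones I made up.

```python
# Illustrative only: a "recipe session" that accepts follow-up commands
# without a wake word while the multi-step context is active.

class RecipeSession:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0
        self.active = True

    def handle_utterance(self, utterance: str) -> str:
        # Inside the session context, no wake word is required.
        text = utterance.strip().lower()
        if text == "next step":
            self.index += 1
            if self.index >= len(self.steps):
                self.active = False
                return "That was the last step. Enjoy!"
            return self.steps[self.index]
        if text in ("stop", "cancel"):
            self.active = False
            return "Okay, ending the recipe."
        return f"Still on step {self.index + 1}: {self.steps[self.index]}"


session = RecipeSession(["Boil the pasta.", "Chop the vegetables.", "Toss and serve."])
print(session.handle_utterance("next step"))   # -> "Chop the vegetables."
print(session.handle_utterance("next step"))   # -> "Toss and serve."
```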

Here’s another example, specifically relating to what Amazon is bringing to its Alexa services through a new Smart Home Skills API. Today, when I use Alexa to preheat my June Oven, I have to say, “Alexa, tell June to preheat the oven.” It works great. But the June Oven team can update their skill to provide what Amazon calls “natural invocation.”

That means I’ll be able to say, “Alexa, preheat the oven.” It sounds like a small tweak, but it’s super helpful and illustrates the power of contextual conversation. After all, if someone has a smart oven, they’re likely to have only one. And to be honest, who cares what its name is? You just want your oven to work.

And if the oven is ever swapped out for another brand, in today’s world, everyone in the home needs to learn the new name. Why? It’s an oven and you want it to preheat, so your smart home should just do it with a basic voice command.
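Here’s a rough sketch of what natural invocation could look like under the hood: the assistant resolves a generic command to the single device of that type in the home, so the brand name never has to be spoken, and swapping the oven only means updating the device registry. This is a hypothetical illustration, not the actual Smart Home Skill API, and the device list and types are made up.

```python
# Illustrative sketch: resolve "preheat the oven" to the one registered
# device of type OVEN, regardless of its brand or skill name.

DEVICES = [
    {"name": "June Oven", "type": "OVEN"},
    {"name": "Kitchen lamp", "type": "LIGHT"},
]

def resolve_by_type(device_type: str):
    matches = [d for d in DEVICES if d["type"] == device_type]
    # Most homes have exactly one oven, so the match is unambiguous.
    return matches[0] if len(matches) == 1 else None

def handle_command(command: str) -> str:
    if "preheat the oven" in command.lower():
        oven = resolve_by_type("OVEN")
        if oven:
            return f"Preheating {oven['name']}."
    return "Sorry, I'm not sure which device you mean."

print(handle_command("Alexa, preheat the oven"))  # -> "Preheating June Oven."
```

If the June Oven is ever replaced by another brand, only the registry entry changes; the voice command everyone already uses stays the same.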

This idea extends to smart lights in the home as well. You can already tell Alexa to “turn on the lights” and she’ll turn on all the lights grouped with a particular Echo device. Alexa uses the context of your location — interpreted by which Echo you’re talking to — to eliminate the need to say which lights you want turned on.

This could easily be extended to Fire TVs throughout the home. If you’re sitting in the living room and say, “Alexa, turn on the TV,” she should know that you don’t want the bedroom set powered up because she understands the context of your request.
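A simple way to picture this location context: each Echo knows which devices are grouped with it, so “turn on the lights” or “turn on the TV” only needs to be resolved within that room. The groups and device names below are made up for illustration; this isn’t Amazon’s actual grouping API.

```python
# Illustrative sketch: the Echo that heard the request implies the room,
# so a generic command only touches devices grouped with that Echo.

GROUPS = {
    "living_room_echo": ["living room lamp", "living room tv"],
    "bedroom_echo": ["bedroom lamp", "bedroom tv"],
}

def turn_on(echo_id: str, keyword: str) -> str:
    # Filter the room's devices by what the user asked for ("lamp", "tv", ...).
    targets = [d for d in GROUPS.get(echo_id, []) if keyword in d.lower()]
    return "Turning on: " + ", ".join(targets) if targets else "Nothing to turn on here."

print(turn_on("living_room_echo", "lamp"))  # only the living room lamp
print(turn_on("living_room_echo", "tv"))    # only the living room TV
```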

Amazon’s new device setup is also part of the contextual picture. Because Amazon stores your home’s Wi-Fi password, adding a new device removes manual steps through automation. You’ll plug in or power on a new Echo-compatible device, which will ping your existing Echo without any interaction on your part. Alexa will tell you that a new device was discovered and she’ll add it to the list of things you can control by voice.

How does this work? Context.

By allowing new devices to broadcast their initialization intent, Alexa knows there’s a new node in the smart home network. She then makes sure there’s connectivity to the device and offers you the chance to name it. There’s no need to go into the Alexa app; you simply tell Alexa what to call the device or what group it should be in. This all happens within the context of a setup process, and it’s user-friendly. Having set up dozens of devices with my Google Home, I can’t say the same for that product because setup is still a manual process stuck inside the mobile app.
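Conceptually, the flow looks something like the sketch below: the new device broadcasts a setup beacon, the existing Echo provisions it with the Wi-Fi credentials Amazon already stores, and the only thing left for the user is to name it by voice. This is my rough illustration of the flow, not Amazon’s actual setup protocol, and every name in it is hypothetical.

```python
# Illustrative sketch of a zero-touch setup flow: beacon -> provision -> name by voice.

from dataclasses import dataclass

@dataclass
class SetupBeacon:
    device_id: str
    device_kind: str  # e.g. "smart plug", "bulb"

def on_beacon(beacon: SetupBeacon, stored_wifi: dict) -> str:
    # 1. Provision the device with the household's stored Wi-Fi credentials.
    print(f"Joining {beacon.device_id} to {stored_wifi['ssid']}...")
    # 2. Confirm connectivity, then hand off to a voice prompt for naming.
    return f"I found a new {beacon.device_kind}. What would you like to call it?"

prompt = on_beacon(SetupBeacon("plug-0042", "smart plug"), {"ssid": "HomeWiFi", "psk": "..."})
print(prompt)
```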

Amazon seems more intent on removing the smart home pain points we face today, eliminating complexity and steps that the smart devices themselves can handle. That’s going to help advance smart home adoption and get us closer to the point where the smart home can do more for us on its own.

Context is also at work in the new Alexa Hunches feature, coming later this year. Amazon explains it best:

As you interact with your smart home, Alexa learns more about your day-to-day usage and can sense when connected smart devices such as lights, locks, switches, and plugs are not in the state that you prefer. For example, if your living room light is on when you say “Alexa, good night,” Alexa will respond with “Good night. By the way, your living room light is on. Do you want me to turn it off?”
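A hunch, in other words, is a comparison between the current state of your devices and the state Alexa has learned you prefer at that moment. Here’s a rough sketch of the idea; the learned preferences and device states are made up, and this is not Amazon’s actual implementation.

```python
# Illustrative sketch of a "hunch": flag devices whose current state differs
# from the state the assistant has learned you prefer at bedtime.

LEARNED_NIGHT_STATE = {"living room light": "off", "front door lock": "locked"}

def good_night_hunches(current_state: dict) -> list:
    hunches = []
    for device, preferred in LEARNED_NIGHT_STATE.items():
        actual = current_state.get(device)
        if actual is not None and actual != preferred:
            hunches.append(f"By the way, your {device} is {actual}. "
                           f"Do you want me to set it to {preferred}?")
    return hunches

for line in good_night_hunches({"living room light": "on", "front door lock": "locked"}):
    print(line)  # -> offers to turn off the living room light
```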

Google is, of course, working on this as well. Back in May, Google’s Mark Spates gave a presentation on how device makers will get artificial intelligence (AI) into their devices. And by AI, he’s talking about personalization and context, combined with a similar learning model: Spates said devices “can learn from every interaction.” That’s already happening at some level with Google’s digital assistant products; the new continued conversation feature is a good example from a contextual standpoint.

To be sure, we’re not yet at the point where our homes will predict our needs or take action on their own based on sensor data, although Hunches is a first step; we’ll have to see how well it actually works. But it’s interesting that Amazon seems to be creating a learning model for our local devices that is essentially “learning at the edge,” even if the machine learning (ML) takes place in the cloud and the models are sent back to our Echo devices. This should bring an intuitive, highly personalized experience to the smart home.

Intuition — through AI and ML — combined with sensor data is what will make the smart home truly “smart,” based on presence, learning models, device states, time, and interpretation of our needs from voice commands. Somehow, it appears that Amazon is beating Google to the punch when it comes to this kind of context in the home.

Kevin C. Tofel
