For a home to be truly smart, it should anticipate our needs. It’s not enough to tell Alexa to turn on the lights; we want to walk into a room and have our house turn on the lights for us. And those lights should be dim if it’s the middle of the night and bright if it’s evening and we want to cook a meal.
But getting from today’s clunky automations and voice commands to a real smart home requires systems that can read our intentions and retain context about who and what is in the home. And that is difficult. But last week I saw two products that represent a step toward a real smart home. One helps clarify our intentions by taking in more than one signal, while the other provides deeper context.
When it comes to using more than one signal to determine our intentions, I think of devices as using two-factor authentication for intent, but I’m happy to call it something less clunky. The first example of this 2FA for intent comes from Level, the connected lock company. It launched a new deadbolt that combines a smart lock with capacitive touch or a keycard. The lock uses the phone’s Bluetooth to register that a user is near the door and then waits for a touch before unlocking. This ensures that the person isn’t simply walking by the door, but actually wants it to unlock.
That one extra step is a way to confirm intent in an ambiguous interaction. I think the ambiguity in our homes will only rise as they get more connected devices. I’ve already talked to companies that are thinking about using presence or even gaze to control smart devices in the home. For example, telling a digital assistant to turn on the lamp while looking at a specific lamp provides two ways of registering intent, which can ensure the right lamp turns on.
Today, Google and Amazon will both turn on the lights in the room that my smart speaker is in if I simply tell them to turn on the lights. But my house has an open floor plan, so sometimes lights in a nearby room turn on when a different speaker picks up the command. Adding a second input could help.
Use cases where intent is clear but context is missing are also important to address. iRobot, maker of the Roomba robotic vacuum cleaners, updated its software this week to make its vacuums smarter. The company has been gathering data from its vacuums for years, creating maps of the home. Now, in the app that controls the vacuum, users can label those maps to provide even more context.
This will come in handy when that context meets a spoken command such as, “Clean under the kitchen table.” Thanks to the mapping data, the device has the necessary context to handle the command. Yes, we’re still having to give orders, but as the combined elements of intent and context are embedded into smart homes, we’ll eventually see the rise of more intuitive commands. Just imagine saying “clean up” and having robots come out after dinner to clear everyone’s plates and load the dishwasher. Or at some point, not needing to say anything at all.
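To make the label-plus-command idea concrete, here is a minimal sketch of how a spoken command could be resolved against user-applied map labels. The zone names, coordinates, and matching logic are invented for illustration; this is not iRobot’s actual app or API.

```python
# Hypothetical sketch: user-labeled regions on a vacuum's map, keyed by
# the label text and mapped to a bounding box in map coordinates.
LABELED_ZONES = {
    "kitchen table": (3.2, 1.5, 4.0, 2.4),
    "living room": (0.0, 0.0, 5.0, 4.0),
}


def resolve_clean_command(utterance: str):
    """Match a command like 'Clean under the kitchen table' to a labeled
    map region. Returns (label, region) or None if no label matches."""
    text = utterance.lower()
    for label, region in LABELED_ZONES.items():
        if label in text:
            return label, region
    return None  # no context available: fall back to a whole-home clean
```

The label is the context: without it, “clean under the kitchen table” is just noise to the robot, but once the map carries human-meaningful names, the same utterance resolves to a specific patch of floor.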
For that to work, we’re going to have to layer on a little more intent and a little more context.