Amazon announced that it sold millions of Alexa devices last weekend, helping to bring digital assistants to more individuals and smart homes. And yet, how “smart” are our smart homes these days, really? Not very when you think about it.
Essentially, the mainstream smart home products that work with hubs or apps, as well as voice assistants, are currently very rules-based. For example, smart home owners generally have to program devices to turn on or off at set times.
I do this with my outdoor lights, having them turn on at dusk and off by 11pm. Devices in the smart home also “react” based on trigger events: When a motion sensor detects movement in my house, the thermostat will run on the “at home” settings I’ve previously set up. The common thread is that all of these device actions require some manual configuration to be smart.
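The schedule-plus-trigger pattern described above can be sketched as a tiny rules engine: every behavior is a manually configured condition-action pair, with nothing learned. The device and action names here are hypothetical, just to illustrate the shape of today's "smart" home.

```python
from datetime import time

# A minimal sketch of a rules-based smart home. Each rule is a manually
# configured (condition -> action) pair; the home never learns anything,
# it only matches incoming events against what the owner programmed.
rules = [
    # Schedule rules: outdoor lights on at dusk, off by 11pm.
    {"when": lambda event: event == ("clock", time(19, 30)),
     "action": "outdoor_lights_on"},
    {"when": lambda event: event == ("clock", time(23, 0)),
     "action": "outdoor_lights_off"},
    # Trigger rule: motion detected -> thermostat runs the "at home" preset.
    {"when": lambda event: event == ("motion_sensor", "movement"),
     "action": "thermostat_at_home"},
]

def handle(event):
    """Return the actions fired by an incoming event."""
    return [r["action"] for r in rules if r["when"](event)]

print(handle(("motion_sensor", "movement")))  # -> ['thermostat_at_home']
```

The point of the sketch is what's missing: every rule had to be written by hand, and an event that matches no rule does nothing at all.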
Yet all of these IoT devices are generating gobs of data. They know when you turn your lights on or off, whether that's done through voice commands or in-app controls. Your router, your set-top box, and your streaming apps all know when you're viewing IP-based video content on a television, and even, in the case of the content app, what you're watching. A smart door lock knows when you might be leaving or coming home, information that can be cross-checked against the GPS data on your phone.
Are smart home devices really using this data yet? I'd argue not really, although the original Nest thermostat was self-learning back in 2011. Unfortunately, few other devices or services have pushed forward from where Nest left off. And that means there's a huge opportunity here.
What's missing from the few scenarios I've outlined above is machine learning and/or artificial intelligence. Put another way, why can't our homes learn how we live our lives in them and truly bring those smarts to the experience?
I could see this happening either in the cloud or on the hubs that run our smart homes, although it’s likely that both will be part of the solution.
On the cloud side, we've already seen examples of predictive or contextual assistance. Google's smart replies in Gmail add value because they predict a few relevant replies based on the content of an email message, which is scanned in the cloud.
And when I open Google Drive before our weekly podcast recording, Google automatically surfaces our show notes spreadsheet because it knows I normally open the document at that time. This type of assistance is better than what we see from Alexa, Google Assistant, Siri and Cortana today because we don’t even have to ask for the help: It’s smartly suggested for us.
For hub-based intelligence, we'll need to bring that type of assistance down from the cloud and onto devices. It's early yet, but Google's TensorFlow Lite was released earlier this month as a way to bring machine learning to mobile and embedded devices. Amazon, too, is entering this space, today announcing AWS Greengrass ML Inference: A way to move machine learning down to the device level. In the short term, I don't envision a sentient-like assistant such as Iron Man's Jarvis in the home. (Although that would be nice: "OK Jarvis, write this blog post for me.")
Instead, I’m hoping we see digital assistants, apps and smart home devices working together to anticipate our needs based on our day-to-day lifestyles.
When I’m not at home and my wife walks downstairs past the thermostat, for example, my home should know it’s her and that she likes our home a little warmer than I do. It would know this because she raises the heat more often than not when I’m not home.
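A first step toward that kind of learning could be as simple as mining the home's own adjustment history: look at what temperature was chosen in the past under the same occupancy, rather than making anyone program a preset. This is a hypothetical sketch with made-up data, not any vendor's actual approach.

```python
from statistics import median

# Hypothetical log of manual thermostat adjustments:
# (who_was_home, temperature_they_set).
adjustment_log = [
    ({"wife"}, 72), ({"wife"}, 73), ({"wife"}, 72),
    ({"me", "wife"}, 69), ({"me"}, 68),
]

def learned_setpoint(present, log):
    """Predict a temperature from past adjustments made under the same
    occupancy, falling back to the overall median when there's no match."""
    matches = [temp for who, temp in log if who == present]
    return median(matches) if matches else median(temp for _, temp in log)

# When only my wife is home, the home would pick the warmer setting
# she has chosen in the past, with no preset programmed by anyone.
print(learned_setpoint({"wife"}, adjustment_log))  # -> 72
```

A real system would also need to identify *who* walked past the thermostat (presence sensing, phone location), but the learning step itself can be this mundane.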
After dinner, I often retire to my office and use a voice command or sensor to turn on the lights. Typically I then fire up one of several streaming apps on my television, unless I'm going to read. In the latter case, I play one of a few SiriusXM channels on a smart speaker. Once my home knows I'm in my office at night, it would be great if a voice assistant asked me which of the two typical activities I'm going to do and then adjusted the environment appropriately: Turning on the TV or playing my favorite music.
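That kind of proactive suggestion doesn't require anything Jarvis-like either; even counting which activities have followed a given context in the past would let an assistant ask "TV or music?" instead of waiting for a command. A minimal sketch, with hypothetical context and activity names:

```python
from collections import Counter

# Hypothetical activity history keyed by context: (room, time_of_day).
history = [
    (("office", "evening"), "streaming_tv"),
    (("office", "evening"), "streaming_tv"),
    (("office", "evening"), "siriusxm_radio"),
    (("living_room", "evening"), "streaming_tv"),
]

def suggest(context, history, top=2):
    """Return the most frequent past activities for this context, so an
    assistant can proactively offer them instead of waiting for commands."""
    counts = Counter(activity for ctx, activity in history if ctx == context)
    return [activity for activity, _ in counts.most_common(top)]

print(suggest(("office", "evening"), history))
# -> ['streaming_tv', 'siriusxm_radio']
```

Frequency counting is obviously cruder than what TensorFlow Lite or Greengrass-style on-device models could do, but it shows how far short of "learning" most current devices fall: even this isn't happening yet.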
Until this happens, though, I'll have to keep programming my devices even though they have enough data to adjust on their own. And I'll keep barking commands at my digital assistants to make things happen, since they never initiate a conversation based on all of the information they have at their disposal.