Voice control is the next big interaction model for computers. While it’s not appropriate for every task, it’s excellent for controlling simple devices without the use of keyboards and screens. And since we’re putting computing into everything from socks to dishwashers, voice control is poised to become the equivalent of the mouse for computing everywhere.
But today, adding voice typically means adding an internet connection, which brings extra expense, privacy worries, and usability headaches for consumers who don’t want to install an app for everything in their home. There is hope, though. Synaptics believes that in the next year or two, companies will release products featuring its new voice chip, which offers natural language processing without the need for cloud-based transcription.
Devices with the Synaptics voice chip and a microphone will accept voice commands without needing a Wi-Fi or Bluetooth connection. That means you could come home and turn on a lamp by telling the lamp directly, without going through Alexa or Google. More interestingly, you could operate devices that have multiple but limited functions and settings using a direct verbal command. For example, you could tell your washing machine to run a cold, permanent-press wash for a load of sweaters without ever having to figure out what all the dials and settings on the washer mean.
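To make that washing machine scenario concrete, here’s a toy sketch of what on-device intent parsing might look like. This is purely illustrative: the settings table, the `parse_command` function, and the keyword-matching approach are my assumptions for demonstration, not Synaptics’ actual pipeline, which would run far more sophisticated language models on dedicated silicon.

```python
# Hypothetical sketch: mapping a transcribed voice command to appliance
# settings entirely on-device, with no cloud round trip. Keyword matching
# stands in for real natural language understanding.

WASH_SETTINGS = {
    "temperature": {"cold": "cold", "warm": "warm", "hot": "hot"},
    "cycle": {"permanent press": "permanent_press",
              "delicate": "delicate",
              "normal": "normal"},
}

def parse_command(utterance: str) -> dict:
    """Extract washer settings from a plain-language command."""
    utterance = utterance.lower()
    settings = {}
    for slot, keywords in WASH_SETTINGS.items():
        for phrase, value in keywords.items():
            if phrase in utterance:
                settings[slot] = value
    return settings

print(parse_command("run a cold permanent press wash for my sweaters"))
# → {'temperature': 'cold', 'cycle': 'permanent_press'}
```

The point of the sketch is the privacy property the article describes: everything from the transcript to the extracted settings stays on the appliance, so no voice data ever leaves the house.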
There are limits to how smart these devices could be, but in most cases, you don’t need to ask your lamp the weather when you have a digital assistant on a phone or even a smart speaker. And if you don’t want Alexa knowing every time you turn on your living room light, these future products could be a welcome addition to the home as they would allow you to yell from the couch without having to share your data.
According to Saleel Awsare, senior vice president and general manager in the IoT division at Synaptics, the company will unveil a list of devices capable of on-device speech recognition using the new chip later this year. Synaptics already provides voice recognition chips for Amazon, Google, and Apple, so it has a strong history with consumer brands. Of the three, Google has been experimenting with local natural language processing and has recently enabled voice transcription on Pixel handsets without needing the cloud.
I’ve tried the feature, and it’s very good. However, voice transcription is not the same as natural language understanding, which is what a voice-activated device would need. Yet that understanding is what Awsare is promising.
And while Google might bring more local control to the smart home by letting more happen on the device rather than in the cloud, Apple’s focus on data privacy could lead it to adopt a technology like this, or to press vendors in its HomeKit ecosystem to adopt local natural language processing. Given that most consumers feel some conflict over how much they can trust voice-activated devices, a technology that keeps voice data local would be welcome.