Google has released a new API for developers that will allow them to create chatbots for the Google Home. It's called Actions on Google, and it will let developers build conversations between users and Google's artificial intelligence, known as the Google Assistant.
This is an essential move for Google, but one executed in a way that highlights both the company's incredible strength in artificial intelligence and its struggle with user interfaces as it tries to deliver that AI across every computing surface.
The biggest complaint about Google Home, a personal assistant that doubles as a speaker, has been that it didn't do much when it launched last month. Unlike its rival, the Amazon Echo, it didn't link to third-party services.
The new API, which lets developers create bots that run on Google Home devices, remedies this after a fashion. Now you can talk to Google Home and ask it to read headlines from VentureBeat or to play a game. In the examples Google has given, the AI powering the experience seems strong, parsing complex requests and handling a truly conversational interface well. But there's a catch: while one of these new "bots" is active, you lose access to Google Home's traditional set of commands and capabilities.
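Under the hood, this kind of handoff typically works like a webhook exchange: the Assistant parses the user's utterance and sends it to the developer's service, which replies with the text the bot's voice should speak. The sketch below is purely illustrative, not the actual Actions on Google schema; the field names (`intent`, `query`, `speech`, `end_conversation`) and the `handle_request` function are assumptions for the sake of the example.

```python
# Illustrative sketch of a conversational bot's request handler.
# All JSON field names here are hypothetical, not the real
# Actions on Google wire format.

def handle_request(request: dict) -> dict:
    """Map a parsed user utterance to a spoken reply."""
    intent = request.get("intent")
    if intent == "read_headlines":
        # A real service would fetch live headlines here.
        return {"speech": "Here are today's top headlines.",
                "end_conversation": False}
    if intent == "play_game":
        return {"speech": "Let's play! Pick a number between 1 and 10.",
                "end_conversation": False}
    # Unrecognized intent: apologize and end the conversation,
    # returning the user to the default Assistant.
    return {"speech": "Sorry, I can't help with that.",
            "end_conversation": True}
```

For instance, a request tagged with the `play_game` intent would get back the game prompt, while an unrecognized request ends the bot session so the standard Assistant can take over again.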
Instead, an entirely new voice comes out of the Google Home speaker and talks to you. It's like what I used to imagine conversing with someone with a split personality would be like. But it's also an ingenious way to offer brands something they crave for the future of advertising: a way to distinguish themselves beyond screens.
I suspect that giving brands their own voice will help Google reel in developers working on behalf of big companies who want to offer their services to consumers. We could have Martha Stewart share her favorite appetizers ahead of our dinner party.
This may get overwhelming for consumers, especially because so far it's not clear how Google plans to help people discover the chatbots available on Google Home. Also, the Google Assistant, the back-end brain that is supposed to work across all of Google's platforms, doesn't yet behave consistently. Users may struggle to understand what they can do on devices like Google Home, and get confused when a command they use on their Pixel phone works differently on Google Home.
Amazon doesn't face the same consistency challenge because it isn't trying to make Alexa work across so many different platforms. However, it has recently opened up the underlying AI behind Alexa, which could change things.
When it comes to finding things to do on the Echo, life is a bit easier. In the Amazon Echo app, you can now browse a section called Skills, which is divided into categories such as Health and Fitness or Food and Drink.
Even so, both the Amazon Echo and Google Home will struggle with helping consumers figure out what magical words to say when trying to activate a “bot” or a “Skill” of their choosing. As I wrote last week, both of these companies are working not just to deliver a helpful personal assistant, but also to control how voice develops as a platform for devices without screens.
As this review of Google Assistant’s different capabilities across different devices illustrates, developing these interfaces is very much still a work in progress that’s devoid of consistency. So while Google’s AI offers the potential for rich interactions, it’s unclear how consumers will find them and if Google can stretch its AI across so many varied platforms.