Analysis

Google finally approaches the edge!

The crowds at Google Cloud Next were eager to learn about the edge. Image by S. Higginbotham.

Clearly edge computing has hit the mainstream, as both Google and Microsoft devoted significant attention to the concept at their respective recent developer events. Google made several announcements at its Cloud Next conference last week, including a machine learning processor for edge devices, a version of its IoT Core device management software for edge devices, and news that its Cloud Functions service is now out of beta.

The edge efforts — especially the extension of Google’s IoT Core software to the edge — help Google catch up with Amazon Web Services and Microsoft Azure in a number of areas. Both Amazon and Microsoft have similar products that essentially connect their cloud platforms directly to software running on devices with less memory and computing power.
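For a sense of what that device-to-cloud plumbing looks like, here’s a minimal sketch of a constrained device publishing telemetry to Cloud IoT Core over MQTT, assuming a Python device with the paho-mqtt and pyjwt libraries. The project, registry, device, and key-file names are hypothetical placeholders; IoT Core authenticates devices with a signed JWT used as the MQTT password.

```python
import datetime
import jwt                       # pip install pyjwt
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

# Hypothetical identifiers; substitute your own project/registry/device.
PROJECT, REGION = 'my-project', 'us-central1'
REGISTRY, DEVICE = 'my-registry', 'my-device'

# IoT Core takes a short-lived JWT, signed with the device's private key,
# as the MQTT password; the username field is ignored.
now = datetime.datetime.utcnow()
token = jwt.encode(
    {'iat': now, 'exp': now + datetime.timedelta(minutes=60), 'aud': PROJECT},
    open('rsa_private.pem').read(),  # hypothetical device key file
    algorithm='RS256')

client = mqtt.Client(client_id=(
    f'projects/{PROJECT}/locations/{REGION}/'
    f'registries/{REGISTRY}/devices/{DEVICE}'))
client.username_pw_set(username='unused', password=token)
client.tls_set()  # Google's MQTT bridge requires TLS
client.connect('mqtt.googleapis.com', 8883)
client.loop_start()  # background thread services the connection

# Telemetry is published to a well-known per-device topic.
client.publish(f'/devices/{DEVICE}/events',
               '{"temp_f": 81}', qos=1).wait_for_publish()
client.loop_stop()
```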

However, Google still doesn’t have a compelling product akin to Amazon’s Greengrass, which lets developers run functions at the edge. AWS calls its serverless functions service Lambda, while Microsoft calls its version Azure Functions; Microsoft already has tools to bring Azure Functions to the edge.

I explained why all of this matters back in June 2017, when Amazon launched Greengrass. The TL;DR version is that, for some IoT use cases, functions offer a way to quickly perform a task. For example, if the temperature gets above 78 degrees, a function can send a message. There’s no need to keep a virtual machine running constantly to track something like that. Instead, you pay only for the brief moment you need the computing power to send the message.
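In serverless terms, that temperature check is only a few lines. Here’s a minimal sketch of a Python background function of the kind Cloud Functions runs in response to a Pub/Sub telemetry message; the topic wiring and the send_alert helper are hypothetical.

```python
import base64
import json

def check_temperature(event, context):
    """Background function invoked once per Pub/Sub telemetry message.

    event['data'] carries the base64-encoded message body and context
    holds trigger metadata; no virtual machine runs between invocations.
    """
    reading = json.loads(base64.b64decode(event['data']).decode('utf-8'))
    if reading.get('temp_f', 0) > 78:
        send_alert(f"Temperature is {reading['temp_f']}F")  # hypothetical helper

def send_alert(message):
    # Stand-in for a real notification: SMS, email, or another Pub/Sub topic.
    print(message)
```

You pay for the few hundred milliseconds each invocation takes, and nothing in between.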

At the conference, Google took Cloud Functions out of beta, an important step toward offering a product as powerful and important as those from Amazon and Microsoft. But it still needs to allow those functions to run at the edge.

Google did, however, boost its credibility in machine learning with new chips modeled on its tensor processing units (TPUs). The chips can’t handle the arduous and power-hungry task of training new models at the edge, but they can run existing models efficiently. My friend Alasdair Allan, who has been playing with various machine learning hardware at the edge, is excited about the new chips, calling them better than anything he’s seen so far from Intel’s Movidius acquisition.
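To make “running existing models” concrete, here’s a minimal sketch of on-device inference with TensorFlow Lite, the runtime Google pairs with its edge hardware. The model file is a hypothetical pre-trained, pre-converted classifier; the training that produced it still happens back in the cloud.

```python
import numpy as np
import tensorflow as tf  # constrained devices can use tflite_runtime instead

# Load a model that was trained and converted elsewhere.
interpreter = tf.lite.Interpreter(model_path='classifier.tflite')  # hypothetical file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One camera frame, shaped to match the model's expected input.
frame = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]['index'])
print('top class:', int(np.argmax(scores)))
```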

Machine learning at the edge has the power to bring better computer vision to cameras, phones, industrial robots — even drones. It can also be used on gateway devices to detect local traffic patterns, manage manufacturing processes, and predict behaviors in real time. Google’s ahead of the game here. Microsoft is researching chips to provide machine learning at the edge, but it’s still research. Amazon doesn’t have special hardware, although it does provide tools on the machine learning side that are designed for edge devices.

Outside of the keynotes, the company offered several compelling examples of customers using its products to analyze data. One of my favorites was how FTD, the company behind many of the brands selling cut flowers and bouquets, is using Google’s machine learning tools to recognize the types of flowers in a bouquet and count how many there are. This helps FTD understand how well a florist matched an FTD bouquet design and measure the quality of the service FTD and its florists provide.
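FTD hasn’t published its pipeline, but the basic building block is easy to picture. Here’s a minimal sketch of labeling an image with Google’s Cloud Vision client library; recognizing specific flower varieties and counting stems would take a custom model trained on the company’s own bouquet photos, layered on top of something like this.

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()  # auth via GOOGLE_APPLICATION_CREDENTIALS

with open('bouquet.jpg', 'rb') as f:    # hypothetical image file
    image = vision.Image(content=f.read())

# Generic label detection; scores are the model's confidence per label.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f'{label.description}: {label.score:.2f}')
```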

FTD is also experimenting with IoT by putting sensors in florists’ shops to track temperature and humidity as another measure of quality control. Those efforts are only a month old, though, and it wasn’t clear how much Google’s tools have helped.

At a smart cities session, a company that offers smart parking using sensors placed in parking spots showed how it could track data using Google’s cloud and Google Maps. Again, the edge device management aspect of that particular implementation wasn’t discussed, which makes me wonder how much Google has really developed there.

Google’s developers are clearly interested in the internet of things. Many of the IoT sessions I went to were jam-packed, including one where Googlers built a simple IoT device and application in 45 minutes. I missed the session dedicated solely to Google’s IoT vision (I had a plane to catch), but based on what I saw, its advantages for now are clearly its machine learning tools, the ease of use of its management platforms, and the breadth of its data analytics products.

The challenge it will have is taking all of that expertise out to the edge and helping customers make sense of their data there before shipping it back to the Google data centers.

Stacey Higginbotham
