
Let’s talk about machine learning at the edge

ARM believes its architecture for object detection could find its way into everything from cameras to dive masks. Slide courtesy of ARM.

You can’t hop on an earnings call or pick up a connected product these days without hearing something about AI or machine learning. But as much hype as there is, we are also on the verge of a change in computing that’s as profound as the shift to mobile was a little over a decade ago. In the last few years, the results of that shift have started to emerge.

In 2015, I started writing about how graphics cores, like the ones Nvidia and AMD make, were changing the way companies were training neural networks for machine learning. A huge component of the improvements in computer vision, natural language processing, and real-time translation has been the impressive parallel processing that graphics processors provide.

Even before that, however, I was asking the folks at Qualcomm, Intel, and ARM how they planned to handle the move toward machine learning, both in the cloud and at the edge. For Intel, this conversation felt especially relevant, since it had completely missed the transition to mobile computing and had also failed to develop a new GPU that could handle massively parallel workloads.

Some of these conversations were held in 2013 and 2014. That’s how long the chip vendors have been thinking about the computing needs for machine learning. Yet it took ARM until 2016 to purchase a company with expertise in computer vision, Apical, and only this week did it deliver on a brand-new architecture for machine learning at low power.

Intel bought its way into this space with the acquisition of Movidius and Nervana Systems in 2016. I still don’t know what Qualcomm is doing, but executives there have told me that its experience in mobile means it has an advantage in the internet of things. Separately, in a conference call dedicated to talking about the new Trillium architecture, an ARM executive said that part of the reason for the wait was a need to see which workloads people wanted to run on these machine learning chips.

The jobs that have emerged in this space appear to focus on computer vision, object recognition and detection, natural language processing, and hierarchical activation. Hierarchical activation is where a low-power chip recognizes that a condition has been met and then wakes a more powerful chip to respond to it.
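
To make that concrete, here’s a rough Python sketch of the hierarchical activation pattern. The low_power_detect and full_model_classify functions are hypothetical stand-ins for vendor-specific code, not calls from any real SDK; the point is simply that the cheap, always-on check runs constantly, and the expensive model runs only when the check trips.

```python
import time

def low_power_detect(frame) -> float:
    """Cheap, always-on check (think motion energy or a wake-word score).
    Hypothetical placeholder for whatever the milliwatt-class chip runs."""
    return sum(frame) / len(frame)

def full_model_classify(frame) -> str:
    """Expensive model that only runs on the bigger chip once it is woken.
    Also a placeholder, standing in for a full object-detection network."""
    return "person" if max(frame) > 0.9 else "nothing"

WAKE_THRESHOLD = 0.5

def run_pipeline(sensor_frames):
    for frame in sensor_frames:
        score = low_power_detect(frame)          # always-on, milliwatt-class work
        if score > WAKE_THRESHOLD:               # condition met: wake the big chip
            label = full_model_classify(frame)   # watt-class work, rarely invoked
            print(f"high-power model says: {label}")
        time.sleep(0.01)                         # pace to the sensor's frame rate

if __name__ == "__main__":
    run_pipeline([[0.1, 0.2], [0.7, 0.95], [0.3, 0.1]])
```

The power savings come from how rarely the second branch fires, which is why chip vendors care so much about making that first check as small as possible.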

But while the traditional chip vendors were waiting for the market to tell them what it wanted, the big consumer hardware vendors, including Google, Apple, Samsung, and even Amazon, were building their own chip design teams with an eye toward machine learning. Google has focused primarily on the cloud with its Tensor Processing Units, although it did develop a special image processing chip for its Pixel phones. Amazon is building a chip for its consumer hardware using technology from its 2015 acquisition of Annapurna Labs and its purchase of Blink’s low-power video processing chips back in December.

Some of this technology is designed for smartphones, such as Google’s visual processing core. Even Apple’s chips are finding their way into new devices (the HomePod carries an Apple A8 chip, which first appeared in the iPhone 6). But others, like the Movidius silicon, use a design that’s made for connected devices like drones or cameras.

The next step in machine learning at the edge will be silicon built specifically for the internet of things. These devices, like ARM’s new designs, will focus on machine learning at dramatically reduced power consumption. Right now, the training of neural networks happens mostly in the cloud and requires massively parallel processing as well as super-fast I/O. Think of I/O as how quickly a chip can move data between its memory and its processing cores so it can make decisions based on that data.

But all of that is an expensive power proposition at the edge, which is why most edge machine learning jobs are just the execution of an already trained model, or what is called inference. Even in inference, power consumption can be reduced with careful design. Qualcomm makes an image sensor that requires less than 2 milliwatts of power and can run roughly three to five computer vision models for object detection.
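
For a sense of what an inference-only job looks like on a device, here’s a minimal sketch using the TensorFlow Lite interpreter, one common way to run a frozen model on small hardware. The model file name and the dummy input frame are placeholders; the point is that the device merely executes a model that was trained elsewhere, in the cloud.

```python
import numpy as np
# tflite_runtime is the slimmed-down interpreter meant for edge devices;
# full TensorFlow would also work but pulls in far more than a small board needs.
from tflite_runtime.interpreter import Interpreter

# "detector.tflite" is a placeholder for whatever pre-trained, quantized model
# was produced in the cloud and copied onto the device.
interpreter = Interpreter(model_path="detector.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake a single camera frame matching the model's expected input shape and type.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()                      # inference only; no training happens here
scores = interpreter.get_tensor(output_details[0]["index"])
print("class scores:", scores)
```

Everything power-hungry about machine learning, the backpropagation and the enormous datasets, happened long before this code runs; the device just does the comparatively cheap forward pass.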

But inference might also include some training, thanks to new silicon and ever-better machine learning models. Movidius and ARM are both aiming to let some of their chips actually train at the edge. This could help devices in the home learn new wake words for voice control or, in an industrial setting, be used to build models for anomalous event detection.

All of which could have a tremendous impact on privacy and the speed of improvement in connected devices. If a machine can learn without sending data to the cloud, then that data could stay resident on the device itself, under user control. For Apple, this could be a game-changing improvement to its phones and its devices, such as the HomePod. For Amazon, it could lead to a host of new features that are hard-coded in the silicon itself.

For Amazon in particular, this could even raise a question about its future business opportunities. If Amazon produces a good machine learning chip for its Alexa-powered devices, would it share it with other hardware makers seeking to embrace its voice ecosystem, in effect turning Amazon into a chip provider? Apple and Google likely won’t share. And Samsung’s chip business serves both its own gear and other companies’, so I’d expect its edge machine learning chips to find their way into non-Samsung devices as well.

For the last decade, custom silicon has been a competitive differentiator for tech giants. What if, thanks to machine learning and the internet of things, it becomes a foothold for a developing ecosystem of smart devices?

Stacey Higginbotham
