Analysis

Microsoft is pushing AI to the farthest edge

Counting cars with battery-powered sensors isn’t possible today, but Microsoft Research is working on ways to bring machine learning to the smallest sensors out there.

Microsoft Research, the research arm of the software giant, is taking a counterintuitive approach to AI at the edge: it’s pushing machine learning to the smallest processors out there, the microcontrollers commonly used in battery-powered sensors and wearables. Byron Changuion, a principal software engineer at Microsoft Research, explained in a conversation that bringing AI to the very edge of the network gives users more privacy, lowers power consumption, and speeds up response times.

Microsoft still has a group focused on machine learning in the cloud, complete with its own specialty silicon that relies on field programmable gate arrays (FPGAs), but it stands out in trying to push machine learning to microcontrollers. To do this, it has built the Embedded Learning Library (ELL), a repository of code aimed at developers and makers who want to experiment with AI at the extreme edge.

Microsoft Research hasn’t been able to take the ELL down to the sensor level yet, but that is the ultimate goal.

The edge is a tough place for machine learning, or “ML”. There are two elements involved: training the model, which requires a lot of processing power and data, and running the trained model against incoming data to produce a prediction. That second step, called inference, is what companies have been doing at the edge all along. To do so, they usually rely on fairly powerful CPUs, clusters of ARM Cortex-A-class processors, or even specially designed silicon.
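To make that split concrete, here is a minimal sketch of what inference looks like on a device — my own illustration in Python/NumPy, not Microsoft's ELL code. The weights are fixed, having been trained elsewhere, and the device simply performs a few small matrix multiplications on each new sensor reading:

```python
import numpy as np

# In practice these weights would be trained in the cloud and flashed onto
# the device; random values here just stand in for a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # layer 1
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # layer 2

def infer(sensor_reading):
    """Forward pass only: no gradients, no weight updates."""
    hidden = np.maximum(W1 @ sensor_reading + b1, 0.0)   # ReLU
    logits = W2 @ hidden + b2
    return int(np.argmax(logits))                        # e.g. 0 = no vehicle, 1 = vehicle

# Each new reading costs only a handful of multiply-accumulates, which is
# why inference fits on modest edge hardware while full training does not.
print(infer(np.array([0.4, 0.1, 0.9, 0.2])))
```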

Changuion hopes that not only inference but even a little bit of training can be brought all the way down to devices that run on resource-constrained ARM Cortex-M4 or M0 chips. He explains that once a company has trained a model in the cloud, it could still train at the edge by adjusting the first layer or two of the neural network based on incoming data. Right now, he’s running ELL and doing a bit of training on Raspberry Pi 3 boards.
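A rough sketch of that idea — again my own illustration rather than ELL code — keeps the later layers frozen and nudges only the first layer's weights with gradient steps computed from data arriving on the device:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # first layer: adjusted on-device
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # later layer(s): frozen after cloud training

def adapt_first_layer(x, label, lr=0.01):
    """One small gradient step on W1/b1 only, using a single incoming example."""
    global W1, b1
    z1 = W1 @ x + b1
    h = np.maximum(z1, 0.0)                      # ReLU
    logits = W2 @ h + b2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax
    d_logits = probs - np.eye(2)[label]          # cross-entropy gradient
    d_h = W2.T @ d_logits                        # backprop through the frozen layer
    d_z1 = d_h * (z1 > 0)
    W1 -= lr * np.outer(d_z1, x)                 # update only the first layer
    b1 -= lr * d_z1

# e.g. adapt to a newly labeled reading from this particular sensor
adapt_first_layer(np.array([0.4, 0.1, 0.9, 0.2]), label=1)
```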

When I asked if he’d consider using the newly launched Raspberry Pi 4 boards designed with more processing power for industrial use, he said no because his goal is to go in the other direction — toward the more resource-constrained devices.

Having smarter edge devices makes sense. Companies already use inference at the edge to reduce the bandwidth costs and latency associated with sending data to the cloud. Processing data locally also means it stays on the device, which should better protect user privacy.

In the consumer space, Google has an effort underway to handle more processing locally on its Google Home products, while in industrial settings it’s relatively common to see companies run a model on a gateway device to keep their IP in-house and their latency low.

Swim.ai, in a nod to practical realities, is trying to perform machine learning on less powerful edge devices. For example, it’s running a neural network that analyzes vehicles at traffic lights. Those intersections can generate 500 terabytes of information a day; even if companies wanted to send it all to the cloud, the cloud couldn’t handle that influx of data across whole cities of such devices. And yet, when I asked ML researchers about the idea of pushing AI to the farthest edge, most didn’t see the point of running it on a microcontroller.

Yet, I recall my excitement at CES in 2018 when Qualcomm showed off a chip that could run a few computer vision models while only consuming a milliwatt of power. Such a sensor could be used as a people counter, which is one of the examples Microsoft’s Changuion gave as a reason for pushing ML to microcontrollers. Or a low-power, low-resolution people tracker could be used as a signal to wake up a faster processor to run a more involved model.
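That wake-up pattern is straightforward to express. Here's a hedged sketch — my own Python, with hypothetical function names, not any vendor's API — of a tiny always-on detector gating a heavier model:

```python
import numpy as np

WAKE_THRESHOLD = 0.8  # hypothetical confidence needed before waking the big core

def tiny_detector_score(frame: np.ndarray) -> float:
    """Stand-in for a milliwatt-class model: is a person probably present?"""
    # e.g. a single elementwise product against a small learned template
    template = np.full(frame.shape, 0.5)
    return float(np.clip((frame * template).mean(), 0.0, 1.0))

def wake_main_processor(frame: np.ndarray) -> None:
    """Stand-in for powering up a faster chip to run the full model."""
    print("waking main processor for detailed analysis")

def on_new_frame(frame: np.ndarray) -> None:
    # Run the cheap model on every frame; run the expensive one only rarely.
    if tiny_detector_score(frame) > WAKE_THRESHOLD:
        wake_main_processor(frame)

on_new_frame(np.random.default_rng(1).random((16, 16)))  # a low-resolution frame
```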

Changuion also showed off some impressive wake-word detection models running on a Raspberry Pi. I could imagine putting such a model on a lower-powered microcontroller as a way to save power on a wearable device. Imagine if you had one of the high-tech jackets with an internal heating unit that athletes wore to the 2018 Winter Olympics. With a battery-powered microcontroller running a speech recognition model, you could say “warm it up” instead of fumbling around for a button (remember, you’ll probably be wearing bulky gloves, so that button press would be tough).

I’m a maximalist when it comes to computing, not in the sense that we need more powerful chips, but in that we’re going to want to put computing in more and more places. So to my mind, having machine learning even in the tiniest devices makes sense. Now we just have to see if Microsoft and others can get us there.

Stacey Higginbotham
