Imagine if your smart speaker could be trained to recognize your accent, or if a pair of running shoes could alert you in real time if your gait changed, indicating fatigue. Or if, in the industrial world, sensors could parse vibration data in real time from a machine whose location and function change often, halting the machine if that data suggested a problem.
We often write about the value of on-device machine learning (ML), but what we’re generally discussing is running existing models on a device and matching incoming data against the established model. This is known as inference. So when you say the name “Alexa,” your smart speaker matches the pattern and wakes up. Inference is great, and there is a robust community of researchers and product managers adding on-device machine learning to phones, cameras, wearables, and more. But the next big research goal is on-device learning.
In the ML world, learning is generally referred to as “training.” And training an algorithm, which is what happens when a researcher feeds data into computers running different types of models in order to create a usable algorithm, typically takes place in the cloud.
But training in the cloud requires a lot of data and a lot of compute power, which is why getting a small device, such as a sensor or wearable, to take in data locally and adjust its algorithm accordingly feels impossible. But if researchers can make on-device learning real, it would open up a lot of use cases.
One is personalization. So in the earlier example, if an individual says “Awexa” instead of “Alexa,” the wake-word recognition algorithm could adapt over time, learning that in this individual’s home, “Awexa” is the wake word.
While personalization is a compelling reason to focus on local on-device learning, it’s not the only one. Remember that, broadly speaking, doing anything locally with machine learning can help save on bandwidth and connectivity costs as well as save power and reduce latency. Because data isn’t heading up to the cloud, it also protects privacy. Add learning to the mix and you can protect privacy even further, because identifying data doesn’t need to head up to the cloud for training, either.
So in areas where connectivity is expensive or intermittent, on-device learning might make it easier to, say, train camera traps in a rainforest to recognize different animals. Or it might allow the personalization of anomaly detection on a machine running in a mine in remote Western Australia.
For all of these reasons, this week I spent two days virtually watching a series of presentations put on by the TinyML Foundation discussing on-device learning at the edge.
In one presentation, a researcher from Qualcomm showed how creating a hash value from biometric data directly on a device would let companies use on-device learning to perform biometric authentication without needing to send personal data to the cloud. (Here is the related research paper, because there’s no way I can explain this succinctly in a paragraph or two.)
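I won’t try to summarize the paper either, but one common building block behind this kind of scheme is easy to illustrate: turn a noisy biometric reading into a binary template on the device (for example, by keeping only the signs of random projections), so that only the template, or a hash of it, is ever stored or compared. This is a simplified sketch of that general idea, not the Qualcomm scheme; the dimensions, the noise level, and the helper names are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def binary_code(features, projection):
    """Project a feature vector and keep only the signs, yielding a
    binary template that does not expose the raw biometric data."""
    return (projection @ features > 0).astype(np.uint8)

# Illustrative sizes: a 64-dim biometric feature vector, a 128-bit code.
dim, bits = 64, 128
projection = rng.standard_normal((bits, dim))

# Enrollment: the device stores only the binary code (or a hash of it).
enrolled = rng.standard_normal(dim)
template = binary_code(enrolled, projection)

# Authentication: a fresh reading from the same user is noisy,
# while a different user produces an unrelated feature vector.
fresh_same = enrolled + 0.1 * rng.standard_normal(dim)
fresh_other = rng.standard_normal(dim)

def hamming(a, b):
    """Count differing bits between two binary codes."""
    return int(np.sum(a != b))

d_same = hamming(template, binary_code(fresh_same, projection))
d_other = hamming(template, binary_code(fresh_other, projection))
```

The same user’s code differs from the template in only a few bits, while a stranger’s differs in roughly half of them, so a simple Hamming-distance threshold separates the two. Real systems layer error correction on top so the code becomes exactly reproducible and can be hashed.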
After watching these presentations, it’s clear to me that the research community sees promise in on-device machine learning. But the technology is still in its early stages. There are numerous challenges associated with training models on devices with little computing power and memory. Local on-device learning in particular introduces new security challenges, such as adversarial attacks on sensors. Imagine if, for example, you were able to access the Amazon speaker I referenced earlier and train it to respond only to “Alesta.”
There are also challenges associated with testing and scaling models that run on local devices. So how does one ensure that an on-device model that’s trained locally is performing as intended?
Let’s look at some of the solutions proposed in the presentations this week to address the limitations of microcontrollers. TinyML is the practice of running machine learning algorithms on very constrained hardware, including microcontrollers, devices whose computing power and memory are measured in kilobytes. So before you can train at the sensor edge, you need an ML framework designed to run on such constrained devices.
Among the presentations I watched was one from Google Sr. Program Manager Bill Luan, who showed off TensorFlow Lite for Microcontrollers, a framework whose runtime core fits in roughly 16 KB and can run on battery-powered devices. Luan said that an accompanying Coral Dev Board Micro would be out in mid-October. This developer board includes an Arm Cortex-M4 core to run TensorFlow Lite for Microcontrollers and a more powerful Arm Cortex-M7 core that could run TensorFlow Lite. He demonstrated on-device learning using a Raspberry Pi, which is a far more robust computer than a microcontroller, but said he was working on a demonstration of an on-device learning model for shape and color sorting that would run on the Coral Dev Board Micro.
Having a framework for training tiny models is only one step toward making on-device learning for MCUs real. Researchers also have to find the right neural network strategies for each use case. For example, in another presentation I watched, Valeria Tomaselli of STMicroelectronics proposed using echo state networks (ESNs), a form of recurrent neural network that can readily parse time-series data, for detecting the changes that signal anomalies. And a researcher from Siemens suggested retraining only the top layer of a neural network at the edge, while keeping the bottom layers stable.
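What makes ESNs attractive for constrained devices is that the recurrent “reservoir” weights are fixed at random and never trained; the only learning step is a single least-squares solve for a linear readout, which echoes the Siemens idea of updating only the last layer. Here is a minimal sketch on a toy sine-wave prediction task, not the STMicroelectronics design; the reservoir size, washout length, and variable names are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: fixed random weights that are never trained, which is
# what keeps ESN learning cheap enough for constrained hardware.
n_in, n_res = 1, 50
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with an input series and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)
states = run_reservoir(u[:-1])
washout = 50                         # discard the initial transient
X, y = states[washout:], u[1 + washout:]

# The ONLY trained part: a ridge-regression readout,
# one linear solve, no backpropagation.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
mse = np.mean((pred - y) ** 2)
```

Because training reduces to solving one small linear system, an update could in principle happen on-device each time new sensor data accumulates, which is the property that makes this family of networks interesting for the tiny edge.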
Instead of focusing on the math behind an algorithm, other researchers envisioned new hardware. Kaoutar El Maghraoui, principal research scientist at IBM, proposed in her presentation specialized hardware, including chips that use in-memory or analog processing to conserve power, as well as probabilistic computing. She added that IBM is also researching quantum AI, but that is much further out than analog or probabilistic computing.
For many in business, the idea of personalization and privacy through on-device machine learning is exciting, but it’s also still far off. Today, companies are just starting to use ML at the edge that does only inference, running the incoming data against an existing algorithm. Using incoming data to locally teach an AI is still the stuff of science fiction and research labs. But the benefits it could offer make it worth keeping an eye on.
Mike Hasselbeck says
Yes, it can be done! Although it’s quite rudimentary, I have developed an acoustic sensor that uses a low-power TI MCU and trains without using the cloud. It handles scalar time-series data and involves some statistical analysis, but it works. Best of all, it’s 100 percent open source, with hardware available at Crowd Supply: