Analysis

Qualcomm is researching machine learning at the edge

Regular newsletter readers know that I am beyond excited about machine learning (ML) at the edge. Running algorithms on gateways — or even on sensors — instead of sending data to the cloud to be analyzed can save time, bandwidth costs, and energy, and can protect people’s privacy.

So far, ML at the edge has only involved inference, the process of running incoming data through an existing model to see if it matches a pattern the model already knows. Training the algorithm still takes place in the cloud. But Qualcomm has been researching ways to make the training of ML algorithms less energy-intensive, which means it could happen at the edge.

Personalization, privacy, broadening data sets, and improvements in federated learning (FL) are all reasons Qualcomm is investing in training on the edge. Image courtesy of Qualcomm. 

Bringing ML to edge devices means user data stays on the device, which boosts privacy; it also reduces the energy and costs associated with moving data around. It can also lead to highly personalized services. These are all good things. So what has Qualcomm discovered?

In an interview with me, Qualcomm’s Joseph Soriaga, senior director of technology, broke down the company’s research into four different categories. But first, let’s talk about what it takes to train an ML model.

Training usually happens in the cloud because it requires a computer to analyze a lot of data and hold much of that data in memory while computing probabilities to assess whether the data matches whatever goal the algorithm is trying to meet. So to train a model to identify cats, you have to give it a lot of pictures of cats; the computer then tries to figure out what makes a cat. As it refines its understanding, it produces calculations that a data scientist can assess and refine further, weighting more heavily the elements that make something look like a cat.
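
For readers who like to see code, here's a rough sketch of what that training loop looks like, using PyTorch as an illustrative framework. The tiny model, the 64x64 image size, and the hyperparameters are all my own toy choices for the cat example above, not anything from Qualcomm.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# A tiny binary classifier: is this image a cat or not?
# (Illustrative only; a real model would be a much deeper network.)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128),  # assumes 64x64 RGB images
    nn.ReLU(),
    nn.Linear(128, 2),            # two classes: cat / not cat
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train(labeled_images: DataLoader, epochs: int = 10):
    """Classic cloud-style training: lots of labeled data, lots of passes."""
    for _ in range(epochs):
        for images, labels in labeled_images:
            logits = model(images)          # forward pass: make a guess
            loss = loss_fn(logits, labels)  # how wrong was the guess?
            optimizer.zero_grad()
            loss.backward()                 # backpropagation: assign blame
            optimizer.step()                # nudge the weights toward "what makes a cat"
```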

It requires a lot of computational heft, memory, and bandwidth to build a good model. The edge doesn’t historically have a lot of computing power or memory available, which is why edge devices perform inference and don’t learn while in operation. Soriaga and his team have come up with methods that can enable personalization and adaptation of existing models at the edge, which is a step in the right direction.

One method is called few-shot learning, which is designed for situations where a researcher wants to tweak an algorithm to better meet the needs of outliers. Soriaga offered up an example involving wake word detection. For customers who have an accent or a hard time saying a wake word, using this method can boost detection rates by 30%. Because the data set is small, clearly defined, and labeled, it's possible to tweak existing models without consuming much power or computing resources.
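
To give a flavor of why that kind of few-shot tweak is so cheap, here's a generic sketch (my illustration of the general idea, not Qualcomm's method): freeze the big pretrained model and train only a small final layer on a handful of labeled clips from one user. The pretrained_model and its output_dim attribute are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def personalize(pretrained_model: nn.Module, few_examples, few_labels, steps: int = 50):
    """Few-shot tweak: freeze the pretrained model, train only a tiny new head.

    `few_examples` is a small batch of audio features from one user saying the
    wake word; `few_labels` marks which clips actually contain it.
    """
    # Freeze the existing model so the device doesn't need memory to update it.
    for param in pretrained_model.parameters():
        param.requires_grad = False

    # A small trainable head is all the edge device has to learn.
    head = nn.Linear(pretrained_model.output_dim, 2)  # output_dim is an assumed attribute
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(steps):
        features = pretrained_model(few_examples)  # frozen: no gradients kept here
        logits = head(features)
        loss = loss_fn(logits, few_labels)
        optimizer.zero_grad()
        loss.backward()                            # gradients flow only through the head
        optimizer.step()
    return head
```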

Another method for training at the edge is continuous learning with unlabeled data. Here, an existing model gets updated with new data coming into the edge device over time. But because the data is unlabeled — and the edge data may be over-personalized — a data scientist has to be aware of those limits when trying to adapt the model.
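
One common way to adapt a model with unlabeled data is pseudo-labeling: let the current model label the new samples it is confident about, then train on those guesses. This is a generic sketch of that idea rather than Qualcomm's specific technique, and the confidence threshold is something I'm assuming.

```python
import torch
import torch.nn as nn

def continual_update(model: nn.Module, unlabeled_batch: torch.Tensor,
                     optimizer, confidence: float = 0.9):
    """Adapt an existing model to new, unlabeled edge data via pseudo-labels.

    The model labels the incoming data itself; we only trust predictions it is
    very confident about. The over-personalization risk noted above is real:
    the model can drift toward whatever this one device happens to see.
    """
    loss_fn = nn.CrossEntropyLoss()

    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_batch), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        keep = conf > confidence            # discard low-confidence guesses

    if keep.sum() == 0:
        return                              # nothing trustworthy in this batch

    logits = model(unlabeled_batch[keep])
    loss = loss_fn(logits, pseudo_labels[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```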

My favorite research topic is federated device learning, where you might use the prior two methods to tweak algorithms locally and then send the tweaked models back to the cloud or share them with other edge devices. Qualcomm, for example, has explored how to identify people based on biometrics. Recognizing someone based on their face, fingerprint, or voice could involve sending all of those data points to the cloud, but it would be far more secure to have an algorithm that can be trained locally for each user.

So the trained algorithm built in the cloud might know how to recognize a face in general, but locally it would have to learn to match one individual's face. That individual's face data would stay private, but the features that make it a face would get sent back to help adjust the initial algorithm. Then that tweaked version of the algorithm would get sent back to the edge devices, where some noise would get added to the face data on the device to protect privacy, while still ensuring that, over time, the cloud-based algorithm gets better without sharing that person's data.

This approach provides large sets of face or voice data without anyone having to scrape it, without permission, from social media or photo sites. Federating the learning over many devices also means data scientists get a lot of inputs while the raw data never leaves the device.
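
Here's a stripped-down sketch of that federated loop. The function names are made up, train_on_device is a hypothetical placeholder for whatever local training happens on the phone or sensor, and the simple Gaussian noise stands in for real differential-privacy machinery.

```python
import torch

def local_update(global_weights, local_data, noise_scale=0.01):
    """Run a few training steps on one device's private data, then add noise.

    Only the (noisy) difference from the global model leaves the device;
    the raw face or voice data never does.
    """
    local_weights = train_on_device(global_weights, local_data)  # hypothetical local trainer
    update = {k: local_weights[k] - global_weights[k] for k in global_weights}
    return {k: v + noise_scale * torch.randn_like(v) for k, v in update.items()}

def federated_round(global_weights, device_updates):
    """Cloud side: average the noisy updates and fold them into the global model."""
    avg = {k: torch.stack([u[k] for u in device_updates]).mean(dim=0)
           for k in global_weights}
    return {k: global_weights[k] + avg[k] for k in global_weights}
```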

Finally, we also need ways to reduce the computational complexity associated with building algorithms from scratch. I'm not going to go too in depth here, because there's a lot of math, but here's where you can find more information. Broadly speaking, the alternative to traditional training in the cloud is to make training easier on less compute-heavy devices.

Qualcomm researchers have decided that one way to do that is to avoid the full cost of backpropagation, the step that figures out how heavily to weight certain elements when building a model. Instead, data scientists can use quantized training to reduce the complexity associated with backpropagation and run more efficient models. Qualcomm's researchers came up with something called "in-hindsight range estimation" to efficiently adapt models for edge devices. If you are keen on understanding this, then click through to the research paper. But the money statement is that this method was as accurate as traditional training methods and resulted in a 79% reduction in memory transfer. That reduction translates into needing less memory and compute power.
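
I won't try to reproduce the paper's math, but to give a flavor of what quantized training involves, here's a toy sketch: instead of re-scanning the current tensor every step to find its quantization range (which costs extra memory traffic), you reuse a range estimated from previous steps. This is my loose illustration of the "in hindsight" idea, not the paper's algorithm, and the momentum-style range update is my own assumption.

```python
import torch

class HindsightRange:
    """Toy illustration: quantize using a range estimated from *previous* batches.

    Reusing a pre-computed range means the quantizer doesn't need a second pass
    over the current tensor, which is where memory-transfer savings come from.
    (Loose sketch only; see Qualcomm's paper for the actual method.)
    """
    def __init__(self, momentum: float = 0.9):
        self.min_val, self.max_val = None, None
        self.momentum = momentum

    def quantize(self, x: torch.Tensor, bits: int = 8) -> torch.Tensor:
        if self.min_val is None:                      # first batch: bootstrap the range
            self.min_val, self.max_val = x.min(), x.max()
        # Quantize using the range known "in hindsight" from earlier batches.
        scale = (self.max_val - self.min_val).clamp(min=1e-8) / (2 ** bits - 1)
        q = ((x - self.min_val) / scale).round().clamp(0, 2 ** bits - 1)
        x_q = q * scale + self.min_val
        # Update the running range *after* quantizing, for use on the next step.
        self.min_val = self.momentum * self.min_val + (1 - self.momentum) * x.min()
        self.max_val = self.momentum * self.max_val + (1 - self.momentum) * x.max()
        return x_q
```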

This research is very exciting because training at the edge has long been the dream, but one that has been hard to turn into reality. As regulations promote more privacy and security for the IoT, all while demanding reduced energy consumption, edge-based training is moving from a wish-we-had-it option to a need-to-have-it option. I'm hoping R&D keeps up.

Stacey Higginbotham
