Lately, I’ve been reporting more and more on algorithms that parse incoming data locally in order to make some sort of decision. Last week, for example, I wrote about a company doing voice analysis locally to detect Alzheimer’s. I’ve also covered startups that process machine vibrations locally to detect equipment failures. Each of these examples benefits from machine learning (ML) algorithms that run on microcontrollers, or what we call TinyML.
Running machine learning algorithms locally helps reduce latency, which means a burgeoning machine problem can be detected and the machine turned off quickly if needed. Local processing also protects privacy, something especially important in the medical sector. Indeed, I would prefer that neither Google nor Alexa be aware if I develop Alzheimer’s.
But as companies push sensitive and necessary algorithms out to the edge, ensuring they perform as intended becomes essential. Which is why I spent time this week learning about the security risks facing our future sensor networks running TinyML.
On Tuesday, during an online presentation hosted by the Tiny ML Foundation, Prof. Muhammad Shafique of the Department of Electrical and Computer Engineering at New York University Abu Dhabi covered ways to design both hardware and algorithms to run securely on constrained silicon at low power. His presentation addressed TinyML as well as edge ML, where machine learning algorithms run on more robust hardware such as smartphones, gateways, or even a car.
Dr. Shafique highlighted two types of security. The first focuses on ways to prevent adversaries from “tricking” a sensor and its algorithm. One way to do this is by overwhelming the sensor with “noise.” For example, it’s possible to trick the algorithms governing lane changes in a self-driving car simply by placing black squares on the pavement. The squares introduce “noise” that can confuse the algorithm, leading the car astray.
Hackers have already demonstrated the use of “noise” to confuse other types of sensors and algorithms, such as those in medical devices. Shafique called these “adversarial attacks” and offered several ways to design ML models so they are less susceptible to such noise. (This is especially relevant for TinyML, because compressing algorithms to run on less capable chips makes them more vulnerable to weird outlier data coming in.)
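To make the idea of an adversarial attack a bit more concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM). To be clear, this is my own illustration, not code from Shafique’s presentation: the tiny classifier, its random weights, the perturbation budget, and the fake “sensor reading” are all hypothetical stand-ins.

```python
# Minimal FGSM-style adversarial perturbation sketch (illustrative only).
# The toy classifier and random input below stand in for a real TinyML model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "sensor" classifier: 16 input features -> 2 classes (e.g., normal vs. fault).
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 16)      # a clean (random placeholder) sensor reading
y = model(x).argmax(dim=1)  # the label the model assigns to the clean input

# Compute the gradient of the loss with respect to the input itself.
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()

# FGSM: nudge every input feature slightly in the direction that increases the loss.
epsilon = 0.25  # perturbation budget, small relative to the signal
perturbed = x_adv + epsilon * x_adv.grad.sign()

print("clean prediction:    ", y.item())
print("perturbed prediction:", model(perturbed).argmax(dim=1).item())
```

The point is simply that a small, structured perturbation of the input can be enough to flip the model’s answer, which is exactly the kind of “noise” attack described above.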
The other type of security relates to the chip running the ML algorithm itself. A bad actor might get malware onto the chip that compromises the hardware or the software it runs. Or the malware might simply cause the sensor to produce inaccurate results, essentially turning the sensor into a node that lies about the state of the thing it’s trying to measure. In an industrial system, that might mean a temperature sensor reports an inaccurate value and causes a process to fail; in a medical device, it might mean false readings that delay a diagnosis or even trigger the infusion of an unnecessary drug.
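Real defenses here go well beyond anything I can show in a few lines, but one simple mitigation is for the gateway or host to sanity-check what a sensor reports rather than trusting it blindly. The sketch below is my own illustration, not something from the presentation; the Reading structure, temperature bounds, and rate limit are made up for the example.

```python
# A simple plausibility check of the sort a gateway might run on incoming
# readings from an edge sensor. All thresholds here are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    timestamp: float      # seconds since epoch
    temperature_c: float  # reported process temperature

def is_plausible(current: Reading, previous: Optional[Reading],
                 low: float = -20.0, high: float = 250.0,
                 max_rate_c_per_s: float = 5.0) -> bool:
    """Flag readings that fall outside physical bounds or change impossibly fast."""
    if not (low <= current.temperature_c <= high):
        return False
    if previous is not None:
        dt = current.timestamp - previous.timestamp
        if dt > 0:
            rate = abs(current.temperature_c - previous.temperature_c) / dt
            if rate > max_rate_c_per_s:
                return False
    return True

# Example: a reading that jumps 80 degrees in one second gets flagged.
prev = Reading(timestamp=1000.0, temperature_c=85.0)
curr = Reading(timestamp=1001.0, temperature_c=165.0)
print(is_plausible(curr, prev))  # False -> don't act on this value blindly
```

A check like this won’t stop a compromised sensor on its own, but it can keep one obviously bad value from shutting down a process or triggering an unnecessary drug infusion.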
It is, in other words, super scary stuff. And what makes it even scarier is that we’re gradually going to move more and more machine learning to the edge, because that’s where such processing belongs. As I’ve noted in previous posts, it simply takes too much time, money, and energy to move data to the cloud. And because most of us recognize that edge processing keeps data local to a network, and thus more private (or at least more controlled), we also tend to think of machine learning at the edge or on a sensor as more secure.
But as Dr. Shafique’s presentation made clear, that’s an erroneous assumption. This is why I recommend that anyone planning to build sensors and systems that rely on edge-based machine learning start thinking now about the security needs of both the hardware and the algorithms that will soon be making more of the decisions in our smarter and more connected world.