
How can we make TinyML secure? (And why we need to)

Lately, I’ve been reporting more and more on algorithms that parse incoming data locally in order to make some sort of decision. Last week, for example, I wrote about a company doing voice analysis locally to detect Alzheimer’s. I’ve also covered startups that process machine vibrations locally to detect equipment failures. Each of these examples benefits from machine learning (ML) algorithms that run on microcontrollers, or what we call TinyML.

Running machine learning algorithms locally helps reduce latency, which means a burgeoning machine problem can be detected and the machine turned off quickly if needed. Local processing also protects privacy, something especially important in the medical sector. Indeed, I would prefer that neither Google nor Alexa be aware if I develop Alzheimer’s.

But as companies push sensitive and necessary algorithms out to the edge, ensuring they perform as intended becomes essential. Which is why I spent time this week learning about the security risks facing our future sensor networks running TinyML.

The three branches of AI, based on the amount of computing and power available. Image courtesy of Prof. Dr. Muhammad Shafique, Department of Electrical and Computer Engineering at New York University Abu Dhabi.

On Tuesday, during an online presentation hosted by the Tiny ML Foundation, Prof. Dr. Muhammad Shafique of the Department of Electrical and Computer Engineering at New York University Abu Dhabi covered ways to design both hardware and algorithms to run securely on constrained silicon at low power. His talk covered TinyML as well as edge ML, where machine learning algorithms run on more robust hardware such as smartphones, gateways, or even a car.

Dr. Shafique highlighted two types of security. The first focuses on ways to prevent adversaries from “tricking” a sensor and its algorithm. One way an attacker can do this is by overwhelming the sensor with “noise.” For example, it’s possible to trick the algorithms governing lane changes in a self-driving car simply by placing black squares on the pavement. The squares introduce “noise” that can confuse the algorithm, leading the car astray.

Hackers have already demonstrated the use of “noise” to confuse other types of sensors and algorithms, such as those in medical devices. Shafique called these “adversarial attacks” and offered ways to design ML models so they are less susceptible to such noise. (This is especially relevant for TinyML because the process of shrinking algorithms to run on less robust chips makes them more vulnerable to weird outlier data coming in.)
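To make the idea of adversarial noise concrete, here’s a minimal sketch of a fast-gradient-sign-style attack on a toy classifier. This is my own illustration, not something from Shafique’s talk: the model, weights, and perturbation budget are all made up. The point is simply that a small, targeted nudge to the input, calculated from the model itself, can flip the model’s answer even though the change looks like noise.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # stand-in for trained model weights
b = 0.1
x = rng.normal(size=8)        # a "clean" input, e.g. features from a camera frame
y = 1.0                       # the true label the model should predict

p = sigmoid(w @ x + b)        # prediction on the clean input
grad_x = (p - y) * w          # gradient of the cross-entropy loss w.r.t. the input

eps = 0.1                             # perturbation budget: small enough to pass as noise
x_adv = x + eps * np.sign(grad_x)     # nudge every input feature in the worst direction

print("clean prediction:      ", p)
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```

Even in this tiny example, the adversarial input pushes the prediction noticeably away from the true label, which is the same trick the black squares on the pavement are playing at much larger scale.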

The other type of security concerns the chip running the ML algorithm itself. This might involve a bad actor getting malware onto the chip that affects the hardware or the software it runs. Or the malware might simply cause the sensor to produce inaccurate results, essentially turning it into a node that lies about the state of the thing it’s measuring. In an industrial system, that might mean a temperature sensor reports an inaccurate value, causing a process to fail; in a medical device, it might mean false readings that delay a diagnosis or even trigger the infusion of an unnecessary drug.
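To see why a lying sensor is so hard to catch, here’s another toy sketch of my own (again, not from the talk). If compromised firmware reports a falsified value that stays inside the normal operating band, a naive range check waves it through and the controller never reacts. The names and thresholds here are hypothetical.

```python
# Hypothetical operating band for an industrial temperature sensor, in degrees C.
SAFE_RANGE = (20.0, 80.0)

def range_check(reading_c: float) -> bool:
    """Return True if the reading looks plausible to a simple validator."""
    low, high = SAFE_RANGE
    return low <= reading_c <= high

true_temperature = 95.0      # the process is actually overheating
reported_temperature = 62.0  # compromised firmware reports a calm, plausible value

if range_check(reported_temperature):
    # The controller trusts the sensor and keeps the process running,
    # even though the real temperature is already out of bounds.
    print(f"Reading {reported_temperature} C accepted; no shutdown triggered.")
```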

It is, in other words, super scary stuff. And what makes it even scarier is that we’re gradually going to move more and more machine learning to the edge because that’s where such processing belongs. As I’ve noted in previous posts, it simply takes too much time, money, and energy to move data to the cloud. And because most of us recognize that edge processing keeps data local to a network, and thus more private (or at least more controlled), we also tend to think of machine learning at the edge or on a sensor as more secure.

But as Dr. Shafique’s presentation made clear, that’s an erroneous assumption. This is why I recommend that anyone planning to build sensors and systems that rely on edge-based machine learning start thinking now about the security needs of both the hardware and the algorithms that will soon be making more of the decisions in our smarter and more connected world.

Stacey Higginbotham
