What if you threw a party but half the guests of honor never showed? That was the situation at the tinyML Summit that took place in San Francisco earlier this week. Together, chip vendors, startups trying to shrink machine learning (ML) algorithms, and dozens of academics put on an amazing event that demonstrated several breakthroughs when it comes to making machine learning fit on the world’s smallest microcontrollers.
But despite all the innovation on display by the 348 attendees, actual users of the technology were missing. This lack of concrete use cases was the biggest topic at an event that has seen amazing growth in a community that should be getting far more attention.
TinyML is shorthand for building machine learning algorithms that can operate on microcontrollers as opposed to more powerful chips inside gateways or cellphones. By running machine learning at the farthest edge, engineers can build products that protect privacy, conserve power, reduce latency, and in some cases even make an internet connection unnecessary. If we want to build a truly distributed internet of things with billions of sensors then we’re going to need TinyML.
Perhaps the most popular use case for TinyML today is on-device wake word detection. Offloading wake word detection to a dedicated chip means that the device can conserve power — and protect a user’s privacy — while always listening. A local ML algorithm will listen for the right wake word, and won’t wake up the internet connection or other elements of the device until it hears the right sequence of noises.
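The gating logic described above can be sketched in a few lines. This is a minimal illustration, not a real keyword-spotting pipeline: the `tiny_ml_classify` stand-in and the wake phrase are assumptions, and in practice the classifier would be a small neural network running under a microcontroller inference runtime.

```python
# Sketch of wake word gating: the radio and main application stay
# asleep until a local model recognizes the wake phrase.

WAKE_WORD = "lamp on"  # hypothetical wake phrase

class WakeWordGate:
    def __init__(self, wake_word):
        self.wake_word = wake_word
        self.radio_on = False  # network stays off while listening

    def tiny_ml_classify(self, audio_frame):
        # Stand-in for a real on-device keyword-spotting model; here we
        # pretend the "audio frame" is already a transcript string.
        return audio_frame.strip().lower()

    def process(self, audio_frame):
        # Only a recognized wake phrase powers up the connection.
        if self.tiny_ml_classify(audio_frame) == self.wake_word:
            self.radio_on = True
        return self.radio_on

gate = WakeWordGate(WAKE_WORD)
print(gate.process("background chatter"))  # False: nothing wakes up
print(gate.process("Lamp on"))             # True: wake word heard
```

The point of the structure is that everything before `radio_on` flips to `True` happens locally, on the microcontroller, with no data leaving the device.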
Two other popular use cases for TinyML are people counting and person detection. Devices running these algorithms can avoid sending images of all the people in a room to the cloud and can instead simply recognize an individual person or send the number of people in the room to the cloud, saving on bandwidth and preserving privacy. And with person detection on a battery-powered device, such as a camera, handling that detection locally can save on power.
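To make the bandwidth and privacy trade concrete, here is a toy sketch of the people-counting pattern: the device runs detection locally and uploads only a small count payload instead of the image itself. The detector and payload format are illustrative assumptions.

```python
import json

def count_people(frame):
    # Stand-in for an on-device person detector; for illustration a
    # "frame" is just a list of labeled objects.
    return sum(1 for obj in frame if obj == "person")

def make_payload(frame):
    # Only the count leaves the device -- a few bytes instead of an image.
    return json.dumps({"people": count_people(frame)})

frame = ["person", "chair", "person", "plant"]
print(make_payload(frame))  # {"people": 2}
```

The image never leaves the device, which is the whole privacy argument in one line of code.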
It’s clear that the benefits of on-device machine learning are consequential, and the options for using it are both numerous and varied. So I don’t understand why more companies aren’t putting it into use in their digital transformations or in their products. But at the event I ran into a few potential end users who had several concerns. None of them were there in their official capacity for their companies, so they didn’t want me to use their names or company names in this article, but they were happy to share their thoughts.
An individual who worked in health care said he thought the technology was still too early and that he had a lot of questions about the reliability of models. He also noted that given the regulations covering medical devices, adding machine learning as part of a device — especially any ML that’s designed as part of a diagnosis — would trigger a longer review process.
If a company built a device that simply sent data to a doctor, and that doctor then diagnosed or made recommendations to their patients, the device would be subject to a shorter regulatory review process. Thus, putting TinyML to use in medicine or health care would be challenging both because of concerns around the reliability of the ML algorithms, which might grow stale over time and require updates, and because the regulatory review process doesn’t fully address the impact of ML. (However, the FDA is working on this.)
Another potential end user, who handles hardware for a sporting company and was similarly looking for use cases at the event, noted that the silicon shortage is also causing delays in implementing new product features. He was one of several people there who bemoaned the lack of development boards on which to test new features.
And as a person who tries to bridge the gap between the engineers who build out some of the coolest and most innovative technology and the companies that might want to use it, I think that, so far, TinyML is just a little too hard to implement for product engineers. These teams will need easy ways to shrink their models to fit on a device, and likely devices that have the ML already built in (for more on that topic, see the story below).
With so many ways to use TinyML but few easy-to-implement use cases available, engineers are mostly working on making wake word detection, people detection, and basic motion capture easy for users to implement on their devices.
I did see some cool demonstrations, though, including a battery-powered, always-on glass break sensor and some interesting options for building longer-lived activity trackers. But the most compelling use cases will likely be completely new to us. One area that holds a lot of promise is using elements such as always-on wake word detection to let a lamp turn on when a person says “Lamp, turn on.” You don’t need Alexa or even an internet connection for something like that.
Marry local wake word detection with something like gaze detection and companies could build devices that don’t need the internet for voice activation. So you could look at something and simply tell it to turn on. Or a home could have an air quality sensor built into the walls or over the stove that automatically turns on a fan or ventilates to outdoor air when particulates in the home get too high.
In enterprise settings, something as simple as a person-detection sensor attached to a desk could trigger a chain of events: turning on a light, starting the HVAC system in that area of the building, and letting building services know someone is in that area in case of emergency. Elevators running TinyML that track how well doors are closing could trigger an alert if anomalies were detected. By taking such sensors offline, the elevator monitoring company could reduce the risk of spoofing attacks on the sensor, save power, and reduce latency.
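The desk-sensor scenario above is essentially one local detection fanning out to several building actions. Here is a minimal sketch of that event chain; every handler name is hypothetical, and a real building-management system would replace the print-style strings with actual actuator calls.

```python
# One local occupancy detection fans out to several building actions.

def turn_on_light(zone):
    return f"light on in {zone}"

def start_hvac(zone):
    return f"hvac running in {zone}"

def notify_building_services(zone):
    return f"services notified: occupant in {zone}"

HANDLERS = [turn_on_light, start_hvac, notify_building_services]

def on_person_detected(zone):
    # Fired by the local TinyML person-detection model; the whole chain
    # runs without a cloud hop.
    return [handler(zone) for handler in HANDLERS]

print(on_person_detected("desk-4A"))
```

Because the trigger is a local model rather than a cloud round trip, the chain still works if the building’s internet connection goes down.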
I believe there is a ton of promise when it comes to making smarter products that don’t need an internet connection, or connected products that keep and process the most sensitive data locally in order to protect privacy. But I also think that right now few companies, especially the more sophisticated consumer technology firms, are ready to give up on grabbing data and funneling it to the cloud.
That means TinyML might be waiting a little longer before it can be the belle of the ball.