
Why TinyML is still so hard to get excited about

This week, I went to the tinyML Summit in Burlingame, Calif. TinyML, or running small machine learning models on constrained devices, is one of the most exciting technologies I’ve encountered. But it’s also the one most likely to put people to sleep when I talk about it.

Using local computing to handle object detection (or even limited face detection), wake word detection, anomaly detection, and more holds the promise of bringing more privacy to the IoT, more sensors to the world, and superpowers to everyday products.

Last year, I was bummed because the conference was heavy on tech and possibilities and light on actual use cases. This year, the organizers made a big effort to show off users. Even so, I was struck by just how challenging the technology is to implement, and how hard it is to get people excited about.

Sony showed off a demo of its image sensors running tinyML models that could track a race car using fewer milliwatts per inference than a Pi. Image courtesy of S. Higginbotham.

Among the various use cases on display, two themes were common: one, that the actual model development and running TinyML on hardware wasn't difficult; and two, that packaging it or making it discoverable was. The other thing that makes TinyML so hard to talk about is that many of the implemented use cases were hidden or somewhat dull.

While at the conference, I ran into Pete Warden, founder and CEO of Useful Sensors, which I covered last year when it launched an integrated object detection sensor that sells for $10 with the model already built in. At the time, he mentioned that the company's next sensor would be a gesture recognition sensor that could be integrated into televisions or other devices. It would recognize a few basic gestures, such as waving a hand to skip to the next image or channel, or putting a finger in front of your lips to mute something.

However, at the conference Warden told me that, while he’d quickly discovered that the model worked, educating people about new gestures was tough. “No one knows that these gestures are available,” he said. This makes sense. If you remember back to the launch of the first iPhone and its touchscreen, the first ads and demonstrations focused on things like taps and pinch-to-zoom. Those weren’t intuitive; they were taught.

So instead, Warden's company is releasing a new sensor that can scan a QR code. The idea behind this $6 sensor is that appliance makers can put it inside their products as an easier way to get devices onto Wi-Fi. A user could simply show their Wi-Fi QR code (I find mine in my router app) to the sensor and get their, say, fridge or washer online. I think it could be neat as a way to transfer a recipe to an oven, or specific washing instructions to a washing machine for particular items of clothing. Unfortunately, aside from flashier ideas like scanning a new shirt so the machine adjusts its parameters for the best wash, many of the use cases for TinyML are going to be kind of boring.
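Useful Sensors hasn't published the interface for the new sensor, so the details below are assumptions, but the Wi-Fi QR codes routers generate follow a well-known text format. Here is a minimal Python sketch of what an appliance would have to do with the decoded payload:

```python
# Minimal sketch of parsing a standard Wi-Fi QR payload such as
# "WIFI:T:WPA;S:HomeNetwork;P:hunter2;;". The function name and field
# handling are illustrative assumptions, not Useful Sensors' actual
# interface. Escaping of special characters is ignored to keep it short.

def parse_wifi_qr(payload: str) -> dict:
    """Split a 'WIFI:...' payload into its fields (T=auth, S=SSID, P=password)."""
    if not payload.startswith("WIFI:"):
        raise ValueError("not a Wi-Fi QR payload")
    fields = {}
    for part in payload[len("WIFI:"):].split(";"):
        if ":" in part:
            key, _, value = part.partition(":")
            fields[key] = value
    return fields


if __name__ == "__main__":
    creds = parse_wifi_qr("WIFI:T:WPA;S:HomeNetwork;P:hunter2;;")
    # Hand the SSID and password to the appliance's Wi-Fi stack.
    print(creds["S"], creds["P"])
```

The appeal is that the appliance itself never needs a screen or a companion app to get online; the credentials simply arrive through the sensor.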

Elsewhere at the event, HP showed off two TinyML implementations with ST Micro that are embedded in new laptops. The first TinyML model uses a gyroscope to detect if a laptop has been placed in a bag or taken out of a bag. The idea behind the implementation is that the laptop will start booting up when it’s taken out of a bag in preparation for its owner to use it. If the model detects the laptop has been placed in a bag, it will change heating and cooling parameters to make sure the laptop doesn’t overheat.

The second use case also helps with thermal management. In that use case, the laptop detects when it is on a hard or soft surface. If it’s on a soft surface, like a bed or a person’s lap, it will try to run cooler so as to avoid overheating.
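Neither HP nor ST Micro shared implementation details, but the general pattern for this kind of IMU-based context detection is well established: buffer a short window of gyroscope (and usually accelerometer) samples, run it through a small quantized model, and act on the predicted class. Here is a minimal sketch of that loop using TensorFlow Lite's Python runtime; the model file, window shape, and class labels are assumptions rather than anything HP disclosed:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

# Hypothetical model and labels, stand-ins for whatever HP/ST actually ship.
interpreter = Interpreter(model_path="imu_context_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

CLASSES = ["in_bag", "out_of_bag", "soft_surface", "hard_surface"]  # assumed labels


def classify_window(imu_window: np.ndarray) -> str:
    """Classify one buffered window of motion data.

    imu_window is assumed to match the model's input tensor, e.g.
    (1, 128, 6) for 128 samples of 3-axis gyro plus 3-axis accel.
    """
    interpreter.set_tensor(input_details[0]["index"], imu_window.astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return CLASSES[int(np.argmax(scores))]
```

In the laptop this logic would run on a microcontroller next to the sensor (via TensorFlow Lite for Microcontrollers or ST's equivalent tooling) rather than in Python, but the flow, and the tiny amount of compute involved, is the same.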

Which is neat, but not anything you’d write home about. It’s also not a reason someone would buy a laptop, which makes it hard to justify adding TinyML to one. Many of the consumer use cases at the show fit this mold. Using TinyML to track where a person’s face and ears are as part of a sound bar, for example, does help deliver great sound, but it’s also a nice-to-have element, not a need-to-have one.

On the industrial side, things get a little more interesting, but the challenge there is that few companies want to talk about TinyML. As Warden noted to me, industrial users view success with TinyML as a competitive advantage and so are loath to share the details of their success with potential competitors. Having previously been at Google and elsewhere in the tech world, where success in innovation is heavily touted, he found the reluctance to share disheartening and surprising. I found his surprise at this charming.

Another example of how difficult it was to turn a TinyML solution into a product came during a presentation from the founders of a startup called Shoreline IoT. Shoreline IoT makes a ruggedized sensor that can be flashed with different ML models to detect different issues. CEO Kishore Manghnani said that getting useful models running on the computing hardware only solved about 15% of the problem associated with industrial sensing. The other 85% was in packaging the sensor into a form factor that could be deployed by anyone, in rugged environments, with good connectivity (among other things).

Boring use cases, challenges packaging a solution, and customers that don't want to talk are not obstacles faced solely by TinyML. In many ways, these are issues the tech industry will have to increasingly confront as it pushes computing and connectivity into more places. While a computer felt like a solution in and of itself once we added the internet and an array of online services (rather than the fancy calculator, word processor, and game player it was in the late '70s and '80s), computing is really just a tool designed to solve existing problems.

In many circles, connectivity and computing are seen as a way to add new services to more devices (and charge for them accordingly), but it may be that all we really need are new ways to solve old problems using better tools. TinyML is one such tool, one that will allow more information to be processed quickly, privately, and perhaps without consuming much power.

That’s nothing to scoff at, but it may mean that those touting the technology have to adjust their expectations accordingly.

Stacey Higginbotham
