
To make ML at the edge work, make it more accessible

It’s becoming increasingly obvious that one solution to companies gathering too much data about you is to ensure the data stays on the device you’re using and that any related machine learning takes place there. But the world of machine learning at the edge is still nascent and focused primarily on a few use cases, such as object detection and wake word recognition.

Zach Shelby, the CEO of Edge Impulse, believes there is a lot more opportunity to provide compelling features using edge-based machine learning, and he’s built his company to make building those features easier for developers. Shelby previously founded Sensinode, an IoT company that Arm acquired in 2013.

This week, Edge Impulse announced additional funding and launched computer vision models designed for edge devices that developers can use.

Links with hardware companies and software libraries make it easy to transfer a model to a physical device. Image courtesy of Edge Impulse.

Edge Impulse launched earlier this year with a plan to make it easy for people who aren’t data scientists to take in machine data and build a machine learning model that will run on constrained hardware. Shelby explains that most data scientists and people building machine learning models are used to high-powered graphics processors and having access to thousands of compute nodes on which to train their models and run inferences.

A world of value awaits companies if they can just make models work on the tiny computers that sit inside sensors, smart home devices like outlets or light bulbs — even wearables. But getting a traditional ML expert to focus on the edge and constrained devices is difficult. It’s like bringing in a celebrity interior designer to redecorate your two-car garage.

Shelby wants to make it easier for traditional developers working at the edge to take in data and apply it to Edge Impulse’s existing models, which are designed for edge computing, or to take data and train new models without requiring GPUs and a data scientist. It reminds me a bit of Qeexo, the simple-to-train ML platform I covered a few months ago.

In addition to providing models, the Edge Impulse platform helps developers applying machine learning at the edge share their work with colleagues, manage data collection, and track changes made to the algorithm. This is part of an emerging discipline called machine learning ops (like DevOps, but for ML). The platform is free to use; customers pay only for commercial use, which includes those sharing and tracking features.

Shelby’s mission now is to convince embedded systems engineers that they can easily try out and use machine learning models at the edge so Edge Impulse can encourage businesses in the industrial world to start playing with the technology. The startup has pre-populated its platform with models that can analyze vibration data from machines, audio data that can track machine health (Augury also does something similar), and even audio data designed to track the health of workers. And this week it added computer vision models.

The embedded world is already familiar with using limited processing power for anomaly detection, but by running sophisticated machine learning models, it can move beyond simply detecting something strange to identifying what, exactly, that strange thing is.
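To make that distinction concrete, here is a minimal sketch in Python contrasting the two approaches. Everything in it is hypothetical: the vibration readings, the threshold, and the lookup-table "classifier" (a real edge deployment would run a trained neural network) are stand-ins chosen only to illustrate the shape of the output.

```python
# Hypothetical vibration readings (RMS amplitude) from a machine sensor.
readings = [0.9, 1.1, 1.0, 4.2, 1.0]

# Classic embedded approach: flag anything beyond a fixed threshold.
# This tells you *that* something is strange, not *what* it is.
def is_anomaly(x, threshold=2.0):
    return x > threshold

# ML-style approach: label what the anomaly actually is. This lookup
# table is a stand-in for a trained model, purely for illustration.
def classify(x):
    if x <= 2.0:
        return "normal"
    elif x <= 5.0:
        return "bearing wear"
    else:
        return "imbalance"

for x in readings:
    if is_anomaly(x):
        print(f"{x}: anomaly -> {classify(x)}")  # prints "4.2: anomaly -> bearing wear"
```

The point of the sketch is the extra step: the threshold check alone only raises a flag, while the classifier attaches a label a maintenance team can act on.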

For example, the ML model that analyzes coughs can process a 2-to-5-second audio clip in real time to listen for and track coughing. But it doesn’t just recognize a cough; it can determine the type of cough and classify it as something that needs further attention. Edge Impulse has made it easy to build the machine learning model to run on an Arm-based microcontroller, as shown in this rough prototype. Companies can put a chip running the model inside a wearable for workers, inside a device used in hospital rooms to monitor patients, or inside a sensor for an organization trying to detect illness among its staff.
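A rough sense of what "processing a short audio window" means can be sketched in a few lines of Python. To be clear, this is not Edge Impulse's pipeline: the sample rate, frame size, energy feature, and the loudness-burst heuristic standing in for the trained classifier are all assumptions made for illustration. A real cough model would extract spectral features (e.g. MFCCs) and run a neural network.

```python
# Sketch: sliding a simple feature extractor over a 2-second audio window,
# then handing the features to a (stand-in) classifier.
SAMPLE_RATE = 16000   # samples per second (assumed)
WINDOW_SECONDS = 2    # the article mentions 2-to-5-second clips

def frame_energy(samples, frame_len=400):
    """Split the window into frames and return per-frame signal energy."""
    return [sum(s * s for s in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples), frame_len)]

def classify_window(samples):
    # Stand-in for the on-device neural network: call a window "cough"
    # if one frame is dramatically louder than the window's average.
    energies = frame_energy(samples)
    mean_energy = sum(energies) / len(energies)
    return "cough" if max(energies) > 10 * (mean_energy + 1e-9) else "background"

# Synthetic window: quiet background with one sharp burst of sound.
window = [0.01] * (SAMPLE_RATE * WINDOW_SECONDS)
for i in range(8000, 8400):
    window[i] = 0.8

print(classify_window(window))  # prints "cough"
```

A production model would go further, distinguishing cough types rather than just cough versus background, but the overall shape (window, featurize, classify, all on-device) is the same.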

By making it easy to grab a model, put it in a device, and then deploy it, Edge Impulse could open up the world of machine learning at the edge. The company has signed deals with Eta Compute, STMicroelectronics, and Arduino to link its platform to microcontrollers already used by developers and product companies, which makes it easy to put a model on hardware. And paying customers get the cloud-based tracking and sharing features to further adjust models for specific use cases.

I’m genuinely excited by the opportunities a platform like this brings to engineers who understand problems in their field and who want to apply machine learning to them. Just like graphical user interfaces brought computing to more people or services like WordPress allowed more people to blog, a platform like Edge Impulse will let embedded engineers and traditional developers harness the power of machine learning.

Stacey Higginbotham
