It’s clear there is huge demand for low-power, smarter sensors at the edge, and Legato Logic has a chip that it thinks can do just that using less than a milliwatt of power. The company has designed a chip to handle small machine learning models using a fraction of the power those models would require if they were running on a traditional microcontroller.
Shahrzad Naraghi, co-founder and CEO, calls the concept "perception" as opposed to "processing." The two-year-old company has taped out a prototype version of its chip to show that its idea works, and is now looking for partners and investors to help it release a second chip to test with potential customers.
There are two things to like about Legato. One is that it has focused on bringing to market a piece of silicon that already has machine learning models running on it. In Legato's first chip, the models will detect objects and wake words. The approach resembles the simple sensors Pete Warden is building with Useful Sensors (see story above): hide the complexity of matching a model to hardware and simply offer a low-power sensor.
The second thing to like is that the sensor can run on harvested energy or battery power for years. That means a city could put a chip like this on a streetlight and use it to count cars, or maybe even to detect people. If this sort of functionality could be built into a light and did a good enough job tracking people, the city could then use that detection to trigger the lights to brighten or flash, for example, thereby alerting drivers when there’s a person on the road.
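To make the streetlight idea concrete, here's a minimal sketch of the control logic. Everything here is my own illustration, not Legato's API: the sensor chip emits only a detection event (no images leave the device), and the light controller reacts locally.

```python
# Hypothetical event handler for a smart streetlight. The sensor reports a
# classification label ("person" or "car"); the light state is a plain dict.
# All names and values are illustrative assumptions, not a real product API.

def on_detection(event: str, light: dict) -> None:
    if event == "person":
        # Brighten and flash to alert drivers that someone is on the road.
        light["brightness"] = 100
        light["flashing"] = True
    elif event == "car":
        # For traffic counting, just increment a local tally.
        light["car_count"] = light.get("car_count", 0) + 1

light = {"brightness": 40, "flashing": False}
on_detection("car", light)
on_detection("person", light)
print(light)  # brightness raised, flashing on, one car counted
```

The point is how little computing is needed once detection happens on-chip: the "action" side is trivial, and no image data ever has to travel over a network.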
Again, an always-on sensor that detects people without sending images back to the internet isn't a terribly exciting use case on its own, but using that information to take some sort of action does let us use computing as a tool to solve a real problem in a new, cheap way.
Legato’s technology is a specially designed computing engine that uses time-domain computing to perform calculations at incredibly low voltages, which saves on power. I’m not going to force y’all to focus on that part other than to note that it is technically feasible and the resulting chip can be built using traditional manufacturing processes. The chip does need to be specially programmed, but because it’s an integrated model/sensor requiring specialty software, that isn’t really a big deal. It’s not like we want it to run Windows.
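For the curious, here's the intuition behind time-domain computing, sketched in Python. This is a toy model of the general idea (not Legato's actual circuit): instead of representing a number as a voltage level, you represent it as a pulse delay, so an addition becomes nothing more than chaining two delay elements end to end, which hardware can do at very low voltage.

```python
# Toy illustration of time-domain arithmetic: values are encoded as delays.
# The unit_ns parameter is a made-up scale factor, purely for illustration.

def encode(value: float, unit_ns: float = 1.0) -> float:
    """Encode a number as a pulse delay (in hypothetical nanoseconds)."""
    return value * unit_ns

def add_in_time_domain(delay_a: float, delay_b: float) -> float:
    """Chaining two delay elements sums the values they encode."""
    return delay_a + delay_b

def decode(delay_ns: float, unit_ns: float = 1.0) -> float:
    """Measure the total delay to recover the numeric result."""
    return delay_ns / unit_ns

total = add_in_time_domain(encode(3), encode(4))
print(decode(total))  # 7.0
```

In real silicon the "delays" come from analog circuit elements rather than floats, which is why this style of computation can run below the voltages a conventional digital multiply-accumulate unit needs.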
The founders have so far raised about $500,000 in funding and are seeking about $3 million to get the second generation of their chip into the hands of potential customers. The founders have worked together before, at companies including IBM and Cadence, and count Alessandro Piovaccari, the former CTO of Silicon Labs, as an advisor.
I like the concept, and I find myself wondering if power-sipping silicon designed for highly specific use cases that could scale to billions of units (these tiny sensors could end up everywhere) changes the economics of how we build chips and software. We’re so used to having a few general-purpose options (Arm vs. x86) that can scale massively, with software built to support those systems.
But if computing really does end up everywhere, it stands to reason that some chip designs or ideas that would have otherwise been doomed to become expensive ASICs could become incredibly widespread and get their own software ecosystems. I don’t think that will necessarily happen here, but it’s an idea I’ve been spending a lot more time thinking about lately.