There is a gold rush under way, this time in chips optimized for artificial intelligence. Google and Apple have both built custom silicon for the job, while an entire range of startups is trying to serve up a more efficient way of running AI workloads both in the cloud and at the edge. The challenge is that most of the chips trying to run or train AI models are doing a lot of math. And math takes power.
When it comes to edge devices, whether it is a cell phone or a sensor, power is in short supply. So the last thing that consumers need is a plethora of new home gadgets that require the power draw of a gaming rig.
That’s why I’m intrigued by a new group of AI chips aimed at the edge. The companies behind them are trying to use external accelerators on the module to speed up the calculations required by any AI algorithm. As I’ve noted before, this class of chip startups is changing how computing works by doing it in memory as opposed to on the processor. Among those startups is Syntiant, which I’ve already talked about and which last month raised $25 million.
This week, I spoke with another one of these startups, Gyrfalcon, which is also using in-memory processing to accelerate calculations for neural networks. Gyrfalcon launched its first silicon this year at CES and just this week put out a development kit that includes an edge processor, connectivity, and software running on the kit that can take data and build neural networks optimized to run on the Gyrfalcon silicon.
The Gyrfalcon processor and development kit are promising, although for developers to get the most bang for their processing buck on the Gyrfalcon chip, they have to use the proprietary framework optimized for the silicon. Doing so isn’t uncommon when you’re trying to eke out every last milliwatt, but it does require a bit more engineering work. Even so, LG and Samsung are already working with existing Gyrfalcon chips.
Since startups such as Mythic and Syntiant don’t yet have chips on the market, it’s tough to evaluate them. But it’s clear that AI at the edge is hot. Jim McGregor, principal analyst and partner at TIRIAS Research, says that if companies can build chips that are extremely low power but still accurate enough, we could have an entirely new generation of devices and device capabilities.
Imagine a sensor that can detect the sound of glass breaking, or the sound of someone falling — even a sensor that can recognize a person’s face. In all likelihood these sensors will run highly specific models as opposed to anything generalized, but they would still be powerful. And as I said, this is a hot space, so these startups are not alone.
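To get a feel for why these highly specific models can fit on a milliwatt-class chip at all, here is a back-of-the-envelope sizing sketch. The layer shapes are purely illustrative (not from any real Gyrfalcon or Syntiant design), but they show how small a single-purpose classifier can be, and how much quantizing its weights from 32-bit floats to 8-bit integers shrinks the memory that has to be moved:

```python
# Rough parameter count for a toy audio-event classifier
# (e.g. "glass breaking" vs. "background") over 32x32
# spectrogram patches. All layer sizes are hypothetical.

def conv2d_params(in_ch, out_ch, kernel):
    """Weights plus one bias per output channel."""
    return in_ch * out_ch * kernel * kernel + out_ch

def dense_params(in_features, out_features):
    """Fully connected layer: weights plus biases."""
    return in_features * out_features + out_features

layers = [
    conv2d_params(1, 8, 3),        # first conv: 80 params
    conv2d_params(8, 16, 3),       # second conv: 1,168 params
    dense_params(16 * 8 * 8, 2),   # classifier head: 2,050 params
]
total = sum(layers)               # 3,298 parameters in all

# Quantizing weights from float32 to int8 cuts the weight
# memory (and the energy spent moving it) by 4x.
float32_bytes = total * 4         # 13,192 bytes
int8_bytes = total * 1            # 3,298 bytes

print(total, float32_bytes, int8_bytes)
```

A few thousand parameters is a far cry from the millions in a general-purpose vision model, which is why a sensor that only listens for one or two sounds can live within an edge power budget.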
Other companies are building edge-based AI capabilities inside accelerators that are embedded in the chip as opposed to running dedicated accelerators alongside the main processor. And still others are building out software that can enable distributed edge-based AI processing. As McGregor notes, “There’s more than one way to skin this cat.”