It’s time for yet another chip for AI at the edge

The new Flex Logix inference chip can improve performance and reduce power consumption associated with computer vision algorithms. Image courtesy of Flex Logix.

This week, at the International Security Conference, vendors showed off new sensors and solutions for the security market. A notable trend at the show was a large influx of tech that will enable security cameras to process data on the camera itself. For example, Qualcomm released two chips that are designed to handle image processing at the edge, and Western Digital launched storage cards that have more and faster memory to handle larger data files and speedy analytics.

The trend of performing local image processing on security cameras makes a lot of sense. First, sending camera data to the cloud is expensive because video files are fat and require a lot of bandwidth. Storing them in the cloud is even more expensive. Second, in many instances, customers buying security cameras want features that will immediately alert them if there’s a problem. That immediacy means images don’t have time to take a trip to the cloud for processing.

The demand for fast turnaround time and the costs associated with video storage have led to a burgeoning market for image processing at the edge. And it’s not just video that needs machine learning at the edge. Today’s voice interfaces require machine learning models at the edge so that they can offer quick responses, manufacturing processes need machine learning to predict failures, and smart cities need it in order to reduce their storage costs.

In all of these use cases, machine learning models are trained in the cloud, usually on graphics processors. The newly minted models are then deployed at the edge to analyze the data coming through, a process called "inference." Models running at the edge may be updated weekly or annually, depending on the use case.
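
To make the split concrete, here's a minimal, illustrative sketch of the inference step in Python, assuming a PyTorch model that was trained elsewhere and shipped to the device (the file name and input shape are hypothetical stand-ins, not anything specific to the chips discussed here):

```python
# Illustrative sketch of edge inference: a model trained in the cloud
# is loaded on the device and used only for forward passes.
import torch

# Assume "edge_model.pt" is a TorchScript model that was trained in the
# cloud and shipped to the device (hypothetical filename).
model = torch.jit.load("edge_model.pt")
model.eval()

frame = torch.randn(1, 3, 224, 224)  # stand-in for a camera frame
with torch.no_grad():                # no gradients needed for inference
    scores = model(frame)
print(scores.argmax(dim=1))          # e.g., the predicted class index
```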

As inference at the edge becomes a bigger workload, dozens of startups are building chips for the market. Some focus on a specific job, such as computer vision, while others are trying to be all-around workhorses for edge-based inference. In general, customers want chips that don’t require a lot of power. They also want the ability to run a lot of data from memory on the chip to the processing units on the chip.

Flex Logix is now one of those companies making an inference chip. What's notable here is that Flex Logix has an established business as an FPGA chipmaker. FPGAs are chips that can be reprogrammed after manufacturing, which makes them good for jobs where the workloads may change over time. That flexibility comes at a cost, though: they tend to be larger and run more slowly than chips that aren't programmable. Even so, in today's world of fast-changing algorithms, Intel, Microsoft, and others rely on FPGAs for some of their workloads.

Flex Logix has been making FPGAs for embedded computers for several years, and on Thursday it said it would have inference chips for customers to test toward the end of this year. Geoff Tate, the CEO of Flex Logix, says the company thought long and hard before entering such a crowded market, but ultimately believes its expertise in building FPGAs helps solve some of the challenges associated with getting data from a chip's memory to its processor quickly and without using too much energy.

At the heart of the Flex Logix InferX chip are reprogrammable interconnects that let the makers of a product tweak the silicon to optimize it for sending large amounts of data between the memory and processor. The Flex Logix cores can also be grouped together in large clusters to provide more processing capability for larger models. Tate says the company also worked hard to optimize the software that lets companies port their existing AI models to the chip.

Companies can use TensorFlow or a tool called ONNX, an open-source framework for converting AI models from other modeling frameworks. ONNX currently supports conversion from Caffe2, Microsoft Cognitive Toolkit, MXNet, and PyTorch. In other words, the new Flex Logix chip can support most of the existing AI models out there.
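
As a rough illustration of that porting path, here is a sketch of exporting a trained PyTorch model to the ONNX format using PyTorch's built-in exporter; the specific model, file name, and input shape are illustrative assumptions, not details from Flex Logix's toolchain:

```python
# Illustrative sketch: converting a trained PyTorch vision model to
# ONNX so a vendor toolchain can compile it for an edge inference
# chip. The model choice and tensor shapes are hypothetical.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)  # any trained vision model
model.eval()

# The exporter traces the model with a dummy input of the right shape.
dummy_input = torch.randn(1, 3, 224, 224)  # batch, channels, H, W
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",            # portable model file
    input_names=["image"],
    output_names=["logits"],
    opset_version=11,
)
```

From there, the ONNX file becomes the common currency: whatever framework produced the model, the chip vendor's compiler only has to understand one format.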

That, plus improved benchmarks versus existing inference chips for the edge from Intel, Nvidia, and others, has Tate confident that Flex Logix can add inference capabilities for customers new and old.

Stacey Higginbotham
