
Bringing AI to the farthest edge requires new computing

This story was originally published in my June 30, 2023 newsletter. You can sign up for the newsletter here

We’re in the midst of a computing shift that’s turning the back-and-forth between cloud and edge computing on its head. This new form of computing has been creeping to the forefront for the last few years, driven by digital transformations and complicated connected devices such as cars.

But the more recent hype around AI is providing the richest examples of this shift. And it will ultimately require new forms of computing in more places, changing both how we think about the edge and the types of computing we do there. In short, the rise of AI everywhere will lead to new forms of computing specialized for different aspects of the edge. I’m calling this concept the complex edge.

An image sensor attached to a robotic arm. The whole device might have multiple types of chips handling edge computing jobs.

As part of this shift in computing, we have to become more nuanced about what we mean when we talk about the edge. I like to think of it as a continuum moving from the most compute- and power-constrained devices such as sensors to the most powerful servers that happen to be located on premises in a factory. In the middle are devices such as tablets, smartphones, programmable logic controllers (PLCs), and gateways that might handle incoming data from PLCs or sensors.

Moreover, each of these devices along the continuum might run its own AI models and require its own specialized type of computing to process the data feeding those models. For example, I’ve written about the need for sensors to get smarter and process more information directly.

Smart sensors turn to analog compute

Cameras or image sensors are popular examples of such devices. This vision sensor from Useful Sensors, which can do person detection on a $10 device, runs a simple algorithm that looks for people and counts them. At a higher level, which requires more processing power, sensors from Sony or chips from CEVA are able to detect specific movements, faces, or other objects.

A few weeks ago at the Sensors Converge event, a company called Polyn Technology showed off a version of a chip designed to take raw data and quickly convert it into an insight. To handle analog signals from the environment (such as vibrations or sound) with minimal delay, the Polyn chip processes the signal in the analog domain and then sends the resulting “insight” to another computer for further processing.

By using analog computing to decode an analog signal, the chip can save time and power, quickly generating an insight all on battery power or using energy harvesting. Eugene Zetserov, VP of marketing and business development at Polyn, compared the chip to a dedicated ASIC that gets programmed once and runs the same operation incredibly efficiently. The Polyn chip can make inferences on audio or vibration data at 200 microwatts while constantly running.
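To put that 200 microwatt figure in perspective, here is a back-of-the-envelope calculation of how long such a chip could run on a single coin cell. The 200 µW draw comes from the article; the CR2032 capacity and voltage are assumed, typical datasheet values.

```python
# Rough battery life for an always-on 200 µW analog inference chip.
# Assumption: a typical CR2032 coin cell (~225 mAh at a nominal 3 V).

CAPACITY_MAH = 225      # assumed CR2032 capacity
VOLTAGE_V = 3.0         # nominal cell voltage
DRAW_W = 200e-6         # 200 microwatts, running constantly

energy_j = CAPACITY_MAH / 1000 * VOLTAGE_V * 3600   # stored energy in joules
runtime_days = energy_j / DRAW_W / 86400            # seconds -> days

print(f"{runtime_days:.0f} days")   # roughly 141 days, i.e. 4-5 months
```

Months of always-on operation from a coin cell is what makes this class of chip viable for battery-powered or energy-harvesting sensors.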

Zetserov likened it to how a person processes certain information very quickly without the brain getting involved. For example, when a person touches a hot stove, the hand automatically contracts and pulls back before the brain even has a chance to register pain, much less make a conscious decision. In edge computing, an analog chip might handle the sensing of a specific frequency band associated with a wake word, then use that insight to wake up a more powerful chip that could either do the natural language processing needed to understand a command in the cloud or on a device.
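The two-stage flow described above can be sketched in a few lines. The function names and the energy-band threshold here are illustrative assumptions, not Polyn's actual interface: a tiny always-on stage watches one frequency band, and only when it fires does the system spend power waking the heavyweight speech model.

```python
# Sketch of a two-stage wake-word pipeline: a low-power gate in front of
# an expensive NLP stage. All names and thresholds are hypothetical.

def band_energy(samples: list[float]) -> float:
    """Stand-in for the analog front end: mean squared amplitude of the band."""
    return sum(s * s for s in samples) / len(samples)

def wake_word_gate(samples: list[float], threshold: float = 0.1) -> bool:
    """Always-on, ultra-low-power stage: fire only when the band is active."""
    return band_energy(samples) > threshold

def handle_audio(samples: list[float]) -> str:
    if not wake_word_gate(samples):
        return "asleep"             # the heavy chip stays powered down
    # Only now wake the application processor (or ship audio to the cloud).
    return "waking NLP stage"

print(handle_audio([0.01] * 160))   # quiet frame -> asleep
print(handle_audio([0.9] * 160))    # active frame -> waking NLP stage
```

The design choice mirrors the hot-stove reflex: the cheap reflex path handles the common case, and the expensive "brain" only runs when something interesting happens.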

Bring on the FPGAs

So let’s talk about that more complicated chip. Depending on the models running and the use case, we might see a single hop from a sensor to a larger, local computer, or we might see multiple sensors feeding into a gateway device for some pre-processing and then even more computing happening later.

Cars and robots are devices that may benefit from several layers of computing and AI. We already see ARM’s M-class microcontrollers or A-class application processors taking on some of the computing roles here. Intel also has chips that run in industrial gateways. But with power and resiliency at a premium for some edge applications, we’re also seeing reconfigurable silicon known as field-programmable gate arrays (FPGAs) take a more prevalent role.

FPGAs have been workhorses in the embedded world for decades, but as the embedded computing world gets connected, the embedded chips inside machines require more security and are taking on more computing tasks. Microcontrollers can’t always tackle the job, but FPGAs can. So we’re seeing them more often.

Shakeel Peera, VP of strategy, marketing, and operations at Microchip FPGA, told me the demand for FPGAs has been on the rise in cars, industrial devices, and defense and aerospace as products from these industries get connected. Indeed, as many people discovered during the pandemic when supply chain issues cut off the supply of chips for automotive customers, vehicles contain many more semiconductors than they did a decade ago. The increase in silicon is tied directly to the increase in sensors and functionality. So if you want to build a car that won’t drive on a flat tire, you have to add sensors inside the tires to track pressure, then send that data to a system that determines if the pressure is OK or not, and then possibly send that data to a safety system that will determine if it needs to send an alert or even take over the steering or braking.
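The tire-pressure chain above (sensor, then pressure check, then safety system) can be written as a minimal sketch. The thresholds and function names are illustrative assumptions; a real TPMS runs on dedicated automotive silicon with safety certification, not Python.

```python
# The sensor -> pressure check -> safety system chain from the paragraph
# above. LOW_PSI and FLAT_PSI are assumed, illustrative thresholds.

LOW_PSI = 25.0    # assumed warning threshold
FLAT_PSI = 15.0   # assumed "do not drive" threshold

def safety_system(psi: float) -> str:
    """Last hop: decide whether to alert the driver or intervene."""
    if psi <= FLAT_PSI:
        return "intervene"   # e.g. limit speed or assist braking
    if psi <= LOW_PSI:
        return "alert"       # dashboard warning light
    return "ok"

def handle_tire_reading(psi: float) -> str:
    # In a real car each hop is a separate chip on a separate bus;
    # here the whole chain collapses into one function call.
    return safety_system(psi)

print(handle_tire_reading(32.0))   # ok
print(handle_tire_reading(22.0))   # alert
print(handle_tire_reading(10.0))   # intervene
```

Each hop in that chain is a candidate for its own piece of silicon, which is exactly why the semiconductor count per vehicle keeps climbing.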

Those smarter systems are everywhere today. And because cars, airplanes, industrial equipment, etc. are critical systems where failures can cause a loss of life, security and resiliency are crucial. That’s why Microchip has been investing in more advanced security including trusted roots, advanced encryption, and even manufacturing security procedures that are designed to prevent supply chain attacks.

As a former chip reporter, I used to focus primarily on innovations in moving down the process node, which helps drive Moore’s Law and better performance over time. But when it comes to bringing computing, connectivity, and specifically AI to more devices, it’s clear that the way we think about semiconductors and our models for computing have to change. When we talk about the complex edge, we’re really talking about a huge variety of chips that will each have different optimizations based on their position at the edge, their job, and their connectivity.

Then we have to figure out how to scale these complex edge systems. And write software for them. And manage them.

Stacey Higginbotham
