Analysis

What Arm’s new architecture says about the future of compute

Arm unveiled its latest chip architecture this week, a once-a-decade event that represents Arm’s vision for the future of computing.

Chip companies have it tough. Software can change monthly, but a chip design is literally etched into silicon and then built into a device that will run that software, a process that can take three or more years before ideas hit products. By then, those ideas had better match the needs of the software actually in use.

Thus, chip designers look years into the future. And when we're talking about an architecture shift such as the v9 architecture Arm announced this week, we're looking at a bet intended to carry computing through the next decade, or longer. So it's a big deal.

Arm’s vision for computing brings computation and the internet to all the things. Image courtesy of Arm.

Arm’s new architecture is designed for two things: flexibility and security. This is because Arm’s vision for computing involves its use in everything from Amazon Web Services servers to sensors sitting on a remote oil pipeline. The architecture needs to expand or contract based on the needs of the device that’s handling the compute, and it needs a way to securely store and compute information.

For reference, back in 2011 when Arm announced its v8 architecture, the vision was around getting Arm processors into servers, and the architecture relied on moving from 32-bit to 64-bit processors. This year’s shift is broader but is part of a longstanding Arm goal: moving beyond cellphones.

Arm silicon is inside almost every cellphone out there today, but increasingly, it's also turning up in other places, like inside the Graviton server instances at AWS or as the basis of Apple's new M1 processor that replaced Intel chips inside MacBooks. At the low end, Arm microcontrollers compete with designs from Renesas and Microchip for sensors and inside wearables.

The company’s silicon is making inroads into machine learning, a job traditionally performed by graphics processors for training and Intel’s x86 chips for inference, but there are also dozens of startups pushing specialized hardware for inference that don’t necessarily rely on the Arm architecture.

Arm also has to contend with the upstart open-source RISC-V instruction set, which represents an existential threat to Arm's licensing model should it gain wider adoption across more than a few specialized use cases. And there's the $40 billion deal with Nvidia, which is facing regulatory scrutiny and blowback from Arm licensees such as Qualcomm and Microsoft.

An architecture upgrade can't address all of the challenges and competitive threats, but Arm's approach is smart. With v9, it's tackling machine learning workloads by harmonizing and improving the way it handles vector processing across its Mali graphics designs, its Ethos ML designs, and its relatively beefy Cortex-class CPU cores.

This means software designed to use Arm's vector extensions needs to be compiled only once for the whole gamut of Arm chips, and it can run on everything from the most powerful cores to the most constrained ones. This will provide flexibility for software companies that want to use machine learning in a wide array of devices.

On the security side, Arm introduced what it’s calling the Arm Confidential Computing Architecture. I wanted way more details than Arm was willing to provide for now, although it promises updates later this year. Essentially, Arm has rearchitected the chip design to add a secure area to run software that a company might not want the device owner to have access to.

Arm calls this area a realm, and it’s a separate entity from the secure enclave where hardware roots of trust and encryption keys might reside. I asked if we could think of realms like a container for running an application on the chip, and Peter Greenhalgh, Arm fellow and VP of technology, agreed with the analogy.

Software running inside a realm is both isolated from everything else on the chip and completely opaque to the operating system and hypervisor (if applicable). This addresses several concerns: enterprise applications running on multi-tenant servers in the cloud, for example, or an employee running proprietary algorithms or software on their own device and then reverse engineering the code.

Combined with encryption when moving data to and from a realm, it could also address concerns around data poisoning, in which an attacker tries to insert incorrect data into an operation to fool an algorithm or mess with a closed-loop control system.

Realms change the relationship between operating systems or hypervisors and the software running on them. The operating system still exists, but realms get a new, much smaller piece of management code that Arm calls the realm manager. Applications use the realm manager to access secured address space on the chip.

An advantage of this approach would be that even as companies stop updating their software during the life of a product, the chip provider and OS provider could still provide updates to ensure that the app inside a realm stays secure.

This would be great for long-lived IoT devices whose manufacturers might go out of business or stop supporting the software after five to seven years.

Arm also introduced technology to reduce security vulnerabilities associated with access to on-chip memory. Memory safety issues account for a huge share of vulnerabilities, from buffer overflows (the class of flaw behind the SQL Slammer worm) to use-after-free bugs.

All in all, Arm's focus on secure distributed computing represents a maturation beyond the computing models that have swung, pendulum-like, between mainframe/local computing and client-server (or cloud) models. What we could see is a truly distributed model, where computation happens wherever it makes sense based on access to data, the latency needs of the application, privacy rules, or the energy available to the device.

So we might see some machine learning performed on a light switch while reporting state back to a smart home speaker for use in other applications in the house, such as personalized scheduling. Meanwhile, anonymized data from the gateway on device performance might end up back in the cloud so manufacturers can track how their products are holding up.

The challenge will be figuring out operating systems that can optimize and prioritize computing resources across many applications and embedding that intelligence at appropriate levels in each device. Arm’s flexibility and coherent architecture across a wide range of silicon will help. So will its thinking around securing those computing jobs and applications.

We’re already seeing this level of optimization happen with regard to data processing and the antenna configurations needed for high-speed 5G networks. Soon we’ll see that applied to more applications, and Arm will be ready for it.

Stacey Higginbotham
