This week, I was at ARM TechCon in San Jose, Calif., where the chatter was more about chips than about the surprise Pacific Gas & Electric power shutoffs plaguing the region. Still, the PG&E mess — the utility is trying to keep sparks from its lines and transformers from starting catastrophic wildfires — is a stark reminder of what happens when companies don’t invest in infrastructure.
Infrastructure was also the primary focus of the ARM conference, which makes sense given that the host’s chip designs are an essential piece of infrastructure in sensors, cell phones, IoT gateways, and even cellular base stations. The company is also trying to move into servers inside data centers. Drew Henry, an SVP at ARM, took to the stage on the second day of the event to explain how the internet of things and 5G will change both the infrastructure of the internet and the nature of how it works.
Henry contended that the internet will need to adapt to more devices at the edge sending in data, saying that the internet will switch from a consumption model to a creation model. I’ve heard this before. Back when YouTube launched and we started seeing the rise of mobile video, experts thought the internet would move from a consumption model to a creation model. After all, making video was so much easier than writing, and it consumed so much more data.
The thinking at the time was that the boost in consumer-generated video would even out the then-status quo of heavy data demand on the download side. The concern was that infrastructure built to deliver fat files to users at the edge would need to shift. And shift it did: we saw content delivery networks and caching tools move closer to users, for example. But we did not see uploads and downloads even out, because people still like to binge on hours of Netflix while maybe sending a few 15-second clips up to TikTok.
The biggest change we didn’t see coming was how, thanks to the rise of social networking and user-generated content, the data centers operated by many of the big players in the field would have to change their architecture. Essentially, companies such as Google and Facebook saw the traffic inside their data centers shift from a north-south orientation (data entering the data center from somewhere on the internet and being sent back out) to an east-west orientation (servers talking to other servers inside the same infrastructure), as they had to pull data from machines across their own fleets to fulfill each request.
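The north-south/east-west distinction boils down to whether both endpoints of a flow live inside the data center. A minimal sketch using Python's standard `ipaddress` module; the `10.0.0.0/8` block and the specific addresses are illustrative stand-ins, not anyone's real address plan:

```python
from ipaddress import ip_address, ip_network

# Hypothetical address plan: one private range stands in for
# "servers inside the data center."
INTERNAL = ip_network("10.0.0.0/8")

def classify_flow(src, dst):
    """Label a flow east-west (both endpoints inside the data center)
    or north-south (it crosses the data center boundary)."""
    inside = ip_address(src) in INTERNAL and ip_address(dst) in INTERNAL
    return "east-west" if inside else "north-south"

# One user request arriving from outside now triggers several
# internal server-to-server hops to assemble the response.
flows = [
    ("203.0.113.7", "10.0.4.2"),   # request in from the internet
    ("10.0.4.2", "10.0.9.13"),     # fan-out to another internal server
    ("10.0.9.13", "10.0.7.1"),     # another internal hop
]
labels = [classify_flow(src, dst) for src, dst in flows]
```

The shift the big players saw is exactly this: the east-west entries start to outnumber the north-south ones.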
These are broad trends that can exist at the same time. And each of them requires new computer science problems to be solved, with the solutions finding their way to other providers as they face similar challenges down the road. So what will the internet of things mean for infrastructure? And what computer science problems will we have to solve?
When we’re talking about internet infrastructure, we’re really talking about two significant changes, one of which was articulated by ARM at the show: the IoT will lead to billions of devices at the edge (and that edge may be a sensor or a mini data center inside a cell tower) that will need to capture and process data locally.
That is what ARM focused on, laying out a strategy that spans chips designed for sensors, which can collect and ship data while maintaining a three-year battery life, all the way up to powerful multi-core processors that take in the sensor data, run machine learning algorithms on it, and then send back commands or pass important bits along to the cloud. These chips will need to be able to run neural networks, and will govern not only data flow from sensors but the performance of the upcoming 5G networks as well.
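The gateway pattern described above (process locally, send back commands, forward only the important bits) can be sketched as a toy loop. A plain threshold stands in for the neural network here, and every name in the sketch (`process_at_gateway`, the sensor IDs, the threshold value) is hypothetical:

```python
def process_at_gateway(readings, threshold=80.0):
    """Score each sensor reading locally; forward only the anomalous
    ones to the cloud, and answer every sensor with a command."""
    to_cloud, commands = [], []
    for sensor_id, value in readings:
        if value > threshold:                # stand-in for a real ML model
            to_cloud.append((sensor_id, value))
            commands.append((sensor_id, "throttle"))
        else:
            commands.append((sensor_id, "ok"))
    return to_cloud, commands

# Three readings arrive; only the two above threshold travel onward.
readings = [("vibration-01", 91.5), ("vibration-02", 12.3), ("temp-07", 85.0)]
to_cloud, commands = process_at_gateway(readings)
```

The point of the shape, not the numbers: every sensor gets a local answer, but the cloud sees only a filtered slice of what the edge created.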
Henry declared that the total addressable market for silicon at the edge will hit $30 billion by 2025; clearly, that’s what ARM is designing for. Its chips will sit inside the sensors and the gateways on factory floors (a market that now mostly belongs to Intel), and will manage the routing of network traffic for 5G base stations that are trying to modulate massive multi-antenna arrays.
What I didn’t hear from ARM is that if all of this intelligence at the edge works, less data will head to the cloud. So what we’ll really end up with is a tri-part internet, with a bunch of data flowing between things on the far edge and a gateway. That gateway should have a robust connection to the sensors and plenty of processing power, but it might be located in a house, on a factory floor, or even in a car.
Zettabytes of data will flow up from sensors to those gateways (the creation model) and then a tenth of that will flow up to the cloud. Meanwhile, we’re still going to be pulling down tons of cloud data in the form of videos, updated machine learning models, and software updates. And there will still be a lot of cloud-to-cloud traffic as different services make calls to other services to perform tasks on the user’s behalf.
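The back-of-the-envelope version of that split, with the volume treated as a purely illustrative number rather than a forecast:

```python
# Illustrative numbers only: assume the far edge creates 5 ZB of
# sensor data in some period, and "a tenth of that" reaches the cloud.
sensor_to_gateway_zb = 5.0
gateway_to_cloud_zb = sensor_to_gateway_zb * 0.10
absorbed_at_edge_zb = sensor_to_gateway_zb - gateway_to_cloud_zb
```

Whatever the real volumes turn out to be, the ratio is the story: the overwhelming majority of created data lives and dies between the sensor and the gateway.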
When it comes to infrastructure, I can see a few consequences of these trends. The first is that the infrastructure most of us have been able to outsource to the cloud or to services is coming home. Businesses and even home users will have to run local hardware capable of parsing streams of data. Latency will benefit (things will happen faster), and so will privacy (data will stay on local machines instead of on Google’s or Amazon’s servers).
The downside is that bringing gear like this into the home, enterprise, or factory means we have to manage it. While companies ranging from Ericsson (5G networks in a box) to Eero (managed mesh Wi-Fi in a box) have been releasing products to make such management easier, we’re still early in the game here. And there remains a distinct challenge around standards and interoperability for many of the products aimed at industrial, enterprise, or consumer IoT.
That means many of the businesses built around industrial and enterprise IoT won’t scale, because there’s such a large consulting element involved. It also means that the IT jobs once farmed out to cloud providers are now coming back to the enterprise, as companies require sysadmins to manage gateways and software that’s now local.
I’m also seeing a shift in how companies plan to sell their wares, driven by this new reality of infrastructure stuck at the edge. For the last 15 years, we’ve embraced the software-as-a-service model, but with all this hardware at the edge, we may see a return to the appliance model layered on top of SaaS. Companies are now renting devices as a service, wrapping the cost of device management and replacement in with the software running on top. So far, companies selling IoT gear for the edge are trying to keep the monthly fees in line with traditional SaaS pricing where possible, but the underlying cost structure will differ wildly from a SaaS model.
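To see why the cost structures diverge, here is a back-of-the-envelope comparison under entirely made-up numbers; the fee, contract length, and hardware costs are all assumptions for illustration, not data from any vendor:

```python
monthly_fee, months = 50.0, 36          # assumed price and contract length
revenue = monthly_fee * months          # same top line either way

saas_cost = 5.0 * months                # hosting and support only
device_cost = 400.0                     # hardware shipped to the customer
setup_labor = 150.0                     # install and maintenance visits
replacements = 0.10 * device_cost       # expected cost of dud units
daas_cost = saas_cost + device_cost + setup_labor + replacements

saas_margin = (revenue - saas_cost) / revenue
daas_margin = (revenue - daas_cost) / revenue
```

Under these toy numbers, the same monthly fee that yields a fat SaaS margin yields a much thinner one once hardware, truck rolls, and replacements come out of it.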
After all, those edge companies are going to take on the labor associated with maintenance and setup, as well as the cost of replacing aging or dud devices. Doing so will lower their margins and add risk tied to the hardware. There are many other facets to the transitions around the internet of things, 5G, and edge computing, but this subtle shift in internet infrastructure, and what it means for business, feels like a good place to focus for now.