OK y’all, it’s time to get excited about servers. Before I lived for the internet of things, I lived for hyperscale data centers and server news. Today — after watching Yuval Bachar from LinkedIn explain a hardware project for edge computing called Open 19 — I’m combining both. Bachar is also the chairman of the Open 19 Foundation.
In a presentation at the Edge AI Summit event in San Francisco, Bachar explained a vision for edge computing built inside the base of cell towers. He argued that we need infrastructure at this edge because so much mobile data will be coming in over cellular networks and some use cases will require incredibly low latency.
The goal of the Open 19 project is to build a data center that can slot easily into the infinite variety of structures that currently house cell towers. In some cases there are actual buildings, but in others, there’s a shed or a container full of gear in the middle of a field. These sites are power constrained, size constrained, and physically insecure. They also have different sources of power coming into them.
The project doesn’t have a solution for the physical insecurity yet, but to handle the power and space challenges, the team designed what it calls a brick cage. Inside the brick cage there is room for servers in four different configurations, plus two power shelves and two networking shelves.
Running down the back of the brick cage are modular cables for networking and power that can be snapped together. This is unusual, because typically the back of a rack of servers can look like spaghetti — or, if you have a good sys admin, like neatly bundled ropes of cable. In this design, you have two poles running up the back into which the servers connect. The Open 19 architecture supports 100 gigabits per second to each server and will later introduce optical cabling.
Such a design means anyone, including the UPS driver who drops off the servers, can plug them into the rack and bring them online. Given that cell towers are, by their very nature, distributed across the landscape and can be a pain to get to, this is a very smart setup. Sending your tech folks to deploy servers to towers across the U.S. could take weeks.
The power shelves are also designed for the cell tower edge. Many of the 100,000 cell towers in the U.S. and the 450,000 towers in Europe receive different types of power, such as AC or DC. The power shelf is designed to accept those different inputs and convert them into something the servers can use.
Finally, there are the servers themselves. The Open 19 Foundation calls them bricks and offers a few designs to choose from. They are designed to be densely packed into the rack and optimized to handle the limited power coming into a cell tower. Bachar says that using Open 19 gear has saved LinkedIn 40% in capital expenditure, reduced the space its gear needs in cell towers by a factor of four, and made installation nine times faster.
The project was created in May 2017; members include GE Digital, HP Enterprise, Flex, and tower operator Crown Castle. But while there are a lot of equipment vendors on the membership list, so far LinkedIn appears to be the only company using Open 19.
It may be that there isn’t yet enough demand for low-latency distributed computing at the edge of the cellular network. After all, most of the reasons someone might want such computing are still in development. For example, VR gaming among friends, connected autonomous cars, and broadband service for trains and planes all could take advantage of such a network. Ironically, the traditional IoT probably doesn’t need it: while data processing and automation require low latency, the input coming from sensors does not.