Are you sick of hearing the word edge yet? I am. The tech world is struggling right now to come up with the right way to express a complex shift in how it thinks about computing infrastructure. The result is overuse of the word edge, whereby it means everything from a sensor to a telco data center.
We saw similar confusion back in the mid-aughts, when companies were trying to build massive clusters of compute power that could be accessed on demand. IBM and HP called it autonomic computing and grid computing, respectively. But getting access to that compute power required a few days' notice and some serious engineering work on IBM's and HP's part to deliver on what companies ordered. It was Amazon and VMware that ended up realizing the lasting vision behind IBM and HP's massive computing farms.
Thanks to the ability to segment massive server power into smaller computing units through the use of VMware’s hypervisors, Amazon was able to create a shared, on-demand computing and storage infrastructure that became Amazon Web Services. In 2006, the cloud was born, but it took a few years to settle on a definition of what the cloud actually was.
This is where we are today with edge computing. For many, the edge is anything that isn't the cloud. But as the internet of things matures, I'm seeing four different layers of the new IoT computing infrastructure evolve. There's the cloud, but then there's the edge, which can be divided into three layers.
The first, and smallest, is the device layer: the sensors and connected devices that are usually battery-powered, use mesh networks, and have relatively small computing footprints.
The second is the gateway layer. It’s where the sensor data comes into a box on the factory floor or a server located in an autonomous car. In the home, the gateway might be a Wi-Fi router or set-top box with computing and a variety of radios to bridge the various low-power mesh communication protocols.
The third layer is a larger cluster of servers geographically close to the sensors, able to process data and provide regionalized services that are latency-sensitive. This is what I’ll call the edge. It may be a micro data center processing all of the data gathered from traffic cameras (but not the video itself, which would be processed at the gateway), or a telco point of presence where a customer might aggregate data from thousands of smart home devices.
The fourth layer is the aforementioned cloud. Not every piece of data will travel through all four layers. A rare few pieces of device data may go directly to the cloud. But in general, this is the framework I’m using to think about edge computing services and startups going forward.
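The layered framework above can be sketched as a simple routing decision. This is purely an illustration of the idea; the function name and latency thresholds are hypothetical, not drawn from any real product.

```python
# Hypothetical sketch of the four-layer IoT framework described above.
# The thresholds are invented for illustration only.

def choose_layer(latency_budget_ms: float, data_mb: float) -> str:
    """Pick the lowest layer that can plausibly handle a workload."""
    if latency_budget_ms < 10 and data_mb < 1:
        return "device"   # tiny, ultra-low-latency work stays on the sensor
    if latency_budget_ms < 50:
        return "gateway"  # a box on the factory floor or in the car
    if latency_budget_ms < 200:
        return "edge"     # micro data center or telco point of presence
    return "cloud"        # everything else can tolerate the round trip

print(choose_layer(5, 0.1))    # device
print(choose_layer(100, 500))  # edge
```

As the article notes, most data would pass through several of these layers rather than jumping straight to the cloud.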
This worldview presents plenty of opportunities for established companies and startups alike. Open questions include how to handle security across those layers, how to orchestrate services across them, and even whether companies should have a unified view of them all.
Which brings me to Edgeworx.io, a startup founded last September to build software for the edge, specifically the gateway edge and micro data center edge. Farah Papaioannou, president and co-founder of Edgeworx, says the company’s software runs on these devices and links various nodes together using a proprietary mesh network that can operate on top of a variety of different types of networks. She views it as the bridge between the devices and the cloud.
The company's software is meant to act as a platform for applications and microservices that an organization wants to run across all of its edge devices. It reminds me a bit of Resin.io in that it recognizes the need for developers to have an easy and scalable infrastructure for building apps across many devices. Like Resin.io, it supports containers. It also supports unikernels, as edge startup Zededa does.
From a security perspective, it offers accountability for data traveling between nodes and can let companies geofence any of that data deemed to be sensitive. This comes in handy for complying with local regulations and for ensuring data stays secure. It also does away with traditional public-key encryption, replacing it with a tweaked scheme that certifies each node on the Edgeworx mesh based on its hardware profile (including things such as trusted compute modules and hardware roots of trust). Instead of a static key, the devices get a rolling session key that's dependent on their hardware remaining consistent.
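The rolling-key idea can be sketched in a few lines. To be clear, this is not Edgeworx's actual scheme, just one common way to realize it: derive a key from a hash of the hardware profile plus a session counter, so the key rotates every session and stops validating if the hardware changes.

```python
import hashlib
import hmac

def hardware_fingerprint(profile: dict) -> bytes:
    """Hash a (hypothetical) hardware profile: TPM ID, root-of-trust data, etc."""
    canonical = "|".join(f"{k}={profile[k]}" for k in sorted(profile))
    return hashlib.sha256(canonical.encode()).digest()

def session_key(profile: dict, counter: int, mesh_secret: bytes) -> bytes:
    """Derive a rolling key: changes every session, breaks if hardware changes."""
    msg = hardware_fingerprint(profile) + counter.to_bytes(8, "big")
    return hmac.new(mesh_secret, msg, hashlib.sha256).digest()

profile = {"tpm_id": "abc123", "root_of_trust": "vendor-x"}
k1 = session_key(profile, 1, b"shared-mesh-secret")
k2 = session_key(profile, 2, b"shared-mesh-secret")
assert k1 != k2  # the key rolls from session to session
```

The appeal of tying the key to the hardware profile is that a node whose trusted compute module has been swapped or tampered with simply stops deriving valid keys.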
Edgeworx has made the underlying code for its platform open source, and it is managed by The Eclipse Foundation. Papaioannou says the company plans to encourage others to build microservices on top of the platform. An example of a microservice might be a SCADA-to-MQTT translator, or Twilio’s voice platform. A robust ecosystem would entice customers to use the platform, generating enterprise customers who could buy a license for it.
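In rough outline, a SCADA-to-MQTT translator microservice takes raw readings from industrial equipment and republishes them as MQTT topic/payload pairs. The sketch below is invented for illustration; the topic scheme and function names are assumptions, and a real service would speak an actual SCADA protocol such as Modbus and publish via an MQTT client.

```python
import json

def scada_to_mqtt(plant: str, readings: dict) -> list:
    """Convert raw SCADA tag readings into MQTT (topic, payload) pairs."""
    messages = []
    for tag, value in readings.items():
        topic = f"factory/{plant}/{tag}"          # hypothetical topic scheme
        payload = json.dumps({"value": value})    # JSON payload for subscribers
        messages.append((topic, payload))
    return messages

msgs = scada_to_mqtt("plant1", {"pump_pressure": 42.5, "valve_open": 1})
```

A library of small translators like this, contributed by third parties, is exactly the kind of ecosystem play the company is describing.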
That same ecosystem could provide revenue in the form of a cut from sales of popular microservices, much like Apple takes a cut of sales from the App Store. It has signed an oil and gas firm as its first paying customer.
I like the vision and the clarity that Papaioannou has given to my conception of the edge, but I do wonder how Edgeworx will fare against other efforts to create unified edge computing platforms. For example, Microsoft is working hard to do this, and has chosen Kubernetes as the orchestration and management layer in its vision.
However, Microsoft’s vision includes the cloud, something Edgeworx’s doesn’t. Papaioannou says that’s because many clients don’t want their data ever touching a public cloud. It’s so early on in this game that we’ll simply have to wait and see how things turn out.