Everything’s distributed: How EDJX is rethinking computing

Distributed computing has been high on my list of useful technologies for almost two decades. And now that everyone is getting excited about the edge, the pendulum is swinging back to some form of distributed computing network that will work in conjunction with the cloud.

To be fair, I get excited about distributed computing every few years because to me it seems like the only scalable option for providing computing or networking capabilities or applications to millions of people or devices. But every time I get excited about distributed technology I’m proven wrong. So it’s with a bit of hesitation that I offer up EDJX, a startup out of Raleigh, N.C., that’s building a business connecting other people’s servers and infrastructure as a repository for edge processing and data storage.

James Thomason, CTO and co-founder of EDJX. Image courtesy of EDJX.

The company, which has raised $5 million in funding, has deals with organizations such as ITRenew, which buys used gear from the large cloud providers to repurpose as machines that run EDJX’s protocols. James Thomason, CTO and co-founder of EDJX, says the protocols can run on large servers or on devices as small as a Raspberry Pi. The idea is that companies will contract with EDJX to send their computing workloads to the most capable machine that’s nearby for processing.

This might be a server in a co-location facility or a computer that’s running near a set of sensors that want to offload some of their data for almost-local edge processing. Companies can also run the EDJX protocols on their own hardware, creating an intranet of compute, as it were. Each endpoint in the EDJX network essentially offers caching and HTTP proxying, DNS, serverless functions, and serverless database storage.
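To make that a little more concrete, here’s a rough sketch of the kind of function an edge endpoint might run: take a reading from a nearby sensor, keep a short history in local storage, and hand back a rolling average. This is my own illustration in generic Python; the handle function, the request shape, and the local cache are placeholders I made up, not EDJX’s actual SDK.

    import json

    local_cache = {}  # stand-in for the endpoint's local key-value storage

    def handle(request):
        """Entry point the edge runtime would call for each incoming HTTP request."""
        payload = json.loads(request["body"])     # e.g. {"sensor": "t1", "value": 21.7}
        history = local_cache.setdefault(payload["sensor"], [])
        history.append(payload["value"])
        del history[:-10]                          # keep only the last 10 readings
        average = sum(history) / len(history)
        return {"status": 200,
                "body": json.dumps({"sensor": payload["sensor"], "rolling_avg": average})}

    # How the runtime might invoke it:
    print(handle({"body": '{"sensor": "t1", "value": 21.7}'}))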

The goal is to reduce the cost and complexity associated with using AWS Lambda or Microsoft Azure Functions by taking the workloads out of the cloud. Doing so eliminates the bandwidth and storage costs that can accrue when a developer tries to save a little money by paying for serverless computing only as needed instead of holding onto a traditional computing instance even when it’s sitting idle.

Serverless computing spins up computing power on demand — in other words, when a device reports in and needs to compute or take action — and so is cheaper than keeping a server “hot” all the time. But developers have to buy additional services to make serverless functions work, and costs can fluctuate wildly. The EDJX team wants to remove some of that uncertainty around cost and help make the computing happen even faster by finding available machines that are geographically close to the device that is sharing data or needs computing.
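A back-of-the-envelope comparison shows why the economics cut both ways. The prices below are numbers I made up purely for illustration, not anyone’s actual rate card, but the shape of the math is the point: serverless is a bargain for bursty workloads, and the bill can swing toward (or past) the cost of an always-on machine as traffic grows.

    ALWAYS_ON_HOURLY = 0.05        # hypothetical $/hour for a small always-on instance
    PER_INVOCATION   = 0.0000002   # hypothetical $ per serverless request
    PER_GB_SECOND    = 0.0000166   # hypothetical $ per GB-second of serverless compute

    def monthly_serverless_cost(invocations, avg_seconds, memory_gb):
        compute  = invocations * avg_seconds * memory_gb * PER_GB_SECOND
        requests = invocations * PER_INVOCATION
        return compute + requests

    always_on = ALWAYS_ON_HOURLY * 24 * 30
    bursty    = monthly_serverless_cost(invocations=100_000,     avg_seconds=0.2, memory_gb=0.128)
    constant  = monthly_serverless_cost(invocations=100_000_000, avg_seconds=0.2, memory_gb=0.128)

    print(f"always-on instance:        ${always_on:.2f}/month")   # roughly $36 with these numbers
    print(f"serverless, bursty load:   ${bursty:.2f}/month")      # pennies when mostly idle
    print(f"serverless, constant load: ${constant:.2f}/month")    # overtakes the instance at volume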

The EDJX network is also ephemeral in the sense that a company can compute what it needs using the data it has, then dump the data after getting its insight. As I wrote last week, there are several companies betting on computing architectures that are designed for dynamic environments, where data is constantly changing and the insights derived from that data are only valuable for a short period of time.

Thomason says that in these situations the most efficient model is a peer-to-peer distributed network of machines, which can be owned by different companies as long as they all run the EDJX protocol. That protocol basically routes inquiries around the available infrastructure, breaking a job up as needed to fit the available computing resources, then assembling the results into the necessary insight.
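Here’s a toy version of that routing idea, just to show the shape of it: send a job to the nearest peer that still has enough spare capacity. Again, this is my own sketch of the concept rather than EDJX’s protocol, and the peer names and numbers are invented.

    from dataclasses import dataclass
    from math import dist

    @dataclass
    class Peer:
        name: str
        location: tuple      # (x, y) stand-in for geographic position
        free_cpu: float      # fraction of CPU currently available

    def route(job_origin, cpu_needed, peers):
        """Return the closest peer with enough spare CPU, or None if none qualifies."""
        capable = [p for p in peers if p.free_cpu >= cpu_needed]
        return min(capable, key=lambda p: dist(job_origin, p.location), default=None)

    peers = [
        Peer("raspberry-pi-on-prem", (0.1, 0.2),   free_cpu=0.10),
        Peer("colo-rack-server",     (3.0, 1.5),   free_cpu=0.75),
        Peer("regional-datacenter",  (40.0, 22.0), free_cpu=0.95),
    ]

    print(route(job_origin=(0.0, 0.0), cpu_needed=0.5, peers=peers).name)  # colo-rack-server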

The technical details are a lot to get into, but the main disadvantage for traditional developers who want to use the platform is that there will be a learning curve. This isn’t a container-style architecture that can take existing code; applications need to be written anew to take advantage of the EDJX architecture, much as they would need rearchitecting to take advantage of serverless on any platform. It also uses a data model that’s eventually consistent, which means that if you need every server around the world to have the same picture of the data at the same time (or as close to it as you can imagine), this isn’t the architecture for you.
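If you haven’t run into eventual consistency before, the sketch below shows the tradeoff in miniature: two replicas can briefly disagree, and each only converges on the latest value after it syncs with its peer. The last-writer-wins merge rule here is my simplification for illustration, not a description of how EDJX actually reconciles data.

    import time

    class Replica:
        def __init__(self):
            self.store = {}                       # key -> (timestamp, value)

        def write(self, key, value):
            self.store[key] = (time.time(), value)

        def read(self, key):
            entry = self.store.get(key)
            return entry[1] if entry else None

        def merge(self, other):
            """Anti-entropy step: adopt whichever write is newer for each key."""
            for key, entry in other.store.items():
                if key not in self.store or entry[0] > self.store[key][0]:
                    self.store[key] = entry

    us_node, eu_node = Replica(), Replica()
    us_node.write("door/state", "open")

    print(eu_node.read("door/state"))   # None -- the EU replica hasn't seen the write yet
    eu_node.merge(us_node)              # replication eventually catches up
    print(eu_node.read("door/state"))   # "open" -- consistent only after the sync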

But since many of these dynamic edge workloads are emerging just as players such as EDJX make them more economical, I think that hurdle will be overcome. After all, if you need this type of real-time edge processing, you’re probably willing to pay for it.

The good news is you won’t have to shell out anything until 2021. The platform is currently in beta, and the plan is to offer customers a metered set of capabilities. As customers go beyond their paid-for capabilities, they’ll get a notice asking them to cut back on usage or pay for the next tier.

All hail our new distributed architecture. Maybe now is its time to shine.

 

Updated: This story was updated on Oct. 20, 2020, to correct the amount of funding the company raised and what each endpoint can do. 

Stacey Higginbotham
