By Randy Johnson, Staff Software Engineer | Nov 8, 2023
Earlier this week we announced Threeport, an open-source application orchestrator. While this may sound like a radical departure from what you’re used to, we wanted to take the time to discuss why we think it is a logical next step in the evolution of software. We will take a brief look at the history of computing and show that Threeport fits nicely into it as a solution to common problems that development teams face today.
Software is all about abstractions. Within each layer of a typical software engineering stack we tend to follow a similar pattern: solve problems with the simplest solution possible (often homegrown tooling and scripts), until that approach breaks or no longer scales. Then, we’ll either create a new abstraction or use a well-established one that’s been proven elsewhere.
At the infrastructure layer, this has taken many forms. Physical infrastructure has been abstracted by VMs. Within these VMs, we developed containers to handle the problem of process isolation. To solve the problems of scheduling, bin-packing, service discovery, and networking for all of these containers, we ended up with container orchestration tools like Kubernetes. All of these paradigm shifts have a number of things in common.
While each new abstraction presents a programmable interface for concerns that are in scope, a distinct set of problems always remains out of scope. The easiest way of tackling these leftover concerns is often called “automation”: various CLI tools packaged into a script that forms some kind of synchronous, event-triggered pipeline. For example, configuring a bare-metal server to host an application was eventually replaced by simply asking a hypervisor to set up a new VM for us. In either case, the work of making that request was left to the client, typically a script or pipeline. What evolved over time is where the complexity of the action is handled. If you’re familiar with the concept of shift-left testing (designing development processes to catch errors as early as possible), this can be thought of as its opposite for handling complexity: while we use simpler CLI tools in our workflow, we “shift right” the underlying complexity and delegate it to software.
Nowadays, things aren’t a whole lot different. An advanced infrastructure may consist of a Kubernetes cluster whose configurations are generated via various YAML-wrangling tools in a pipeline before being deployed. It’s a similar story for provisioning this cluster and its underlying infrastructure via infrastructure-as-code tooling. On one hand, this is a big improvement over what this might have looked like without a hypervisor, Docker daemon, or Kubernetes control plane. On the other hand, we’re still left with pipeline tasks which can often be categorized as undifferentiated heavy lifting. In this 2006 AWS blog post on the topic, Jeff Barr writes:
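In concrete terms, such a pipeline often looks something like this hypothetical CI job (tool choices, file paths, and step names are illustrative, not prescriptive):

```yaml
# Hypothetical CI job: every step is glue around a CLI tool.
deploy:
  stage: deploy
  script:
    # provision or update the underlying infrastructure (IaC)
    - terraform -chdir=infra apply -auto-approve
    # render Kubernetes manifests from templates ("YAML wrangling")
    - helm template my-app ./chart --values values-prod.yaml > manifests.yaml
    # push the rendered result at the cluster
    - kubectl apply -f manifests.yaml
```

Each tool here abstracts something real, but the pipeline that stitches them together is still bespoke glue that every team writes and maintains on its own.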
Remarkable aspects of this heavy lifting include server hosting, bandwidth management, contract negotiation, scaling and managing physical growth, dealing with the accumulated complexity of heterogeneous hardware, and coordinating the large teams needed to take care of each of these areas.
Many of these problems have thankfully “shifted right” since then and are now reduced to APIs. However, the overall pattern remains. At Qleet, we believe we’re due for an appropriate abstraction to handle the undifferentiated heavy lifting of today.
At each stage we discussed previously, operations teams were left with a set of common problems. They had to re-invent the wheel until the next feasible abstraction came along to make their lives easier. Let’s go over the problems of today that we believe are ready to be abstracted.
Before we dive in, it’s helpful to revisit why we’re here in the first place. All of the previously discussed technologies are a small part of one big problem - “I have an application that serves a core business function and it needs to live somewhere.” The technologies themselves are not an outcome that we care about, but meeting customer needs is. This is a useful starting point for discussing the motivations behind Threeport.
Let’s say Acme Corp. has a web application that is ready for production. What’s the highest-level, most generic description of what this application really needs?
With the current state of DevOps and Platform Engineering, there are dozens of ways to meet these needs using chains of disparate tools. At the end of the day, all DevOps and Platform Engineering teams are solving for the same set of requirements separately and in parallel with each other. Our hypothesis with Threeport is that this set of problems represents what should be abstracted in the next evolution of software delivery. As with earlier advancements in VMs, containers, and container orchestration, what if we could “shift right” the redundant complexities before us into an abstraction that is actively managed for us by software? In this regard, Threeport is just the next iteration of Unix philosophy and software engineering applied to a set of software delivery problems - there’s nothing new under the sun.
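The idea of complexity “actively managed for us by software” can be sketched as a control loop: rather than an imperative script that runs each step once, we declare a desired state and let software converge the actual state toward it. The following is a toy illustration of that pattern in general; every name in it is hypothetical and none of it is Threeport’s API.

```python
# Toy sketch of "shifting complexity right": declare the outcome you
# want, and let a reconciliation loop figure out the steps to get there.
from dataclasses import dataclass, field

@dataclass
class DesiredState:
    replicas: int  # what we declare we want

@dataclass
class ActualState:
    replicas: int = 0              # what currently exists
    log: list = field(default_factory=list)  # actions taken to converge

def reconcile(desired: DesiredState, actual: ActualState) -> None:
    """One pass of the control loop: diff the states and act on the gap."""
    while actual.replicas < desired.replicas:
        actual.replicas += 1
        actual.log.append(f"started replica {actual.replicas}")
    while actual.replicas > desired.replicas:
        actual.log.append(f"stopped replica {actual.replicas}")
        actual.replicas -= 1

desired = DesiredState(replicas=3)
actual = ActualState()
reconcile(desired, actual)
print(actual.replicas)  # converged to the declared value: 3
```

The user-facing interface shrinks to a declaration of intent; the complexity of getting from here to there is handled by the loop, not by a hand-written pipeline. This is the same pattern Kubernetes controllers use for containers, applied one level up.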
Ready to get started with Threeport? Check out our getting started guide here.