The Hyperconvergence Revolution
Part 1 - The Composable System
A revolution is taking place at the network edge. In the early days, rack-mount computers were purchased in various sizes, mounted, and dedicated to a particular application. Over time, virtualization took hold, along with the idea of separating storage from the processor and memory, leading to a disaggregated solution. Storage was often connected through the network as a Storage Area Network (SAN) or Network Attached Storage (NAS), providing pooled, high-efficiency storage for large-scale applications. The trend in recent years has been toward converged systems, which primarily means moving the storage back into the box, as a device locally attached to the processor's bus.
Why this movement? As the cost of storage has dropped over the last several years, the cost-to-performance tradeoff has shifted. Sometimes storage is inexpensive enough to justify wasting storage attached to processors to gain performance; sometimes storage is expensive enough to justify working around the performance costs of storage accessed over a network. A new trend started several years ago, however, that is already shifting this tradeoff: hyperconvergence. In hyperconvergence, the storage devices locally attached to each processor are pooled across the network into a single shared resource, controlled by software running on the processors themselves. The figure below illustrates each of the steps in this chain.
In this illustration, the progression from a more traditional bare metal design to a hyperconverged design is shown. In the traditional design, each processor, along with its attached storage, memory, and network, is dedicated to a particular application. In the traditional virtualized design, the processor is virtualized, running multiple virtual machines, and some portion of the memory and network is set aside for each virtual machine. Storage is centralized, accessible over a single fabric or over a separate fabric designed specifically for storage. In the converged model, the storage is again local; virtual machines are given some amount of local storage alongside the memory and network resources. Finally, in the hyperconverged model, storage locally attached to each processor's bus is pooled by a storage manager, so it appears to all the virtual machines as a single centralized resource, much like in the diagram marked traditional virtualized.
While hyperconvergence does not appear to be a radical change, it actually is. Centralized storage systems are actually scale-up systems: to add more storage, you need to make the centralized storage system larger. In the hyperconvergence world, storage is scaled out. When you add a new processor and memory, you also add new storage. The virtual storage manager simply "consumes" the new storage, adding it to the shared pool. This allows hyperconverged systems to be more flexible in their configuration and scaling.
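The scale-out behavior described above can be sketched in a few lines of code. This is a minimal illustration, not a real storage stack; the class and method names (`Node`, `VirtualStorageManager`, `add_node`) and the capacities are all hypothetical:

```python
class Node:
    """A compute node with some amount of locally attached storage (in GB).

    All names and sizes here are hypothetical, for illustration only.
    """
    def __init__(self, name, local_storage_gb):
        self.name = name
        self.local_storage_gb = local_storage_gb


class VirtualStorageManager:
    """Pools the locally attached storage of every node into one shared pool."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # Adding a node scales storage out along with compute: the manager
        # simply "consumes" the node's local storage into the shared pool.
        self.nodes.append(node)

    @property
    def pool_size_gb(self):
        return sum(n.local_storage_gb for n in self.nodes)


vsm = VirtualStorageManager()
vsm.add_node(Node("node-1", 2000))
vsm.add_node(Node("node-2", 2000))
print(vsm.pool_size_gb)  # 4000
```

The point of the sketch is that capacity grows as a side effect of adding nodes; there is no separate storage system to expand.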
There is a newer version of hyperconvergence that takes this concept one step further: the composable system.
The figure below illustrates a composable system.
In the composable system, every component is connected to a central bus, generally built using either an extended version of PCIe or some form of Ethernet. These pools of resources are managed centrally, with some amount of processor, memory, network, and storage assigned to virtual machines or containers as needed. This creates a completely scale-out system that allows storage, compute, memory, and network resources to be mixed and matched as needed for just about any workload, all using white box components.
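The assignment of resources out of shared pools can be sketched as a simple allocator. Again, this is an illustrative model only; the pool sizes, resource names, and the `compose` function are all hypothetical:

```python
# Hypothetical shared pools for a composable system (sizes are made up).
pools = {"cpu_cores": 128, "memory_gb": 1024, "storage_gb": 50000, "nics": 32}


def compose(pools, request):
    """Carve a workload's resources out of the shared pools, if available."""
    if any(pools[r] < amount for r, amount in request.items()):
        raise RuntimeError("insufficient resources in pool")
    for r, amount in request.items():
        pools[r] -= amount
    return dict(request)


# Compose one virtual machine out of the pools.
vm = compose(pools, {"cpu_cores": 8, "memory_gb": 64,
                     "storage_gb": 500, "nics": 1})
print(pools["cpu_cores"])  # 120
```

Because every resource is drawn from a pool rather than from a fixed box, a workload can be composed with any mix of compute, memory, storage, and network, which is the flexibility the composable model is after.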
The network resources here, of course, are Network Interface Cards (NICs), which must then connect into the network (or a data center fabric) to allow the applications running on these resources to talk to the rest of the world. The obvious question at this point should be: how does all this impact the network beyond the composable system sitting at the end of the network?
This is a question that will need to wait until next time.