Looking at Openflow
Openflow is the “father of software defined networks” in the minds of many engineers. To understand Openflow, however, you cannot just look at the protocol itself; rather you must go back to the beginning, in the mists of old networking.
It is important to start here: in the “old days,” when I was still taking network cases on the routing protocols team at a major vendor, when we wanted to understand how routing worked, we looked at the code. What is the point? Networking has always revolved around software control planes; networks have always been built on software. Hence, the “software” in software defined networking (SDN) cannot mean “building control planes in software,” because that is the way it has always been done.
It did not take long for engineers to realize that packet switching could not be done effectively in software, either. The first commercial custom ASIC for switching packets was (apparently) the Cisco SSP, available for the Cisco 7000 (not the new ones, the old ones—the really old ones!), which shipped sometime in 1994. Today, nearly every packet switching chipset supports some form of Openflow as part of its interface. Hence, the “software” part of SDN doesn’t have to do with switching packets in software, either (contrary to some rumors you might have heard elsewhere).
So, if Openflow was not created in order to build control planes that reside in software, nor to switch packets in software, why was Openflow created? In the “old days,” if you wanted to try a new routing protocol, you had to have access to some vendor’s code. Network operating systems were (and still largely are) highly customized pieces of software, with their own APIs, schedulers, memory managers, etc. This “closed environment” made building new, experimental control planes rather difficult (if not impossible) for anything but the most trivial of networks (in terms of forwarding requirements). Openflow was actually created as a way to provide an open, standardized interface into switching platforms so new ways of building forwarding tables could be tried. From the beginning, however, Openflow has been a sort of “battering ram,” breaking down the mental roadblocks to rethinking the puzzles of the control and forwarding planes.
Openflow quickly morphed from a research platform into a new way of building control planes. Specifically, it is Openflow that brought about a resurgence of off-box (“centralized”) controllers pulling topology and reachability information off of individual boxes, calculating best paths, and then installing the resulting routes back into the forwarding tables of the forwarding devices.
Openflow today is often seen as “the little protocol that couldn’t.” The original idea of implementing control over individual flows (matching on the full 5-tuple, or more) has proven unworkable in the real world. Not that this should be a surprise; even “traditional” implementations, with fully optimized APIs between the routing and forwarding tables, would struggle to push enough state to forward on a per-flow basis, and very few chipsets can hold a forwarding entry per flow.
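To get a sense of the scale involved, here is a rough sketch of what per-flow state implies; the numbers are assumptions chosen for illustration, not measurements from any particular platform or chipset.

```python
# A rough sketch of what per-flow state implies; the numbers below are
# assumptions for illustration, not measurements from any real platform.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    # The classic 5-tuple: every distinct combination needs its own entry.
    src_ip: str
    dst_ip: str
    proto: int
    src_port: int
    dst_port: int

# Back-of-the-envelope scaling: a switch seeing tens of thousands of new
# flows per second, each living a few tens of seconds, must hold an entry
# for every flow active at once.
flows_per_second = 50_000        # assumed flow arrival rate
average_flow_lifetime_s = 30     # assumed average flow duration
concurrent_entries = flows_per_second * average_flow_lifetime_s
print(f"entries needed at steady state: {concurrent_entries:,}")  # 1,500,000
```

Even with fairly conservative assumptions, the steady-state entry count runs into the millions, well beyond what most hardware forwarding tables can hold.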
Openflow’s answer to this scaling problem was caching: rather than supplying forwarding information proactively, the control plane would supply it only when packets actually needed to be switched, holding each entry in a cache for some period of time. However, waiting for forwarding information to be needed before installing it, and managing a cache of forwarding information, create difficulties of their own. The reality today is that Openflow is primarily used to install what looks like largely traditional routing information, covering hosts, subnets, and aggregated reachability, and most of this information is installed proactively rather than reactively.
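As a rough illustration of the two models, here is a minimal sketch of reactive (cache-style) versus proactive installation; the Switch and Controller objects and their methods are hypothetical stand-ins, not a real Openflow library or controller API.

```python
# A minimal sketch contrasting reactive (cache-style) and proactive flow
# installation; these classes are toy stand-ins, not a real Openflow stack.

class Switch:
    def __init__(self):
        self.flow_table = {}               # match -> (out_port, idle_timeout)

    def install_flow(self, match, out_port, idle_timeout):
        self.flow_table[match] = (out_port, idle_timeout)

class Controller:
    def __init__(self, switch):
        self.switch = switch

    # Reactive: the switch punts the first packet of an unknown flow to the
    # controller, which computes a path and installs a cached entry with an
    # idle timeout; every cache miss costs a controller round trip, and
    # aged-out entries miss all over again.
    def on_packet_in(self, dst_ip):
        out_port = self.compute_path(dst_ip)
        self.switch.install_flow(match=("dst_ip", dst_ip),
                                 out_port=out_port,
                                 idle_timeout=30)

    # Proactive: push reachability for hosts, subnets, and aggregates ahead
    # of time, much as a routing protocol would fill the FIB.
    def push_routes(self, routes):
        for prefix, out_port in routes:
            self.switch.install_flow(match=("dst_prefix", prefix),
                                     out_port=out_port,
                                     idle_timeout=0)   # 0 = never age out

    def compute_path(self, dst_ip):
        return 1                           # path computation elided

sw = Switch()
ctl = Controller(sw)
ctl.on_packet_in("10.1.1.5")                              # reactive entry
ctl.push_routes([("10.2.0.0/16", 2), ("0.0.0.0/0", 3)])   # proactive entries
```

The reactive path pays a controller round trip on every cache miss, which is exactly the difficulty described above; the proactive path looks much like a traditional routing protocol filling the forwarding table.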
All of this makes sense from a performance perspective—but does it mean that Openflow has “failed” in some sense? In the original, research sense, Openflow has succeeded wildly. While most vendors producing chipsets don’t support every possible Openflow implementation option, and there are still performance issues to deal with, Openflow is available widely enough to allow the kinds of experimentation it was originally designed to facilitate. In the commercial world, Openflow has succeeded in battering down the gates of the proprietary operating system world, at least in making engineers ask questions and rethink their assumptions. Beyond this, Openflow has proven to be a useful open interface into hardware forwarding engines that can be used to “standardize” the API between the routing and forwarding tables, as well as providing some measure of “off box control” (even if it is not fulfilling the dream of “one controller to rule them all”).
How does Openflow fit into the classification system developed earlier in this series? The chart below is helpful:
In the original Openflow model, both policy and reachability are centralized; the calculation of both is transferred to a controller that (presumably) resides outside the forwarding device. Hybrid models are difficult with Openflow because there is typically no facility in the forwarding table to choose between competing routes. Negotiation between multiple control planes is normally handled in the routing table rather than the forwarding table, and there is generally no meaningful feedback path from the forwarding table back to the routing table. Because of this, there is almost no way for a BGP process installing routes into the RIB to know that a route has been overwritten by Openflow in the FIB, for instance.
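A toy model makes the missing feedback path concrete; the names and structures below are invented for illustration and do not reflect any vendor’s actual RIB or FIB implementation.

```python
# A toy model of why hybrid operation is hard: the RIB arbitrates between
# routing protocols, but the FIB does not, and nothing tells BGP that its
# entry has been silently replaced.

rib = {}   # prefix -> (protocol, next_hop); protocols compete here
fib = {}   # prefix -> next_hop; last writer wins, no arbitration, no feedback

def rib_install(prefix, protocol, next_hop):
    rib[prefix] = (protocol, next_hop)
    fib[prefix] = next_hop          # the RIB pushes its winner down to the FIB

def openflow_install(prefix, next_hop):
    fib[prefix] = next_hop          # the controller writes the FIB directly,
                                    # bypassing the RIB entirely

rib_install("10.0.0.0/24", "bgp", "192.0.2.1")
openflow_install("10.0.0.0/24", "198.51.100.9")

print(rib["10.0.0.0/24"])   # ('bgp', '192.0.2.1') -- BGP still believes its route is in use
print(fib["10.0.0.0/24"])   # '198.51.100.9'       -- actual forwarding says otherwise
```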
Openflow might not be the star of the SDN show any longer, but it is still useful in many places, and the job Openflow did in bringing the concepts of SDN to the broader packet switching world has been invaluable.