Too Much of a Good Thing
Takeaways From MEF17
Among the many interesting presentations and concepts at this year’s MEF17 conference, one relatively disruptive comment created significant discussion on the show floor. In his comments on behalf of Comcast Business, Kevin O’Toole devoted a good portion of his presentation to the impact of Gigabit services. Towards the end of his talk, he questioned whether many of the features that the SDN community is pushing – things like bandwidth on demand – really make sense in a world where the end user can economically get access to more bandwidth than he can reasonably use.
This is a really interesting idea, and it brings up a very good point. We have seen this happen before in other technologies. When the first Microsoft programs came out, they had to fit onto a floppy disk. As a result, software programming techniques were designed to be very efficient (if somewhat complex) to minimize disk space. In the age when hard drives were measured in megabytes rather than terabytes, there were many solutions on the market to optimize and compress files on the fly to better utilize that scarce resource. Even in telecoms, devices have been deployed to optimize WAN connections via intelligent de-duplication or compression. And many of us recall the many, many layers of ATM prioritization that were going to allow us to better utilize bandwidth on large pipes.
Now that economically available hard drives on computers are routinely measured in terabytes, no one worries about file compression. Combine that with the advent of software delivery via the internet, and no one really worries about program size any more. The complex software engineering techniques designed to reduce program size have been abandoned in favor of more feature-rich and containerized programming options. Almost no one talks about introducing extra delay in the path to wring more effective bandwidth out of smaller pipes. And ATM died a painful death years ago, before anyone ever had much use for the various VBR flavors.
So, are we going to look back in a few years at some of the concepts in SDN and NFV and smile knowingly at the ignorance of our past selves the same way that we now recall the limitations of the floppy disk? It’s an interesting question.
If consumers – not just homes, but enterprise businesses – can get all of the bandwidth that they need and more at a reasonable price, why would they worry about the complexity of an on-demand bandwidth model? In this industry, we like to talk about the customer portal, where the IT manager can log in and increase the bandwidth for a short time for a specific process. If bandwidth is plentiful and cheap, why bother? The question could even extend to SD-WAN. If we can now make reliable delay-sensitive Skype calls over the internet (so the reliability issue is moot) and we can buy all of the bandwidth that we can use from two separate providers (to avoid downtime issues), is there a long-term reason to deploy SD-WAN?
Most certainly there will always be edge cases where there is not enough bandwidth available. No one would argue that large data center interconnections have bandwidth to spare, and there are major businesses out there with bandwidth requirements well in excess of Comcast’s gigabit speeds. Likewise, there are places where SD-WAN makes a lot of sense and will for a long time.
But it is an interesting thought that we might indeed be putting a lot of effort into a problem that could solve itself with bigger pipes.