Nice Notes – Take Aways from NGON & DCI Europe 2018
Part III: Sliced and Diced
At this year's 20th-anniversary NGON event, 5G and optical transport were a huge topic of discussion. And any time 5G and transport networks come up, the concept of network slicing is sure to follow. NGON 2018 was no exception, with no fewer than three different approaches to network slicing presented in the sessions I was able to attend personally. Slicing, like much about the coming 5G network revolution, is being defined and redefined regularly by operators and vendors alike as we all come to terms with the application requirements and economics of 5G transport networks.
The concept of slicing in a network generally means that different applications see different logical networks running on top of a common physical network. Slicing is intended to be a way of avoiding building multiple transport networks to support the multiple applications enabled by 5G. 5G radios have a range of about 1 mile, meaning that 5G sites will be very dense and numerous. Building multiple parallel networks to support such a dense radio network would be quite profitable for companies like ECI who sell the equipment supporting those networks, but far too expensive for the companies that are actually building and operating them. So there has to be a compromise, and network slicing looks to be the best bet.
At least one analyst asked, “can’t we just put in more bandwidth to solve this problem?” That is a legitimate question, and one that has been asked of other technologies. For example, many of the business cases for SDN (bandwidth on demand, network balancing, SLA maintenance) could indeed be solved with a lot of extra bandwidth. We are no longer building software that fits on a floppy disk (or even a DVD) now that storage and connectivity are cheap and ubiquitous. Could we not assume that bandwidth will follow the same curve and make all of these discussions about slicing moot?
Unfortunately (since we’d love to sell you more equipment to provide more bandwidth), bandwidth alone isn’t the issue. The famous 5G applications triangle of competing requirements – low latency/high reliability, high bandwidth, and enormous scalability/IoT – demonstrates that only one of the applications can be solved with more bandwidth. The others are more complicated. If one application requires extremely low latency, then storage, compute, and routing functions for that application will need to be nearer to the tower. If another application requires extremely high reliability, then redundant transport resources will need to be allocated at all points along the service route.
While it might be true that additional compute, storage, and transport resources could be included for ALL applications, the costs in terms of equipment, power, space, cooling, and maintenance would be enormous. A one-size-fits-all solution simply is not feasible. We need to find a way to do more with less, and virtualization (by way of slicing in this case) looks to be the right answer.
Where the slicing takes place is a bigger question, and one that for now has multiple answers. Most people talking about slicing are assuming that slicing will take place in the electrical/packet domain. Techniques are being developed and migrated from NFV and SDN to allow a single piece of hardware to operate as separate logical devices. For example, one backhaul switch could look like a high-capacity forwarding device to high-bandwidth services while looking like part of a local switch/router network to another application. Each service on such a device would be given separate priorities, characteristics, and capabilities depending on the application requirements. Electrical/packet-level slicing makes a lot of sense when you assume that no one service will dominate enough of the traffic to require an entire wavelength (10Gbps, 25Gbps, 100Gbps), and you assume that the amount of bandwidth required for each application will vary over time – all very logical assumptions.
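To make the packet-level idea concrete, here is a minimal sketch of one shared device presenting several logical slices, each with its own forwarding profile. All of the slice names, fields, and numbers are illustrative assumptions on my part, not values from any standard or from the NGON presentations:

```python
from dataclasses import dataclass

@dataclass
class SliceProfile:
    """Per-slice forwarding characteristics on a shared backhaul switch.
    Fields and values are illustrative only."""
    name: str
    priority: int          # scheduling priority (0 = highest)
    max_latency_ms: float  # latency budget across this hop
    guaranteed_mbps: int   # committed rate for this slice
    peak_mbps: int         # burst ceiling

# One physical device, three logical "slices" seen by different applications
SLICES = [
    SliceProfile("urllc", priority=0, max_latency_ms=1.0,   guaranteed_mbps=50,  peak_mbps=100),
    SliceProfile("embb",  priority=1, max_latency_ms=20.0,  guaranteed_mbps=500, peak_mbps=2000),
    SliceProfile("miot",  priority=2, max_latency_ms=100.0, guaranteed_mbps=10,  peak_mbps=50),
]

def admit(slices, port_capacity_mbps):
    """Admission check: total committed rate must fit the shared port."""
    committed = sum(s.guaranteed_mbps for s in slices)
    return committed <= port_capacity_mbps

print(admit(SLICES, port_capacity_mbps=10_000))  # True on a 10G port
```

The point of the sketch is simply that the slices share one port while keeping distinct priorities and guarantees, which is exactly why no per-application physical network is needed.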
However, at least one presenter at NGON showed slicing occurring at the OTN layer. If one assumes that traffic will be somewhat aggregated so that the amount of aggregate bandwidth required for each application type is generally stable – but still less than an entire wavelength – then OTN-level slicing could be an interesting option. Not surprisingly, this solution is preferred in areas where OTN is more ubiquitous (primarily Asia and some of Europe). In areas where an OTN layer is generally eschewed in favor of an all-packet network, OTN-layer slicing will likely be a harder sell. Still, the connection-oriented, synchronous, and dedicated nature of OTN makes it a natural fit for network slicing at the aggregation layer.
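A rough way to picture OTN-layer slicing is dedicated tributary slots per slice inside a shared wavelength. The sketch below assumes an ODU4-like container with 80 slots of 1.25Gbps on a 100G wavelength; the greedy allocator and the slice demands are my own simplification, not the presenter's method:

```python
import math

# Toy model: each slice gets dedicated 1.25G tributary slots inside a
# shared ODU4 (80 slots on a 100G wavelength). Mapping is simplified.
ODU4_SLOTS = 80
SLOT_RATE_GBPS = 1.25

def allocate_slots(demands_gbps):
    """Greedy allocation of tributary slots per slice; returns
    {slice_name: slot_count} or raises if the wavelength is oversubscribed."""
    allocation, used = {}, 0
    for name, gbps in demands_gbps.items():
        slots = math.ceil(gbps / SLOT_RATE_GBPS)
        if used + slots > ODU4_SLOTS:
            raise ValueError(f"not enough slots for slice {name!r}")
        allocation[name] = slots
        used += slots
    return allocation

print(allocate_slots({"embb": 40, "urllc": 10, "miot": 2.5}))
# {'embb': 32, 'urllc': 8, 'miot': 2}
```

Because the slots are hard-partitioned, each slice gets the deterministic, isolated behavior that makes OTN attractive for slicing – at the cost of stranding capacity when demands fluctuate.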
At an even higher (or lower, on the OSI model) level, one presenter showed slicing happening at the wavelength layer. This presentation was based on a laboratory experiment to prove that dynamic WDM could be used to separate the traffic, even in the access. While certainly interesting, the economics of tunable optics, ROADMs, and WDM at every access point do not currently make sense. As a technology experiment, though, WDM slicing in the access drove long discussions about ways to decrease the cost of optics in these types of networks.
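Wavelength-layer slicing reduces, conceptually, to giving each slice its own channel. The sketch below assumes a small DWDM-like grid of 100GHz-spaced channels starting at 193.1THz; the grid size and the one-slice-per-wavelength mapping are illustrative, not taken from the lab experiment described above:

```python
# Toy sketch of wavelength-layer slicing: each slice is fully isolated on
# its own wavelength in the access. Channel plan is illustrative only.
GRID_THZ = [round(193.1 + 0.1 * i, 1) for i in range(8)]  # 8-channel grid

def assign_wavelengths(slice_names):
    """Map each slice to a dedicated channel; fails if the grid runs out."""
    if len(slice_names) > len(GRID_THZ):
        raise ValueError("more slices than available wavelengths")
    return dict(zip(slice_names, GRID_THZ))

print(assign_wavelengths(["urllc", "embb", "miot"]))
```

The isolation here is total – no slice can affect another's latency or throughput – which is why the approach is attractive despite the cost of tunable optics at every access point.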
In the final analysis, we are going to have to wait to see which slicing technique works the best to meet network operators’ application-based requirements. There will likely be a mix of slicing technologies deployed (ECI can do all three types mentioned) with some dominating in specific geographic areas or operator domains while others dominate in different networks. Vendors are going to have to be nimble – and will have to be very clear in making their case for their own favorite brand of network slicing.