5G Slicing: Concepts, Architectures, and Challenges
Part 2: Network Slicing Concepts and Requirements
In the previous blog in this series, I talked about the basic tenets of network slicing. We discussed what network slicing is and started by defining some requirements, mainly the role of resources and virtualization. In this blog, I will discuss a few more basic requirements for network slicing: specifically, orchestration, isolation and autonomous behavior.
In its general sense, orchestration can be defined as the art of bringing together and coordinating disparate things into a coherent whole.
In a slicing environment, where the resources involved are so diverse, an orchestrator is needed to coordinate seemingly disparate network processes for creating, managing and delivering services.
The industry has yet to define a unified vision and scope for orchestration, or for orchestrators. In general, however, orchestration is defined as the continuing process of selecting resources to fulfill client service demands in an optimal manner. Here, 'optimal' refers to the optimization policy that governs the orchestrator's behavior: meeting all the specific policies and SLAs of the specific services with the fewest possible resources. 'Continuing' means that available resources, service demands and optimization criteria may change over time.
With network slicing, however, orchestration cannot be performed by a single centralized entity, not only because of the complexity and broad scope of orchestration tasks, but also because it is necessary to preserve management independence and support the possibility of recursion.
In my view, the network needs a framework in which each virtualized function has an entity performing the orchestration. These orchestrating entities should exchange information (at the API layer) and delegate, change and add functionalities among themselves to ensure that the delivered services satisfy the required performance levels with optimal resource use throughout their lifecycle management (LCM).
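To make the 'fewest possible resources' policy concrete, here is a minimal, hypothetical sketch of the selection step an orchestrator might perform for one slice: greedily pick resources to cover a throughput demand while respecting a latency budget. The resource names, capacities and SLA figures are illustrative assumptions, not any real orchestrator's API.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capacity_mbps: int   # throughput this resource can contribute
    latency_ms: float    # latency this resource would add to the slice

def orchestrate(demand_mbps: int, max_latency_ms: float,
                pool: list[Resource]) -> list[Resource]:
    """Greedily pick the fewest resources that satisfy the SLA.

    Policy sketch: reject resources that exceed the slice's latency
    budget, then prefer high-capacity resources so the demand is met
    with as few resources as possible.
    """
    eligible = [r for r in pool if r.latency_ms <= max_latency_ms]
    eligible.sort(key=lambda r: r.capacity_mbps, reverse=True)
    selected, total = [], 0
    for r in eligible:
        if total >= demand_mbps:
            break
        selected.append(r)
        total += r.capacity_mbps
    if total < demand_mbps:
        raise RuntimeError("SLA cannot be met with available resources")
    return selected

pool = [Resource("edge-A", 400, 5.0), Resource("core-B", 1000, 20.0),
        Resource("edge-C", 300, 4.0)]
# An eMBB-like slice: 1200 Mbps demand, 25 ms latency budget
chosen = orchestrate(1200, 25.0, pool)
print([r.name for r in chosen])  # ['core-B', 'edge-A']
```

A real orchestrator would of course optimize across many slices at once and re-run this selection as demands and resources change; the point here is only that the optimization policy (latency filter plus fewest-resources criterion) is explicit and swappable.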
To operate simultaneous slices on a common shared underlying infrastructure, strong isolation is a must. One must look at isolation in terms of:
- Performance: each slice is defined to meet particular service requirements, usually expressed in the form of KPIs. Performance isolation is an E2E issue: each slice's service-specific performance requirements must always be met, regardless of congestion or performance in other slices.
- Security and privacy: attacks, failures or faults occurring on one slice must not impact other slices. Moreover, each slice must have independent security functions and definitions to prevent unauthorized entities from accessing slice-specific configuration, management and accounting information, and must be able to record all such access attempts, whether authorized or not.
- Management: each slice must be independently managed as a separate network that extends from the base station all the way to the core.
To achieve isolation, a set of appropriate, consistent policies and mechanisms has to be defined at each virtualization level. The policies consist of rules that describe how the different manageable entities must be isolated from one another. The mechanisms are the processes implemented to enforce those policies. To fully realize isolation, one must employ both virtualization and orchestration.
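As a sketch of how policy and mechanism might fit together, consider the following hypothetical Python fragment: the policy is a set of declarative rules, and the mechanism is the code that enforces them. The slice IDs, tenant names and management domains are invented for illustration; note that every access attempt is recorded, as the security requirement above demands.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
audit = logging.getLogger("slice-audit")

# Hypothetical per-slice isolation policy: which tenant owns a slice
# and which management domains that tenant may touch.
POLICIES = {
    "slice-embb-01": {"owner": "tenant-a", "domains": {"config", "accounting"}},
    "slice-urllc-02": {"owner": "tenant-b", "domains": {"config"}},
}

def authorize(tenant: str, slice_id: str, domain: str) -> bool:
    """Enforce the policy and record every attempt, allowed or not."""
    policy = POLICIES.get(slice_id)
    allowed = (policy is not None
               and policy["owner"] == tenant
               and domain in policy["domains"])
    audit.info("tenant=%s slice=%s domain=%s allowed=%s",
               tenant, slice_id, domain, allowed)
    return allowed

print(authorize("tenant-a", "slice-embb-01", "config"))  # True
print(authorize("tenant-b", "slice-embb-01", "config"))  # False: wrong owner
```

In a real deployment the rules would live in a policy store and enforcement would sit in the management plane of each virtualization level, but the separation is the same: rules say what must be isolated, code makes it so.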
Once a slice is set up, whether or not it is in use, it should function autonomously, with no human intervention. Autonomous behavior is required because the network cannot be expected to know or understand what is going on in a neighboring slice, or how demand will change over time.
Autonomous behavior will require several components:
- Scheduling algorithm – schedules and analyzes the resources required to support all changes, and sends the information it collects to the orchestrator.
- Resource management – receives an accurate, real-time status of resource utilization, schedules resources for upcoming tasks and alerts when utilization has been maxed out. The orchestrator then decides on next steps.
- Machine learning – unlike 4G/LTE, 5G networks should be able to handle extreme situations in an ambiguous, changing environment. Reducing the number of LCM changes without relying on KPIs or thresholds is a major challenge. ML should not only help forecast demand, but also help prepare the network by increasing or decreasing network resources within the relevant domains or zones.
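The interplay of these components can be sketched as a simple control loop. In this hypothetical fragment, a moving average stands in for a real ML forecast, and the thresholds (scale up above 80% of capacity, scale down below 30%) are illustrative assumptions; the returned decisions are what would be handed to the orchestrator.

```python
from collections import deque

class SliceResourceManager:
    """Toy autonomous loop for one slice: track utilization, forecast
    demand with a moving average, and request scaling ahead of need."""

    def __init__(self, capacity: float, window: int = 5):
        self.capacity = capacity
        self.history = deque(maxlen=window)  # recent utilization samples

    def observe(self, utilization: float) -> str:
        self.history.append(utilization)
        forecast = sum(self.history) / len(self.history)
        if utilization >= self.capacity:
            return "ALERT: utilization maxed out"  # escalate to orchestrator
        if forecast > 0.8 * self.capacity:
            return "SCALE_UP"    # pre-emptively request more resources
        if forecast < 0.3 * self.capacity:
            return "SCALE_DOWN"  # release unused resources
        return "OK"

mgr = SliceResourceManager(capacity=100.0)
for load in [20, 40, 70, 90, 100]:
    print(load, mgr.observe(load))
```

Replacing the moving average with a trained demand model is exactly where the ML component above would plug in, so the slice scales on forecast demand rather than reacting to threshold breaches after the fact.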
Netflix is a great example of such an autonomous service. Stay tuned for part 3 of this series, where I discuss the architectures required for network slicing.