A Question of Balance

In another blog in the series, The New Metro, Moshe Shimon, ECI's VP of Product Management, takes a look at why service providers might consider deploying Multi-access Edge Computing (MEC) solutions in the metro network, and how to balance, among other things, capabilities and costs to meet the ever-hungry needs of mobile and business customers.
As more and more traffic moves to the metro network, operators are under more pressure to meet customers’ needs. “We need more bandwidth!” they cry. “And compute power!” they demand. “And lower latency!” they insist. It seems the answer everyone’s turning to, or at least seriously considering, is the MEC.
A quick point of order
Until fairly recently, MEC was mostly talked about in the context of mobile in general, and 5G specifically. Over time, however, MEC morphed from 'mobile edge computing' into the all-encompassing 'multi-access edge computing', as the industry came to realize that MEC can be beneficial for more than just mobile.
MEC devices vary, from purpose-built devices to COTS platforms to plug-in blades. But all serve the same purpose: to provide compute resources that can be moved around the network where needed – mostly to the edge. That way you can run low-latency services, put compute where it is required, or reduce the amount of traffic going back to the core.
MEC use cases
For a mobile services example, let's say I have remote surveillance cameras at a customer location whose feeds are processed and backed up in the cloud. The feed from those cameras (amounting to terabytes of traffic) would ordinarily be backhauled to the core. However, if I use MEC to do the majority of the camera processing, management, and analytics closer to the customer or base station, the end result is that backhaul traffic and latency are both reduced considerably.
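The idea can be sketched in a few lines. This is a hypothetical illustration, not ECI code: `motion_detected()` stands in for whatever analytics the MEC node would actually run, and the frame sizes are made up.

```python
# Hypothetical sketch: filter camera frames at the MEC node and backhaul
# only the frames flagged by local analytics, instead of the entire raw feed.

def motion_detected(frame: bytes) -> bool:
    # Placeholder heuristic; a real MEC deployment would run proper video
    # analytics (e.g. object detection) on the node's compute resources.
    return frame[0] % 10 == 0

def process_at_edge(frames):
    """Return (bytes_backhauled, interesting_frames) after edge filtering."""
    backhauled = 0
    interesting = []
    for frame in frames:
        if motion_detected(frame):
            interesting.append(frame)   # only these frames leave the edge
            backhauled += len(frame)
    return backhauled, interesting

# 100 synthetic 1 KB "frames": without MEC, all 100,000 bytes go to the core.
frames = [bytes([i % 256]) * 1000 for i in range(100)]
edge_bytes, _ = process_at_edge(frames)
print(f"backhaul with edge filtering: {edge_bytes} B (vs {sum(map(len, frames))} B raw)")
```

The saving scales with how selective the edge analytics are: the less of the raw feed that survives filtering, the less backhaul capacity you need.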
Here’s another example: let's say you want to watch the highlights of the Liverpool vs Real Madrid Champions League final (Liverpool FC fans, commiserations!). Using MEC, a local caching solution can store the popular video closer to the end customer, thereby reducing traffic to the core and improving the customer experience.
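To make the caching idea concrete, here is a minimal sketch assuming a simple LRU eviction policy (the class name and the stand-in "fetch from core" step are invented for illustration; real CDN-style edge caches are far more sophisticated).

```python
# Minimal sketch of MEC-style edge caching with an LRU policy.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()
        self.core_fetches = 0   # requests that had to go back to the core

    def get(self, video_id: str) -> str:
        if video_id in self.store:
            self.store.move_to_end(video_id)    # cache hit: serve locally
            return self.store[video_id]
        self.core_fetches += 1                  # cache miss: fetch from core
        content = f"content-of-{video_id}"      # stand-in for the real fetch
        self.store[video_id] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)      # evict least recently used
        return content

cache = EdgeCache(capacity=2)
for vid in ["highlights", "highlights", "news", "highlights"]:
    cache.get(vid)
print(cache.core_fetches)  # the popular clip crosses the backhaul only once
```

The popular video is fetched from the core once and then served repeatedly from the edge, which is exactly the traffic reduction the example describes.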
A more challenging example is driverless cars, which demand ultra-low latency connectivity (uLLC). If all the processing and metadata had to be sent to the core first, just to be told, “Brake! You're about to hit the garage door!”, then these vehicles simply couldn’t exist. To overcome the distances involved and the processing demands, the compute has to be done at the node closest to the car, otherwise it will take too long. Of course, for a latency-sensitive application like a driverless car, the logical solution is to put the MEC in the vehicle itself. But there are pros and cons to how many MEC nodes or devices you deploy and where, which I’ll come to in a moment. But first...
MEC: what are some of the considerations?
- Compute only or in conjunction with connectivity?
Today’s MEC platforms are also available as plug-in blades for connectivity (packet) platforms. While in some cases this two-in-one approach doesn’t offer the same level of performance, there are real benefits to combining the two (compute and connectivity).
For example, when you integrate compute into the platform with a plug-in blade, you can offload some of the processing from the CPU to the hardware. This gives you more compute power on the same system compared to running it on a dedicated appliance. It also means that the plug-in blade receives the same carrier-grade fault and performance monitoring as the rest of the packet platform, which is important for the services being offered.
- How many platforms to deploy?
There are some obvious drawbacks to deploying too much edge computing. While you reduce backhaul traffic by doing more processing in the access, you also add complexity and cost. Compute power isn’t something you add to your kit and forget about: it increases dramatically every year or two, so you’d need to keep upgrading the compute system to match, just as you do with your smartphone.
If you have hundreds of central offices (COs) and thousands of street cabinets and access platforms, it quickly becomes very expensive from both an OPEX and a CAPEX perspective. In some scenarios it’s unavoidable – the driverless cars I mentioned earlier, for example. All in all, then, how many MEC platforms you deploy is an important consideration.
- Dedicated appliance or NFV infrastructure?
The benefits of COTS appliances with NFV (Network Functions Virtualization) infrastructure versus dedicated appliances have been discussed in depth throughout the industry.
Virtualized applications give the organization more agility and flexibility, enabling carriers and enterprises to reduce the number of dedicated appliances and combine a variety of applications on one COTS appliance. The trade-off, however, is performance, which can never quite reach the levels of dedicated hardware.
- Where to deploy the MEC?
We see a lot of service providers and operators putting MEC in COs, sometimes 5, 10, or 20 km from their customers; in other cases it makes better sense to push MEC out of the CO and into a street cabinet even closer to the customer. The closer the better, of course, but you can’t move everything to the network edge due to the cost and added complexity, so there is a compromise to be made.
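A back-of-the-envelope calculation shows why those distances matter. Light in fibre travels at roughly 2×10⁸ m/s (about 5 µs per km, one way), so fibre distance alone sets a hard floor on round-trip latency before any processing or queueing delay is added; the figures below are a sketch under that assumption, not measurements of any real network.

```python
# Sketch: propagation-delay floor on round-trip time for various CO distances.
FIBRE_SPEED_M_PER_S = 2.0e8  # approximate speed of light in optical fibre

def rtt_floor_ms(distance_km: float) -> float:
    """Minimum round-trip time (ms) from propagation delay alone."""
    one_way_s = (distance_km * 1000) / FIBRE_SPEED_M_PER_S
    return 2 * one_way_s * 1000  # round trip, converted to milliseconds

for km in (1, 5, 10, 20):
    print(f"{km:>2} km from customer -> {rtt_floor_ms(km):.2f} ms RTT floor")
```

In practice, processing and queueing dominate at these distances, which is why the real argument for pushing MEC into the street cabinet is putting the *compute* (not just the fibre) closer to the customer.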
There is no right answer, and you can mix both options. A standalone or dedicated appliance suits a CO, where you have more space and more control over environmental conditions. But in street cabinets, where there is little control over temperature and space, a more streamlined plug-in blade may offer more compute power closer to where it’s needed.
So what is ECI doing in the MEC arena? Our Mercury NFV solution, available as either a stand-alone appliance or a plug-in blade, can serve as an effective first step in deploying MEC.
Some key advantages, particularly of our plug-in MEC solutions, include:
- more compute power per 1U
- can be deployed in multiple locations – e.g. in the street cabinet, rather than the Central Office (CO), cloud, or data center
- operates in more rugged conditions – outdoors, in limited space, and with lower power consumption
- offers more bandwidth and higher performance where it’s needed
- balances latency and availability in multiple edge locations
- supports 5G mesh topology and faster data processing