Enhancing Network Efficiency – The Power to Save Is in Your Hands
This is the sixth post in the series – Strategies for Maximizing Network Efficiency
As networks grow and become more complex, an increasing number of network designers are turning their attention to power consumption. Along with spiraling costs, service providers are also facing a space crunch as they try to fit more and more servers, racks and cooling equipment into a single location.
Many engineers feel that optimizing power consumption is a simple task—choose the hardware with the best energy rating and plug it in. Not so fast—reducing power consumption isn’t that simple. With multiple factors at play in the power consumption and cooling space, controlling costs requires more than a quick power calculation—it comes down to a detailed understanding of design.
Moving to a High Density Environment
Often, network operators believe that as networks scale, high-density cards are the answer to reducing power consumption. Today it’s easy to design a network that crams as much capacity as possible into a single location to shrink the network’s footprint. Theoretically this is a good idea, but it raises cooling concerns.
The denser the servers, the greater the CPU demands and the hotter the servers will run, which in turn increases cooling requirements. In some cases, managing the temperature of dense servers is just as costly as spreading them out. At the same time, network operators often feel the need to choose between traditional rack servers and sleeker blade servers. Both types have their efficiency merits: blade servers work well for applications that require high-powered processors or need to distribute workloads across multiple servers, while rack server architecture is still necessary for creating a hot-row/cold-row design that reduces overall cooling needs and power consumption. That way, you can still deploy high-capacity cards and racks without overstepping your power limits.
Give it a rest—Sleep mode is a new option
If other aspects of businesses can work smarter, why can’t data centers? Don’t continue to run 10 machines at 10% capacity—it’s just a waste of energy. Technology exists to manage server requests and consolidate workloads into fewer servers, running them at higher capacity to ultimately reduce power consumption.
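The consolidation idea above can be sketched as a simple first-fit bin-packing pass over workloads. This is only an illustration of the concept, not any vendor's implementation; the 80% capacity ceiling and the per-workload load figures are assumptions chosen for the example.

```python
def consolidate(loads, capacity=0.8):
    """Pack workloads onto as few servers as possible.

    loads: CPU demand of each workload, as a fraction of one server.
    capacity: illustrative utilization ceiling per server (headroom left
              for spikes); a real scheduler would tune this.
    Returns a list of servers, each a list of the workloads placed on it.
    """
    servers = []
    # First-fit decreasing: place the largest workloads first.
    for load in sorted(loads, reverse=True):
        for server in servers:
            if sum(server) + load <= capacity:
                server.append(load)  # fits on an already-active server
                break
        else:
            servers.append([load])  # no room anywhere: wake another server
    return servers

# Ten workloads each at ~10% utilization pack onto two servers
# instead of ten, so the other eight machines can sleep.
packed = consolidate([0.1] * 10)
print(len(packed))  # → 2
```

Real consolidation systems also migrate running workloads and react to demand spikes, but the packing decision at the core is this same trade-off: fewer, busier servers versus many idle ones.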
Studies show that by using hibernate mode, companies can reduce their power consumption by 30% to 40%. And because you’ll be running fewer servers, cooling costs will decrease along with power consumption.
Another way to cut cooling costs drastically is simply to raise the temperature of your space. Research shows that many network operators keep their servers cooled to somewhere in the mid-60s°F. Servers can perform in temperatures up to 77°F, so don’t waste resources bringing temperatures down to unnecessarily low levels.
Go where it’s cold
It is no secret that many providers are looking to rent, lease or construct data centers in geographies where external temperatures are low. Cold outside air reduces the ambient heat in the data center, and the lower the temperature, the lower the cost of cooling.
Power consumption is tricky, but not impossible
There are plenty of options available for the network operator looking to maximize network efficiency by reducing power consumption. The key is to take advantage of new technology without totally scrapping traditional architectures and equipment that can still serve a purpose. There’s no one-size-fits-all approach to power utilization, but with the right design mindset, any operator can become more resource efficient.
How have you managed to balance network performance and energy efficiency? Leave a comment below and tell us what has worked best for you.