Monday, November 10, 2008

Solving IT energy conservation issues requires holistic approach to management and planning, say HP experts

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

The critical and global problem of energy management for IT operations and data centers has emerged as both a cost and capacity issue. The goal is to find innovative ways to conserve electricity so that existing data centers don't need to be expanded or replaced at huge cost.

Closely matching a tight energy supply with the lowest possible IT energy demand requires looking at the entire IT landscape. That means an enterprise-by-enterprise examination of the "many sins" of energy mismanagement. Wasted energy, it turns out, has its origins all across IT and business practices.

To learn more about how enterprises should begin an energy-conservation mission, I recently spoke with Ian Jagger, Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group, and Andrew Fisher, manager of technology strategy in the Industry Standard Services group at HP.

Here are some excerpts:

Data centers typically were not designed for the computing loads that are available to us today ... (and so) enterprise customers are having to consider strategically what they need to do with their facilities and their ability to bring in enough power to supply the future capacity needs of their IT infrastructure.

Typically the cost of energy is now approaching 10 percent of IT budgets and that's significant. It now becomes a common problem for both of these departments (IT and Facilities) to address. If they don't address it themselves then I am sure a CEO or a CFO will help them along that path.

Just the latest generation server technology is something like 325 percent more energy efficient in terms of performance-per-watt than older equipment. So simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.
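
To put that performance-per-watt claim in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The server wattage and performance figures are hypothetical, chosen only so the ratio matches the 325 percent figure mentioned above; they are not HP benchmarks.

# Hypothetical comparison of an older single-core server with a newer
# quad-core server; all figures are illustrative assumptions, not benchmarks.

old_perf, old_watts = 100.0, 400.0     # relative performance units, watts
new_perf, new_watts = 425.0, 400.0     # ~4.25x the work for similar power

old_ppw = old_perf / old_watts         # performance per watt, old server
new_ppw = new_perf / new_watts         # performance per watt, new server

improvement = (new_ppw / old_ppw - 1) * 100
print(f"Performance-per-watt improvement: {improvement:.0f}%")   # ~325%

# Consolidation: how many old servers one new server could replace,
# and the wattage that consolidation removes from the floor.
ratio = new_perf / old_perf
watts_saved = ratio * old_watts - new_watts
print(f"One new server replaces ~{ratio:.1f} old ones, "
      f"saving ~{watts_saved:.0f} W of IT load")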

Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs. One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money.

You need to take a complete, end-to-end approach that involves everything from analyzing your operational processes and behavioral issues (how you are configuring your data center, whether you have hot-aisle or cold-aisle configurations, these sorts of things) to optimizing the efficiency of the power delivery and making sure that you are getting the best performance per watt out of the IT equipment itself.

The best way of saving energy is, of course, to turn the computers off in the first place. Underutilized computing is not the greatest way to save energy. ... If you look at virtualizing the environment, then the facility design or the cooling design for that environment would be different. In a virtualized environment, suddenly you are designing something around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria.

You’re using four to eight times the wattage in comparison. That, in turn, requires stricter floor management. ... But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well.

If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.
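
That multiplying effect can be roughed out with a facility overhead factor such as PUE (power usage effectiveness), a metric the discussion doesn't cite by name. All of the figures in this sketch are assumptions for illustration.

# Every watt removed from the IT load also removes the power-delivery and
# cooling overhead needed to support it. PUE (total facility power / IT power)
# is used here as the overhead factor; 2.0 was a common figure for older
# data centers, and all numbers below are illustrative assumptions.

pue = 2.0                      # assumed facility overhead factor
it_watts_saved = 10_000        # IT load eliminated, e.g. via virtualization
hours_per_year = 8760
price_per_kwh = 0.10           # assumed $/kWh

facility_watts_saved = it_watts_saved * pue
kwh_per_year = facility_watts_saved * hours_per_year / 1000
print(f"Facility power removed: {facility_watts_saved / 1000:.1f} kW")
print(f"Annual energy saved:    {kwh_per_year:,.0f} kWh "
      f"(~${kwh_per_year * price_per_kwh:,.0f}/yr)")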

This is a complex system. When you look at the total process of delivering the energy from where it comes in from the utility feed, distributing it throughout the data center with UPS capability or backup power capability, through the actual IT equipment itself, and then finally with the cooling on the back end to remove the heat from the data center, there are a thousand points of opportunity to improve the overall efficiency.
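
One way to see why there are so many points of opportunity is that end-to-end delivery efficiency is roughly the product of each stage's efficiency, so improving any single stage moves the overall number. The stages and efficiency figures below are hypothetical, not drawn from the discussion.

# End-to-end delivery efficiency as a product of per-stage efficiencies.
# Stage names and figures are hypothetical; improving any single stage
# raises the overall number, which is why each stage is an opportunity.

stages = {
    "utility transformer": 0.98,
    "UPS":                 0.92,
    "power distribution":  0.97,
    "server power supply": 0.85,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff
    print(f"{name:<20} {eff:.0%}")

print(f"{'end-to-end':<20} {overall:.0%}")  # ~74% of utility power reaches the IT load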

We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is really an embodiment of the Adaptive Infrastructure at work, increasing the efficiency of our customers' IT assets.

To complicate it even further, there are a lot of organizational or behavioral issues that Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve.

The principal problem is that they (energy measurements) tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But, typically, we can get beyond that, look at annualized values of energy usage, and then take measurements from that point.
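
A rough sketch of what moving from point-in-time snapshots toward an annualized view of energy usage might look like; the readings and sampling scheme are hypothetical.

# Turning periodic power readings (snapshots) into an annualized energy figure.
# Readings are hypothetical; a real assessment would use many more samples
# across seasons and workloads.

readings_kw = [310, 295, 342, 328, 301, 315]   # monthly average facility load samples, kW

avg_kw = sum(readings_kw) / len(readings_kw)
annualized_kwh = avg_kw * 8760                 # hours in a year
print(f"Average load:     {avg_kw:.0f} kW")
print(f"Annualized usage: {annualized_kwh:,.0f} kWh/year")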

So, there is rarely a single silver bullet to solve this complex problem. ... The approach that we at HP are now taking is to move toward a new model, which we call the Hybrid Tiered Strategy, with respect to the data center. In other words, it's a modular design, and you mix tiers according to need.

One thing we recently announced is relevant to what Ian was just talking about: the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.

This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.

... There’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.

... There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.
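
The discussion doesn't name those capabilities, but the underlying idea of allocating a fixed power budget more precisely can be sketched as follows. The rack budget, server names, and proportional-capping rule are hypothetical and not a description of any specific HP feature.

# Hypothetical allocation of a fixed rack power budget across servers using
# per-server power caps. The budget, measured peaks, and proportional-capping
# rule are illustrative only.

rack_budget_w = 8000
measured_peak_w = {"srv-01": 350, "srv-02": 410, "srv-03": 290, "srv-04": 375}

total_peak = sum(measured_peak_w.values())
headroom = rack_budget_w - total_peak
print(f"Peak demand {total_peak} W against a {rack_budget_w} W budget "
      f"leaves {headroom} W for additional servers")

# If demand exceeded the budget, scale each server's cap proportionally.
if total_peak > rack_budget_w:
    caps = {s: int(p * rack_budget_w / total_peak) for s, p in measured_peak_w.items()}
else:
    caps = dict(measured_peak_w)   # no capping needed; caps follow measured peaks
print(caps)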

