Thursday, February 11, 2010

Smart Grid for data centers better manages electricity to slash IT energy spending, frees up wasted capacity

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Nowadays, CIOs need to both cut costs and increase performance. Energy has never been more important in working toward this productivity advantage.

It's now time for IT leaders to gain control over energy use -- and misuse -- in enterprise data centers. More often than not, very little energy capacity analysis and planning is being done on data centers that are five years old or older. Even newer data centers don’t always gather and analyze the available energy data being created amid all of the components.

Finally, smarter, more comprehensive energy planning tools and processes are being directed at this problem. It requires a lifecycle approach that moves data centers toward fuller automation.

And so automation software for capacity planning and monitoring has been newly designed and improved to best match long-term energy needs and resources in ways that cut total costs, while reclaiming available capacity from old and new data centers.

Such data gathering, analysis, and planning can break the inefficiency cycle that plagues many data centers, where hotspots are mismatched with cooling and underused or unneeded servers burn energy needlessly. These so-called Smart Grid solutions jointly cut data center energy costs, reduce carbon emissions, and can dramatically free up capacity from overburdened or inefficient infrastructure.

By gaining far more control over energy use and misuse, solutions such as Hewlett Packard's (HP) Smart Grid for Data Center can increase capacity from existing facilities by 30-50 percent.

This podcast features two executives from HP to delve more deeply into the notion of Smart Grid for Data Center. Now join Doug Oathout, Vice President of Green IT Energy Servers and Storage at HP, and John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: Data center transformation (DCT) is focused on three core concepts, and energy is a key element for all of them to work. The drivers behind data center transformation are customers who are trying to reduce their overall IT spending, either flowing it to the bottom line or, in most cases, trying to shift that spending away from management and maintenance and onto business projects.

We also see increasing mandates to improve sustainability. It might be expressed as energy efficiency in handling energy costs more effectively or addressing green IT.

DCT is really about helping customers build out a data center strategy and an infrastructure strategy that are aligned to their business plans, goals, and objectives. That infrastructure might be a traditional shared infrastructure model. It might be a fabric infrastructure model, of which HP's converged infrastructure is probably the best and most complete example in the marketplace today. And it may indeed be moving to private cloud or, as I believe, some combination of the above for a lot of customers.

The secret is doing so through an integrated roadmap of data-center projects, like consolidation, business continuity, energy, and such technology initiatives as virtualization and automation.

Problem area

Energy has definitely been a major issue for data-center customers over the past several years. Increased computing capability and demand have increased the power needed in the data center. Many data centers today weren't designed for modern energy consumption requirements. Even data centers designed five years ago are running out of power as they move to these dense infrastructures. Of course, older facilities are even further challenged. So, customers can address energy by looking at their facilities.

Increasingly, we're finding that we need to look at management -- managing the infrastructure and managing the facilities in order to address the energy cost issues and the increasing role of regulation and to manage energy related risk in the data center.

That brings us not only to energy as a key initiative in DCT, but to Smart Grid for Data Center as a key way of managing it effectively and dynamically.

Oathout: What we're really talking about is a problem around energy capacity in data centers. Most IT professionals or IT managers never see an energy bill from the utility. It's usually handled by the facility, so they never really concentrate on solving the energy consumption problem.

Where problems have arisen in the past is when a facility person says that they can’t deploy the next server or storage unit, because they're out of capacity to build that new infrastructure to support a line of business. They have to build a new data center. What we're seeing now is customers starting to peel the onion back a little bit, trying to find out where the energy is going, so they can increase the life of their data center.

To date, very few clients have deployed comprehensive software strategies or facility strategies to corral this energy consumption problem. Customers are turning their focus to how much energy is being absorbed by what, and then to how they get the capacity of the data center increased, so they can support the new workloads.

What we're seeing today is that software, hardware, and people need to come together in a process that John described in DCT, an energy audit, or energy management.

All those things need to come together, so that customers can start taking apart their data center, from an analysis perspective, to find out where they are either over-provisioned or under-provisioned from a capacity standpoint, so they know where all the energy is going. Then they can take some steps to get more capability out of their current solution or out of their installed equipment by measuring and monitoring the whole environment.
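As a rough illustration of that kind of analysis (not any particular HP tool), the short Python sketch below flags servers as over- or under-provisioned by comparing measured utilization and power draw against simple thresholds. The server names, readings, and thresholds are all assumptions made for the example.

```python
# Hypothetical sketch: flag over- and under-provisioned servers from
# measured CPU utilization and power draw. All data and thresholds are
# illustrative, not taken from any HP tool.

servers = [
    # (name, avg CPU utilization %, avg power watts, rated power watts)
    ("legacy-db-01",   8.0, 410, 500),
    ("web-farm-03",   72.0, 310, 400),
    ("batch-node-07", 95.0, 390, 400),
]

UNDERUSED_CPU = 15.0   # below this, the box is mostly idle
SATURATED_CPU = 90.0   # above this, the box may need more resources

for name, cpu, power, rated in servers:
    headroom = rated - power
    if cpu < UNDERUSED_CPU:
        status = "over-provisioned (candidate for consolidation)"
    elif cpu > SATURATED_CPU:
        status = "under-provisioned (candidate for more capacity)"
    else:
        status = "within normal range"
    print(f"{name}: {cpu:.0f}% CPU, {power} W of {rated} W "
          f"({headroom} W headroom) -> {status}")
```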

Adding resources

The concept of converged infrastructure applies to data center energy management. You can deploy a particular workload onto an IT infrastructure that is optimally designed to run efficiently, and to keep running efficiently, so that you know you're getting the most productive work from the least energy and from the most energy-efficient infrastructure sitting underneath it.

As workloads grow over time, you then have the auditing capability built into the software ... so that you can add more resources to that pool to run that application. You're not over-provisioning from the start and you're not under-provisioning, but you're getting the optimal settings over time. That's what's really important for energy, as well as efficiency, as well as operating within a data center environment.

You must have tools, software, and hardware that are not only efficient, but can be optimized and run in an optimized way over a long period of time.

Collect information

The key to that is to understand where the power is going. One of the first things we recommend to a client is to look at how much power is being brought into a data center and then where is it going.

What you want to do is start collecting that information through software to find out how much power is being absorbed by the different pieces of IT equipment and associate that with the workloads that are running on them. Then, you have a better view of what you're doing and how much energy you're using.
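A minimal sketch of that bookkeeping, assuming per-device power readings are already available from a metering or management layer, might look like this; the sample readings and the device-to-workload mapping are invented for illustration.

```python
# Hypothetical sketch: roll up per-device power samples and attribute the
# energy to the workloads running on each device. Readings and the
# device-to-workload mapping are invented for illustration.

from collections import defaultdict

# (device, watts) samples, e.g. polled hourly from a metering layer
power_samples = [
    ("rack1-srv01", 350), ("rack1-srv01", 360),
    ("rack1-srv02", 290), ("rack2-stor01", 510),
]

workload_on_device = {
    "rack1-srv01": "order-processing",
    "rack1-srv02": "order-processing",
    "rack2-stor01": "archive-storage",
}

SAMPLE_INTERVAL_HOURS = 1.0

energy_by_workload = defaultdict(float)  # kWh per workload
for device, watts in power_samples:
    kwh = watts * SAMPLE_INTERVAL_HOURS / 1000.0
    energy_by_workload[workload_on_device[device]] += kwh

for workload, kwh in sorted(energy_by_workload.items()):
    print(f"{workload}: {kwh:.2f} kWh over the sampled period")
```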

Then, you can do some analysis and use some applications like HP SiteScope to do some performance analysis, to say, "Could I match that workload to some other platform in the infrastructure or am I running it in optimal way?"

Over time, you can migrate some of your older legacy workloads to more efficient, newer IT equipment, and thereby build up a buffer in your data center, so that you can then go deploy new workloads in that same data center.

You use that software to your benefit, so that you're freeing up capacity, so that you can support the new workload that the businesses need.
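As a back-of-the-envelope illustration of the buffer this builds, the sketch below estimates the power freed by consolidating older servers onto fewer, more efficient replacements. The wattages and consolidation ratio are assumed figures, not HP data.

```python
# Hypothetical sketch: estimate power capacity freed by migrating legacy
# workloads onto newer, more efficient servers. All numbers are assumptions.

legacy_servers = 40          # older boxes to be retired
legacy_watts_each = 450      # average draw of each legacy server
new_watts_each = 300         # average draw of each replacement server
consolidation_ratio = 4      # legacy workloads hosted per new server

new_servers = -(-legacy_servers // consolidation_ratio)  # ceiling division
power_before = legacy_servers * legacy_watts_each
power_after = new_servers * new_watts_each
freed_watts = power_before - power_after

print(f"Before: {power_before/1000:.1f} kW on {legacy_servers} legacy servers")
print(f"After:  {power_after/1000:.1f} kW on {new_servers} new servers")
print(f"Freed:  {freed_watts/1000:.1f} kW available for new workloads")
```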

The energy curve today is growing at about 11 percent annually; that's the rate at which IT spending on data-center energy is increasing.
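To see what an 11 percent annual growth rate implies, a quick compounding calculation (with an illustrative starting figure) shows the energy bill roughly doubling in about seven years.

```python
# Worked example: compound an illustrative energy bill at 11% per year.
annual_growth = 0.11
spend = 1_000_000  # assumed starting annual energy spend in dollars

for year in range(1, 8):
    spend *= 1 + annual_growth
    print(f"Year {year}: ${spend:,.0f}")
# At 11% growth the bill roughly doubles in about 7 years (1.11**7 ≈ 2.08).
```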



Bennett: That's really key, Doug, as a concept, because the more you do at this infrastructure level, the less you need to change the facilities themselves. Of course, the issue with facilities-related work is that it can affect quality of service, cause outages, and end up costing you a pretty penny if you have to retrofit or design new data centers.

Oathout: Smart Grid for Data Centers gives a CIO or a data-center manager a blueprint to manage the energy being consumed within their infrastructure. The first thing that we do with a Data Center Smart Grid is map out what is hooked up to electricity in the data center, everything from PDUs, UPSs, and air handlers to the IT equipment: servers, networking, and storage. It's really understanding how that all works together and how the whole topology comes together.
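A toy data structure for that kind of map might look like the following; the device names and hierarchy are invented purely to illustrate tracing every piece of IT equipment back to its power feed.

```python
# Hypothetical sketch: represent the power topology of a data center so
# every device can be traced back to its UPS and PDU. Names are invented.

power_topology = {
    "ups-A": {
        "pdu-A1": ["rack1-srv01", "rack1-srv02", "rack1-switch01"],
        "pdu-A2": ["rack2-stor01"],
    },
    "ups-B": {
        "pdu-B1": ["crah-01"],  # computer-room air handler on its own feed
    },
}

def feed_for(device):
    """Return the (UPS, PDU) pair feeding a given device, if known."""
    for ups, pdus in power_topology.items():
        for pdu, devices in pdus.items():
            if device in devices:
                return ups, pdu
    return None

print(feed_for("rack2-stor01"))  # -> ('ups-A', 'pdu-A2')
```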

The second thing we do is visualize all the data. It's very hard to say that this server, that server, or that piece of facilities equipment uses this much power and has this kind of capacity. You really need to see the holistic picture, so you know where the energy is being used and understand where the issues are within a data center.

It's really about visualizing that data, so you can take action on it. Then, it's about setting up policies and automating those procedures to reduce the energy consumption or to manage energy consumption that you have in the data center.

Today, our servers and our storage are much more efficient than the ones we had three or four years ago, but we also add the capability to power cap a lot of the IT equipment. Not only can you get an analysis that says, "Here is how much energy is being consumed," you can actually set caps on the IT equipment that say it can't use more than this. Not only can you monitor and manage your power envelope, you can actually get a very predictable one by capping everything in your data center.

You know exactly how much the max power is going to be for all that equipment. Therefore, you can do much better planning. You get much more efficiency out of your data center, and you get more predictable results, which is one of the things that IT really strives for: meeting an SLA and delivering those predictable results, day in and day out.
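A simple planning sketch, under the assumption that every device honors its cap, is to sum the per-device caps and compare the total against the power available to the room. All of the figures below are invented.

```python
# Hypothetical sketch: with per-device power caps, worst-case draw is just
# the sum of the caps, which makes capacity planning predictable.
# All caps and the room budget are invented figures.

device_caps_watts = {
    "rack1-srv01": 300,
    "rack1-srv02": 300,
    "rack2-stor01": 450,
    "rack1-switch01": 120,
}

room_power_budget_watts = 1500

worst_case = sum(device_caps_watts.values())
margin = room_power_budget_watts - worst_case

print(f"Worst-case draw with caps: {worst_case} W")
print(f"Room budget:               {room_power_budget_watts} W")
print(f"Margin for new equipment:  {margin} W")
```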

Mapping infrastructure

So, really, Data Center Smart Grid for the infrastructure is about mapping the infrastructure. It's about visualizing it to make decisions. Then, it's about automating and capping what you've got, so you have more predictable results and you're managing it, so that you're not having outages, you're not having problems in your data centers, and you're meeting your SLAs.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.
