
Monday, June 6, 2011

HP rolls out EcoPOD modular data center, provides high-density converged infrastructure with extreme energy efficiency

LAS VEGAS – HP today at Discover here unveiled what it says is the world’s most efficient modular data center, a compact and self-contained Performance Optimized Data Center (POD) that supports more than 4,000 servers in 10 percent of the space and with 95 percent less energy than conventional data centers.

The HP POD 240a also costs 25 percent of what a traditional data center costs up front, and it can be deployed in 12 weeks, said HP. It houses up to 44 industry-standard racks of IT equipment.

The EcoPOD joins a spectrum of other modular data center offerings, filling a gap between offerings like the shipping-container-sized Custom PODs and the HP POD 20c and 40c, and the larger brick-and-mortar HP Flexible Data Center facilities. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The EcoPOD can be filled with HP blade servers and equipment, but it also supports third-party servers; it is optimized for HP converged infrastructure components, however. HP says the EcoPOD can be ordered and delivered in three months, and then requires only power and network connections to become operational.

The modular design, low capital and operating costs, and rapid deployment will be of interest to cloud providers, Web 2.0 application providers, government, and oil industry users. I was impressed with its potential role in business continuity and disaster recovery. The design and attributes will also help organizations that need physical servers in a particular geography or jurisdiction for compliance and legal reasons, keeping costs low despite the redundancy of the workloads.

The HP EcoPOD also provides maximum density for data center expansion or as temporary capacity during data center renovations or migrations, given that it streamlines a 10,000-square-foot data center into a compact, modular package in one-tenth the space, said HP.

The design allows for servers to be added and subtracted physically or virtually, and the cooling and energy use can be dialed up and down automatically based on load and climate, as well as via set policies. It can use outside air when appropriate for cooling ... like my house most of the year.
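
As a rough sketch of what that kind of policy-driven control can look like, here is a generic, hypothetical economizer-style decision. It is not HP's actual control logic; the thresholds and names are invented for illustration:

```python
# Hypothetical, simplified economizer-style cooling decision -- illustrative only,
# not HP's Environmental Control System logic. Thresholds are invented; a real
# controller would also weigh humidity, IT load trends, and site policies.
def choose_cooling_mode(outside_temp_c: float, it_load_kw: float,
                        free_air_max_c: float = 24.0) -> str:
    if outside_temp_c <= free_air_max_c:
        return "outside-air"            # free cooling when the climate allows
    if it_load_kw < 100:
        return "partial-mechanical"     # scale fans/chillers down to the load
    return "full-mechanical"

print(choose_cooling_mode(outside_temp_c=18.0, it_load_kw=250))  # -> outside-air
```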

The HP POD 240a is complemented by a rich management capability, the HP EcoPOD Environmental Control System, which includes its own APIs, remote dashboards, and control suite, as well as remote client access from tablet computers, said HP.

The cost savings are eye-popping. HP says an HP POD 240a costs $552,000 a year to operate, versus $15.4 million in energy costs for a traditional data center.
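
Taken at face value, those two figures imply roughly a 96 percent reduction in annual operating cost, in line with the energy claim above. A quick back-of-the-envelope check (the dollar amounts are HP's; the percentage is simply derived from them):

```python
# Derive the operating-cost reduction from HP's quoted annual figures.
ecopod_annual_cost = 552_000          # USD per year, HP POD 240a (as quoted)
traditional_annual_cost = 15_400_000  # USD per year, traditional data center (as quoted)

reduction = 1 - ecopod_annual_cost / traditional_annual_cost
print(f"Operating-cost reduction: {reduction:.1%}")  # about 96.4%
```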

Built at a special HP facility in Houston, HP POD-Works, the EcoPODs will be available in the fourth quarter of this year in North America, rolling out globally into 2012.

HP is also offering leasing arrangements, whereby the costs of the data center are all operating expenses, with little up-front cost.

Monday, November 10, 2008

Solving IT energy conservation issues requires holistic approach to management and planning, say HP experts

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

The critical and global problem of energy management for IT operations and data centers has emerged as both a cost and capacity issue. The goal is to find innovative means to conserve electricity use so that existing data centers don't need to be expanded or replaced -- at huge cost.

Closely matching a tight energy supply with the lowest possible IT energy demand means considering the entire IT landscape. That requires an enterprise-by-enterprise examination of the "many sins" of energy mismanagement. Wasted energy, it turns out, has its origins all across IT and business practices.

To learn more about how enterprises should begin an energy-conservation mission, I recently spoke with Ian Jagger, Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group, and Andrew Fisher, manager of technology strategy in the Industry Standard Services group at HP.

Here are some excerpts:
Data centers typically were not designed for the computing loads that are available to us today ... (and so) enterprise customers are having to consider strategically what they need to do with respect to their facilities and their capability to bring enough power to be able to supply the future capacity needs coming from their IT infrastructure.

Typically the cost of energy is now approaching 10 percent of IT budgets and that's significant. It now becomes a common problem for both of these departments (IT and Facilities) to address. If they don't address it themselves then I am sure a CEO or a CFO will help them along that path.

Just the latest generation server technology is something like 325 percent more energy efficient in terms of performance-per-watt than older equipment. So simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.
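
To put that figure in perspective, "325 percent more" performance per watt works out to roughly 4.25 times the old ratio, so the same workload should need under a quarter of the power. A hedged sketch of the arithmetic (the baseline wattage below is a made-up example; only the 325 percent figure comes from the discussion):

```python
# What "325 percent more performance per watt" implies for a fixed workload.
improvement = 3.25                          # 325% more performance per watt (quoted)
new_perf_per_watt_ratio = 1 + improvement   # 4.25x the old ratio

old_power_w = 10_000                        # hypothetical power draw of an older fleet
new_power_w = old_power_w / new_perf_per_watt_ratio
print(f"Same work on about {new_power_w:,.0f} W "
      f"({new_power_w / old_power_w:.0%} of the original power)")
```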

Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs. One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money.

You need to take a complete end-to-end solution that involves everything from analysis of your operational processes and behavioral issues, how you are configuring your data center, whether you have hot-aisle or cold-aisle configurations, these sorts of things, to trying to optimize the performance or the efficiency of the power delivery, making sure that you are getting the best performance per watt out of your IT equipment itself.

The best way of saving energy is, of course, to turn the computers off in the first place. Underutilized computing is not the greatest way to save energy. ... If you look at virtualizing the environment, then the facility design or the cooling design for that environment would be different. If you weren't in a virtualized environment, suddenly you are designing something around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria.

You’re using four to eight times the wattage in comparison. That, in turn, requires stricter floor management. ... But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well.

If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.
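
That multiplying effect can be sketched with power usage effectiveness (PUE) as the overhead factor; the PUE value below is an assumed, illustrative number, not one cited in this discussion:

```python
# Every IT watt removed also removes the power-delivery and cooling overhead
# that supported it. The PUE here is an assumed, illustrative value.
pue = 2.0                 # assumed: 2 W drawn from the utility per 1 W of IT load
it_watts_saved = 1_000    # hypothetical IT-load reduction from upgrades/virtualization

total_watts_saved = it_watts_saved * pue
print(f"Eliminating {it_watts_saved} W of IT load saves about "
      f"{total_watts_saved:.0f} W at the utility feed.")
```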

This is a complex system. When you look at the total process of delivering the energy from where it comes in from the utility feed, distributing it throughout the data center with UPS capability or backup power capability, through the actual IT equipment itself, and then finally with the cooling on the back end to remove the heat from the data center, there are a thousand points of opportunity to improve the overall efficiency.

We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is really an embodiment of the Adaptive Infrastructure at work in terms of increasing the efficiency of our customers' IT assets and making them more efficient.

To complicate it even further, there are lot of organizational or behavioral issues that Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve.

The principal problem is that they tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But, typically we can get beyond that and look over annualized values of energy usage and then take measurements from that point.

So, there is rarely a single silver bullet to solve this complex problem. ... The approach that we at HP are now taking is to move toward a new model, which we called the Hybrid Tiered Strategy, with respect to the data center. In other words, it’s a modular design, and you mix tiers according to need.

One thing that was just announced is relevant to what Ian was just talking about. We announced recently the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.

This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.

... There’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.

... There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.