Wednesday, June 8, 2011

Deep-dive panel discussion on HP's new Converged Infrastructure, EcoPOD and AppSystem releases at Discover

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

This latest BriefingsDirect panel discussion on converged infrastructure and data center transformation explores the major news emerging from this week's HP Discover 2011 conference in Las Vegas.

HP has updated and expanded its portfolio of infrastructure products and services, debuted a mini, mobile data center called the EcoPOD, unveiled a unique dual cloud bursting capability, and rolled out a family of AppSystems, appliances focused on specific IT solutions like big data analytics.

To put this all in context, a series of rapidly maturing trends around application types, cloud computing, mobility, and changing workforces is reshaping what high-performance and low-cost computing is about. In just the past few years, the definition of what a modern IT infrastructure needs and what it needs to do has finally come into focus. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We know, for example, that we’ll see most data centers converge their servers, storage, and network platforms intelligently for high efficiency and for better management and security. We know that we’ll see higher levels of virtualization across these platforms and for more applications, and that, in turn, will support the adoption of hybrid and cloud models.

We’ll surely see more compute resources devoted to big data and business intelligence (BI) values that span ever more applications and data types. And of course, we’ll need to support far more mobile devices and distributed, IT-savvy workers.

How well companies modernize and transform these strategic and foundational IT resources will then hugely impact their success in managing their own agile growth and in controlling ongoing costs and margins. Indeed, the mingling of IT success and business success is clearly inevitable.

So, now comes the actual journey. At HP Discover, the news is largely about making this inevitable future happen more safely by being able to transform the IT that supports businesses in all of their computing needs for the coming decades. IT executives must execute rapidly now to manage how the future impacts them and to make rapid change an opportunity, not an adversary.

How to execute

Please then meet the panel: Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solutions for HP Enterprise Business; Jon Mormile, Worldwide Product Marketing Manager for Performance-Optimized Data Centers in HP's Enterprise Storage Servers and Networking (ESSN) group within HP Enterprise Business; Jason Newton, Manager of Announcements and Events for HP ESSN, and Brad Parks, Converged Infrastructure Strategist for HP Storage in HP ESSN. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Tang: Last year, HP rolled out this concept of the Instant-On Enterprise, and it's really about the fact that we all live in very much an instant-on world today. Everybody demands instant gratification, and to deliver that and meet their constituents' needs, an enterprise really needs to become more agile and innovative, so it can scale up and down dynamically to meet these demands.

In order to get answers straight from our customers on how they feel about the state of agility in their enterprise, we contracted with an outside agency and conducted a survey earlier this year with over 3,000 enterprise executives. These were CEOs, CIOs, CFOs across North America, Europe, and Asia, and the findings were pretty interesting.

Less than 40 percent of our respondents said, "I think we are doing okay. I think we have enough agility in the organization to be able to meet these demands."

Not surprising

The number is low, but not very surprising to those of us who have worked in IT for a while. As you know, compared to other enterprise disciplines, IT is a little bit pre-Industrial Revolution. It's not as streamlined. It's not as standardized. There's a long way to go. That clearly spells out a big opportunity for companies to work on that area and optimize for agility.

We also asked, "What do you think is going to change that? How do you think enterprises can increase their agility?" The top two responses coming back were about more innovative, newer applications.

But the number one response, coming from CEOs, was transforming their technology environment. That's precisely what HP believes. We think transforming that environment and, by extension, converged infrastructure is the fastest path not only toward enterprise agility, but also toward enterprise success.

Storage innovation news

Parks: A couple of years ago, HP took a step back from the trajectory that we were on as a storage business and the trajectory that the storage industry as a whole was on. We took a look at some of the big trends and problems that we were starting to hear from customers around virtualization, the move to cloud computing, and this concept of really big everything.

We’re talking about data, numbers of objects, size, performance requirements, just everything at massive, massive scale. When we took a look at those trends, we saw that we were really approaching a systemic failure of the storage that was out there in the data center.

The challenge is that most of the storage deployed out in the data center today was architected about 20 years ago for a whole different set of data-center needs, and when you couple that with these emerging trends, the current options at that time were just too expensive.

They were too complicated at massive scale and they were too isolated, because 20 years ago, when those solutions were designed, storage was its own element of the infrastructure. Servers were managed separately. Networking was managed separately, and while that was optimized for the problems of the day, it in turn created problems that today’s data centers are really dealing with.

Thinking about that trajectory, we decided to take a different path. Over the last two years, we’ve spent literally billions of dollars through internal innovation, as well as some external acquisitions, to put together a portfolio that was much better suited to address today’s trends.

Common standard

At the event here, we're talking about HP Converged Storage, and this addresses some of the gaps that we've seen in the legacy monolithic and even the legacy unified storage that's out there. Converged Storage is built on a few main principles. First, we're trying to drive toward common industry-standard hardware, building on ProLiant and BladeSystem DNA.

We want to drive a lot more agility into storage in the future by using modern Scale-Out software layers. And last, we need to make sure that storage is incorporated into the larger converged infrastructure and managed as part of a converged stack that spans servers and storage and network.

When we're able to design on industry-standard platforms like BladeSystem and ProLiant, we can take advantage of the massive supply chain that HP has and roll out solutions at a much lower upfront cost from a hardware perspective.

Second, using that software layer I mentioned, one of the technologies we bring to bear is thin provisioning. This is a technology that helps customers cut their initial capacity requirements by around 50 percent, simply by eliminating the over-provisioning associated with some of the legacy storage architectures.
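That thin-provisioning math can be illustrated with a toy model (purely illustrative; this is not how any HP array actually allocates pages): thick provisioning reserves the full requested capacity up front, while thin provisioning grows physical capacity only as data is actually written.

```python
# Toy model of thick vs. thin provisioning (illustrative assumptions only).

def thick_provision(requested_tb):
    """Thick: reserve everything the application asks for, used or not."""
    return requested_tb

def thin_provision(requested_tb, utilization):
    """Thin: physical capacity tracks the data actually written."""
    return requested_tb * utilization

# A typically over-provisioned environment: applications request 100 TB
# but have actually written only half of it.
requested = 100.0    # TB requested by applications
utilization = 0.5    # fraction of requested capacity actually written

thick = thick_provision(requested)             # capacity bought up front
thin = thin_provision(requested, utilization)  # capacity bought under thin provisioning
savings = 1 - thin / thick                     # the roughly 50 percent cited above
```

The 50 percent figure is simply the over-provisioning gap: whatever fraction of requested capacity goes unwritten is capacity a thin-provisioned array never has to buy.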

One of the things we've seen and talk about with customers worldwide is that data just doesn't go away. It is around forever.



Then, operating expense is the other place where storage really gets expensive. That's where it helps to consolidate management across servers, storage, and networking, build in as much automation as possible, and even make the solutions self-managing.

For example, our 3PAR Storage solution, which is part of this converged stack, has autonomic management capabilities that, when we talk to our customers, have reduced some of their management overhead by about 90 percent. It's self-managing and can load balance, and because of its wide-striping architecture, it can respond to some of the unpredictable workloads in the data center without requiring additional administrative overhead.
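The wide-striping idea behind that self-managing behavior can be sketched as follows (a simplified illustration, not 3PAR's actual chunklet mechanics): each volume is cut into small chunks spread round-robin across all drives, so no single drive becomes a hotspot and new workloads balance themselves without administrator tuning.

```python
def stripe_volume(num_chunks, num_drives):
    """Place each chunk of a volume on a drive, round-robin across ALL drives."""
    placement = {drive: [] for drive in range(num_drives)}
    for chunk in range(num_chunks):
        placement[chunk % num_drives].append(chunk)
    return placement

# A 1,000-chunk volume spread over 40 drives: every drive carries an
# equal share of the volume, so I/O load is balanced by construction.
placement = stripe_volume(num_chunks=1000, num_drives=40)
chunks_per_drive = [len(chunks) for chunks in placement.values()]
```

Because balance falls out of the placement rule itself, no administrator has to decide which drives a new volume should live on, which is the source of the reduced management overhead described above.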

Converged Infrastructure

Newton: We're really excited about the AppSystems announcements. We're in a great position as HP to be the one to deliver on the promise of converging servers, storage, networking, management, security, and applications into individual solutions.

So, 2009 was about articulating the definition of what that should look like and what the data center of the future should be. Last year, we spent a lot of time on new innovations in blades and mission-critical computing, and on strategic acquisitions around storage, networking, and other areas.

The result last year was what we believe is one of the most complete portfolios from a single vendor in the marketplace for delivering converged infrastructure. Now, what we're doing in 2011 is building on that to bring it all together, simplify it into integrated solutions, and extend that strategy all the way out to the application.

If we look at what kinds of applications customers are deploying today and the ways that they're deploying them, we see three dominant new models coming to bear. One is applications in a virtualized environment, running on virtual machines, with very specific requirements for performance and concerns about security.

We see a lot of acceleration and interest in applications delivered as a service via cloud. Security concerns also create new demands on capacity and resource planning, and on automation and orchestration of all the bits and bytes of the application and the infrastructure.

The third model we wanted to address is the dedicated application environment. These are data warehousing and analytics types of workloads, and collaboration workloads, where performance is really critical and you want it not on shared resources, but in a dedicated way. But you also want to make sure that it supports applications in a cloud or virtual environment.

So 2011 is about bringing that portfolio together into solutions that solve those three problems. The key thing is that we didn't want to extend sprawl and continue the problem that's still out there in the marketplace. We wanted to do all that on one common architecture, one common management model, and one common security model.

Individual Solutions

What if we could take that common architecture, management, and security model, optimize it, integrate it into individual solutions for those three different application sets, and do it on the equipment that customers are already using in their legacy application environments today? Then they'd have something really special.

What we're announcing this week at Discover is a new portfolio we call Converged Systems. For the virtual workload, we have VirtualSystems. For the dedicated application environment (specifically BI, data management, and information management), we have the AppSystems portfolio. Then, for where most customers want to go in the next few years, cloud, we announced CloudSystem.

So, those are three portfolios in which a common architecture addresses the complete continuum of customers' application demands. What's unique here is doing that in a common way, built on some of the best-of-breed technologies on the planet for virtualization, cloud, and high-performance BI and analytical applications.

Our acquisition of Vertica powers the BI appliance. The architecture is one of the most modern architectures out there today to handle the analytics in real time.

Before, analytics in a traditional BI data warehouse environment was about reporting. Call up the IT manager, give them some criteria. They go back, do their wizardry, and come back with a sort of status report, and it's looking only at the dataset in whichever data store they happen to be querying.

It sort of worked, I guess, back when you didn't need to have that answer tomorrow or next week. You could just wait till the next quarterly review. With the demands of big everything, as Brad was saying, and the speed and scale at which the economy, the business, and the competition are moving, you've got to have this stuff in real time.

So we said, "Let's go make a strategic acquisition. Let's get the best-in-class, real-time analytics, a modern architecture that does just that and does it extremely well. And then, let's combine that with the best hardware underneath it, HP Converged Infrastructure, so that customers can very easily and quickly bring that capability into their environment and apply it in a variety of different ways, whether in individual departments or across the enterprise."

Real-time analytics

There are endless possibilities for taking advantage of real-time analytics with this solution. Including it in an AppSystem makes it very easy to consume: bring it into the environment, get it up and running, start connecting data sources literally in minutes, and start running queries and getting answers back in literally seconds.

What’s special about this approach is that most analytic tools today are part of a larger data warehouse or BI-centered architecture. Our argument is that in the future of this big everything thing that’s going on, where information is everywhere, you can’t just rely on the data sources inside your enterprise. You’ve got to be able to pull sources from everywhere.

In buying a monolithic, one-size-fits-all system that does OLTP, data warehousing, and a little bit of analytics, you're sacrificing the real-time aspect that you need. So keep the OLTP environment, keep the data warehouse environment, bring in best-in-class real-time analytics on top of them, and give your business very quickly some very powerful capabilities to help make better business decisions much faster.
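One reason a column-oriented engine such as Vertica can deliver that real-time aspect is worth sketching (an illustrative model, not Vertica's implementation): an analytic aggregate over one column touches only that column's data, while a row-oriented scan drags every field of every row along with it.

```python
# Illustrative row-store vs. column-store comparison for one aggregate query.
rows = [
    {"order_id": i, "region": "west" if i % 2 else "east",
     "amount": float(i), "notes": "x" * 100}
    for i in range(1, 6)
]

# Row-store view: scanning for SUM(amount) reads every field of every row.
bytes_touched_rowstore = sum(
    len(str(value)) for row in rows for value in row.values())

# Column-store view: the same SUM(amount) reads only the amount column.
amount_column = [row["amount"] for row in rows]
bytes_touched_columnar = sum(len(str(value)) for value in amount_column)

total = sum(amount_column)  # the query answer is identical either way
```

With wide rows (the 100-character notes field here stands in for all the columns an analytic query never asks about), the columnar scan touches a small fraction of the data, which is where the speed comes from.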

Data center efficiency

Mormile: When you talk about today's data centers, most of them were built 10 years ago, and a lot of our analysts' research suggests they were built almost 14-15 years ago. These antiquated data centers simply can't support the infrastructure that today's IT and businesses require. They are extremely inefficient. Most of them require two to three times the amount of power to run the IT, due to inefficient cooling and power-distribution systems.

In addition, these monolithic data centers are typically over-provisioned and underutilized. Because most companies cannot build new facilities continually, they have to forecast future capacity and infrastructure requirements, which are typically outdated before the data centers are even commissioned.

A lot of our customers need to reduce construction cost, as well as operational expenses. This places a huge strain on companies' resources and their bottom lines. By not changing their data center strategy, businesses are throttled and simply just can’t compete in today’s aggressive marketplace.

HP has a solution: our modular computing portfolio, which helps to solve these problems.

Modular computing

Our modular computing portfolio started about three years ago, when we first took a look at and modified an actual shipping container, turning it into a Performance Optimized Data Center (POD).

This was followed by continuous innovation in the space: new POD designs; the deployment of our POD-Works facility, the world's first assembly line for data centers; the addition of a flexible data center product; and today, our newest addition, the POD 240a, which delivers all the benefits of a container data center without sacrificing the traditional data center look and feel.

Also, with the acquisition of EYP, which is now HP Critical Facilities Services, and by utilizing HP Technical Services, we are able to offer a true end-to-end data center solution, from planning and installation of the IT and the optimized infrastructure to go with it, to onsite maintenance and support globally.

When you combine that with in-house rack and power engineering, delivering finely tuned solutions to meet customers' growing power and rack needs, it all comes together. You're talking about taking that IT and those innovations to the next level by integrating them into a turnkey solution, which would actually be a POD or modular data center product.

You take the POD, and then you add the Factory Express services, where we are actually able to take the IT and integrate it into the POD: the servers, storage, and networking. The applications are integrated, and everything is cabled and tested.

The final step in the POD process is that, beyond Factory Express services, we're also providing POD-Works. At POD-Works, we take the integrated racks that will be installed in the PODs and provide power, networking, and chilled water and cooling, so that every aspect of the turnkey data center solution is pre-configured and pre-tested. This way, customers have a fully integrated data center shipped to them. All they need to do is plug in the power and networking and/or add chilled water.

Game changer

Being able to have a complete data center on site, up and running, in as little as six weeks is a tremendous game changer in the business, allowing customers to be more agile and more flexible, not only with their IT infrastructure needs, but also with their capital and operational expenses.

When you bring all that together, PODs offer customers the ability to deploy fully integrated, high-performing, efficient, and scalable data centers at around a quarter of the cost, up to 95 percent more efficiently, and 88 percent faster than with traditional brick-and-mortar data center strategies.

Start services

Newton: There are a multitude of professional services and support announcements at this show. We have some new professional services; I call them start services. We have an AppStart, a CloudStart, and a VirtualStart service. These are services where we engage with the customer, sit down, and assess their level of maturity -- what they have in place, what their goals are.

These services are designed to get each of these systems into the environment, integrated into what you have, optimized for your goals and your priorities, and up and running in days or weeks, versus the months or years that process would have taken in the past. We do that very quickly and simply for the customer.

We have a lot of expertise in these areas that we've been building over the last 20 years. Just as we're simplifying on the hardware, software, and application side, these start services do the same thing. That extends to HP Solution Support, which then kicks in and helps you support that solution across its lifecycle.

There is a whole lot more, but those are two really key ones that customers are excited about this week.

Parks: HP ExpertONE has also recently come out with a full set of training and certification courseware to help our channel partners, as well as customers' internal IT staff, learn about these new storage elements and how they can use these architectures to help transform their information management processes.

Tang: This set of announcements makes significant additions in each of its markets, with the potential to transform, for example, storage, shaking up an industry that's been pretty static for the last 20 years by offering a completely new architecture designed for the world we live in today.

That's the kind of innovation we'll drive across the board with our customers. Everybody who spoke before me talked about the service offerings we bring along with these new product announcements, and I think that's key. The combination of our portfolio and our expertise is really going to help our customers drive that success and embrace convergence.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Talend brings unified data integration platform to public clouds

Talend, an open-source middleware provider, today announced Talend Cloud, a unified integration platform for the cloud and hybrid IT environments.

An extension of the recently announced Unified Integration Platform, Talend Cloud is designed for organizations looking to manage their data integration processes, whether on-premise, in the cloud, via software as a service (SaaS) or for hybrid environments. [Disclosure: Talend is a sponsor of BriefingsDirect podcasts.]

Talend is not providing its own public cloud offering at this time, but is making Talend Cloud available now to enable other cloud and enterprise hybrid users to manage data via a community-enhanced portfolio of data and services connectors.

For organizations with hybrid IT environments – that combine on-premise, private cloud, public cloud and SaaS – application and data integrations are difficult, yet critical to leveraging these multi-sourced models. Concerns surrounding latency, bandwidth, permissions and security are causing new forms of integration and data management challenges.

Talend Cloud provides flexible and secure integration of on-premise systems, cloud-based systems, and SaaS applications, said Talend. It also provides a common environment for users to manage the lifecycle of integration processes including a graphical development environment, a deployment mechanism and runtime environment for operations and a monitoring console for management – all built on top of a shared metadata repository.

It strikes me that these services are directly applicable to business intelligence and master data management for analytics, as the data can be cleansed, accessed, and crunched in clouds, even as it originates from multiple locations. Hybrid cloud data analytics can be very powerful, and Talend is helping to jump-start this value.

“Although cloud has become ubiquitous in today’s IT deployments, many organizations are still trying to determine how to function in hybrid environments,” said Bertrand Diard, co-founder and CEO of Talend, in a release. “Using Talend Cloud, customers can address these issues within a single platform that addresses a broad range of integration needs and technologies, ranging from data-oriented services to data quality and master data management, via a unified environment and a flexible deployment model.”

Deployment Flexibility

The new platform provides deployment flexibility for Talend’s solutions and technologies within the Unified Integration Platform, including data integration, data quality, master data management and enterprise service bus. All components can be installed transparently in the cloud, on premise, or in hybrid mode. Key features include:
  • The ability to expand and contract deployments as required
  • Support for standard systems and protocols
  • An open-source model that makes resources accessible by a variety of platforms and devices
  • Modular architecture that allows organizations to add, modify or remove functionality as requirements change over time
  • The ability to maintain security and reliability of integration, allowing organizations to meet customer service-level agreements (SLAs)
Talend Cloud provides automated deployment on popular cloud platforms such as Amazon EC2, Cloud.com and Eucalyptus. Also included are new connectors offering native connectivity to a broad range of key cloud technologies and applications, as well as the most popular SaaS applications.

New connectors continue to be added on a regular basis, either by the open-source community or by Talend's R&D organization. The Talend Exchange provides the latest connectors, which can be downloaded and installed directly within Talend Studio at no per-connector cost.

Talend Cloud is available immediately. More information is available at http://www.talend.com/products-talend-cloud/.


Tuesday, June 7, 2011

HP takes the plunge on dual cloud bursting: public and/or private apps support comes of age

LAS VEGAS – HP today at Discover introduced advancements to its CloudSystem solutions, with the means for cloud providers and enterprises to accomplish dual cloud bursting, one of the Holy Grails of hybrid computing.

CloudSystem targets service providers by giving them the ability to let their enterprise customers extend the bursting capabilities of their private cloud-based applications to third-party public clouds as well. See more news on the HP AppSystems portfolio.

HP CloudSystem, announced in January and expanded in the spring with a partner program, is designed to enable enterprises and service providers to build and manage services across private, public and hybrid cloud environments. As a result, clients have a simplified yet integrated architecture that is easier to manage and can be scaled on demand, said HP. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

In a demo on stage here today at Discover, HP's Dave Donatelli, executive vice president and general manager of Enterprise Servers, Storage and Networking for the Enterprise Business at HP, showed some unique features. The HP CloudSystem demo showed heterogeneous cloud bursting with drag and drop, on HP and third-party x86 boxes. Management and setup looked simple and automatic.

HP CloudSystem should appeal to both cloud providers and enterprises, because it forms a common means to get them both onto the cloud options spectrum. HP dual bursting works for public clouds whether or not they use HP CloudSystem, said HP.

HP CloudSystem dual bursting also seems to allow tiered bursting, with data on the private cloud and the web tier on public clouds, said HP. This seems quite new and impactful. And it's available now.

Based on HP Converged Infrastructure and HP Cloud Service Automation software, HP CloudSystem helps automate the application-to-infrastructure lifecycle and adds deployment flexibility for operations, said HP. HP CloudSystem helps businesses package, provision and manage cloud services for users regardless of where those services are sourced, whether from CloudSystem's "on-premises" resources or from external clouds.

Managing application resources as elastic compute fabrics that span an enterprise's data centers and one or more public cloud partners offers huge benefits. Businesses that depend heavily on customer-facing applications, for example, can hone utilization rates and vastly reduce total cost of ownership, while greatly reducing the risk that those applications and their data will be unavailable, regardless of seasonal vagaries, unexpected spikes or any issues around business continuity.

"Capacity never runs out," said James Jackson, Vice President for Marketing Strategy, Enterprise Servers, Storage and Networking in HP Enterprise Business.

With CloudSystem, HP is providing the management, security and governance requirements for doing dual-burst hybrid computing, including hardware, software and support services. Automated management capabilities help ensure that performance, compliance and cost targets are met by allocating private and public cloud resources based on a client's pre-defined business policies. As a result, clients can create and deliver new services in minutes, said HP.
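The policy-driven allocation described above can be sketched in a few lines (hypothetical names and logic, not HP's actual CloudSystem API): serve demand from private capacity first, and burst only the overflow to a public cloud when policy allows.

```python
def place_workload(demand_units, private_capacity, burst_allowed=True):
    """Split demand between private capacity and public-cloud overflow.

    Serves as much as possible on-premises; the remainder bursts to a
    public cloud only if the business policy permits it.
    """
    private = min(demand_units, private_capacity)
    overflow = demand_units - private
    if overflow and not burst_allowed:
        raise RuntimeError("demand exceeds private capacity and bursting is disabled")
    return {"private": private, "public": overflow}

# Normal load fits on-premises; a seasonal spike bursts the overflow out.
normal = place_workload(80, private_capacity=100)
spike = place_workload(140, private_capacity=100)
```

This is also why "capacity never runs out" from the user's point of view: the private pool sets the cost-optimized baseline, while the public side absorbs whatever the policy lets spill over.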

HP also announced HP CloudAgile, a program that spans the HP enterprise portfolio including CloudSystem. To speed time to revenue and improve financial flexibility for a broad range of service providers, the program provides participants with direct access to HP’s global sales force and its network of channel partners.

HP expects to co-sell and co-market such hybrid services with telcos, VARs, SIs, and a wide range of new and emerging service providers. I expect many of these providers to customize their offerings, but based on an HP or other cloud stack vendor foundation.

Current approaches to cloud computing can create fragmentation and address only a portion of the capabilities required for a complete cloud solution, said HP. Over time, more enterprise applications may be sourced directly to public clouds, but for the foreseeable future private clouds and hybrid models are expected to predominate. See more news on converged infrastructure and EcoPOD developments.

HP CloudSystem is powered by HP BladeSystem with the Matrix Operating Environment and HP Cloud Service Automation. It is optimized for HP 3PAR Utility Storage, and protected by HP security solutions, including offerings from TippingPoint, ArcSight and Fortify. HP CloudSystem also supports third-party servers, storage and networking, as well as all major hypervisors, said HP.

HP said that its customers that have already invested in HP Converged Infrastructure technology can expand their current architectures to achieve private, public or complete hybrid cloud environments.

HP announced yesterday that it is making up to $2 billion available to help clients finance their way to the cloud through HP Financial Services Co., HP’s leasing and asset management subsidiary.

Furthermore, HP is offering HP Cloud Consulting Services and HP Education services for CloudSystem, including HP CloudStart, to fast track building a private cloud. HP CloudSystem Matrix Conversion Service helps transition current BladeSystem environments to CloudSystem, said HP.

HP Solution Support for CloudSystem simplifies problem prevention and diagnosis with end-to-end support for the entire environment. These services deliver solutions right-sized for the client’s environment, protect investments when transitioning from a virtual infrastructure to a private cloud solution and rapidly deploy CloudSystem in a hybrid, multi-sourced cloud environment.

HP also unveiled at Discover two new cloud security services, HP Cloud Services Vulnerability Scanning and HP Cloud Vulnerability Intelligence. Available now worldwide, the first allows cloud service providers to identify and remedy missing patches or network node vulnerabilities. The second recommends remediation to infrastructure as a service and provides actionable advice to avoid vulnerabilities before they can manifest.


Monday, June 6, 2011

HP at Discover releases converged infrastructure products and services aimed at helping IT migrate rapidly to the future

LAS VEGAS -- Cloud computing and mobility are redefining how people live, how organizations operate and how the world interacts. Enterprises must constantly adjust to meet the changing needs of users, customers and the public by driving innovation and agility through technology.

Yet IT sprawl and outdated IT models and processes are causing enterprise complexity and crippling organizations’ abilities to keep pace with enterprise demands. Enterprises know they need to change, and they also have a pretty good idea of the IT operations and support they'd like to have. Now, it's a matter of getting there.

To help mobilize IT for the new order, HP today at HP Discover announced several Converged Infrastructure solutions that improve enterprise agility by simplifying deployment and speeding IT delivery. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Designed to be a key element in an Instant-On Enterprise, these offerings are designed in coordination to help reduce IT sprawl and turn technology assets into interoperable, shared pools of resources with a common management platform. They include:
  • Converged Systems, a new portfolio of turnkey, optimized and converged hardware, software, tailored consulting and HP Solution Support services that enable users to be up and running with new applications in hours vs. months.
  • Converged Storage architecture and portfolio, which integrates HP Store360 scale-out software with HP BladeSystem and HP ProLiant hardware to reduce storage administration, speed time to service delivery, increase energy efficiency and improve access for any data type or application workload. The offerings are complemented by new Storage Consulting services.

    The new solutions announced today extend the benefits of convergence to deliver new levels of speed, simplicity and efficiency that enable clients to capitalize on and anticipate change.


  • Converged Data Center, a new class of HP Performance Optimized Data Centers (PODs) that can be deployed in just 12 weeks – and at a quarter of the cost when compared to a traditional brick-and-mortar data center. The HP POD 240a, also referred to as the “HP EcoPOD,” uses 95 percent less facilities energy. See separate blog on the EcoPOD.
  • HP Server Automation 9.1, which provides heterogeneous, automated server life cycle management for enterprise servers and applications for both converged and traditional environments. With new integrated database automation, it enables IT to significantly reduce the time it takes to achieve full application life cycle automation.
Research conducted on behalf of HP found that 95 percent of private and public sector executives consider agility important to the success of their organizations. Plus, more than two-thirds of C-suite executives believe that enterprise agility is driven by technology solutions.

For me, enterprise IT strategists now basically know what they need for their data centers to meet the coming hybrid and cloud requirements. They will be using more virtualization, relying on standard hardware, managing their servers, storage and networks with increased harmony, supporting big data business intelligence, and dealing with more mobile devices.

More ways to move to modernization

HP is coming out with data center assets and services that provide more on-ramps to modernizing core IT infrastructure than ever before. The new and augmented products can be used by many types of organizations, at any stage of maturity, to meet modern and competitive IT requirements. And they can do so knowing that capital and operating costs can be measured, managed and contained. These total IT costs are also being driven down by advancements in utilization, management, modular data center growth and pervasive energy conservation.

There are only a very few vendors that can supply an end-to-end data center transformation portfolio across the major domains of servers, storage, networking and operational management. HP is providing these globally, with holistic and strategic integration, so that operational resilience and the flexibility to scale and adapt become givens.

Most vendors are either hardware-heavy or software-centric, or lack depth in a major category like networking. HP has stated it plans to augment its software, and in the meantime is supporting best-of-breed choice on heterogeneous platforms, middleware and business applications -- including open source.

Additions to HP Technology Services were also announced, aimed at a life cycle of consulting support including strategy, assessment, design, test, implementation and training. HP Solution Support provides single-point-of-contact services for the entire turnkey solution, including third-party software.

Converged Storage for rapid response

Legacy monolithic and unified storage architectures were designed to address predictable workloads, structured data and dedicated access. Today’s requirements, however, are exactly opposite, with unpredictable application workloads, such as cloud, virtualization and big data applications, said Martin Whittaker, Vice President for Systems and Solutions Engineering, Enterprise Servers, Storage and Networking (ESSN), HP Enterprise Business.

HP’s Converged Storage architecture changes how data is accessed by integrating scale-out storage software with converged server and storage hardware platforms. Advanced management tools that span the architecture help speed IT service delivery. As a result, users can deploy and grow storage 41 percent faster while reducing administration time by up to 90 percent.

New solutions include:
  • HP X5000 G2 Network Storage Systems, which are built on HP BladeSystem technology. They can be deployed in minutes and reduce power requirements up to 58 percent, cooling needs up to 63 percent and storage footprint up to 50 percent.
  • HP X9000 IBRIX Network Storage Systems that optimize retention of unstructured data with new compliance features and the capacity for more than one million snapshots. The solution provides insight to the enterprise on trends, market dynamics and other pertinent facts by simplifying management of massive data sets. Policy management capabilities automate the movement of data to optimize resources.
Enhanced storage systems and services

HP’s fifth-generation Enterprise Virtual Array (EVA) family includes the new HP P6000 EVA, offering thin provisioning and Dynamic LUN Migration software along with 8 Gb Fibre Channel, 10 Gb iSCSI and Fibre Channel over Ethernet (FCoE) support. This enables users to consolidate application data to speed administration and reduce total cost of ownership.

HP Enterprise Services has integrated HP 3PAR Storage into the HP Data Center Storage Package, which offers storage management services that allow users to “flex” their storage needs up and down with changes in demand.

HP also offers new Storage Consulting Services that help users to design and deploy a Converged Storage environment, optimize the storage infrastructure, reduce costs, and protect and align data while preparing storage for cloud computing.

HP VirtualSystem

Virtualization technology has been widely adopted to consolidate servers, gain flexibility and optimize return on investment. However, virtualized environments can be difficult to plan, deploy and operate due to the proliferation of management tools, uncertain performance characteristics and unaddressed security concerns.

The new HP VirtualSystem portfolio, based on HP Converged Infrastructure, consists of turnkey server and client virtualization solutions. See more news on the HP AppSystems portfolio.

The offerings are built on the HP BladeSystem platform, HP LeftHand/3PAR storage and HP FlexFabric networking technologies. As a result, they support up to three times more virtual machines per server, three times the I/O bandwidth and twice the memory of competing offerings. Also, the HP VirtualSystem portfolio is heterogeneous, supporting existing IT investments, multiple hypervisor strategies and operating systems with a common architecture, management and security model.

In a world where enterprises must instantly react to changing markets, clients are turning to HP Converged Infrastructure to dramatically improve their agility.



Further, HP VirtualSystem provides a path to cloud computing by utilizing similar hardware infrastructure and management environments as HP CloudSystem. To extend their environments and evolve to cloud computing, users follow a simple, rapid upgrade process, said James Jackson, Vice President for Marketing Strategy, ESSN in HP Enterprise Business.

HP offers three scalable deployment systems for small, midsize and large enterprises. Each includes leading hypervisor technologies from Microsoft or VMware, as well as the leading operating systems and applications.

The open, modular design of HP VirtualSystem simplifies management with a single-pane-of-glass view into each layer of the virtualized stack. HP TippingPoint security can be added for comprehensive threat protection of both physical and virtual platforms.

Users can increase the availability, performance, capacity allocation and real-time recovery of their HP VirtualSystem solutions with HP SiteScope, HP Data Protector, HP Insight Control and HP Storage Essentials software extensions.

The Client Virtualization Reference Architecture for Enterprise, which includes Citrix or VMware software, and the HP Mission Critical Virtualization Reference Architecture complement the HP VirtualSystem solutions. The reference architecture resources contain a consistent set of architectural best practices, which enable users to rapidly deploy virtualized systems, improve security and performance, and reduce operating costs.

Life cycle support and consulting

To further reduce the complexity of virtual environments, HP Technology Services provides a full life cycle of consulting services, from strategy, assessment, design, test, implementation, training, and then transition to HP Solution Support for ongoing peace of mind.

HP VirtualSystem solutions are expected to begin shipping in the third quarter of this year. Availability of HP Client Virtualization Reference Architecture for Enterprise is expected in June and HP Mission Critical Virtualization Reference Architecture is expected in the third quarter of this year.

On-demand replays of the HP Discover press conference are available at www.hp.com/go/agileIT. Additional information about HP’s announcements at HP DISCOVER is available at www.hp.com/go/agility2011.


HP delivers applications appliance solutions that leverage converged infrastructure for virtualization, data management

LAS VEGAS -- As part of its new Converged Infrastructure offerings, HP today here at the DISCOVER 2011 event rolled out its AppSystems portfolio, which offers a fully integrated, appliance-like technology stack that includes hardware, management software, applications, tailored consulting and HP Solution Support services.

The new HP AppSystems portfolio is designed to improve application performance and reduce implementation from months to a matter of minutes. New application deployments can be complex, taking up to 18 months to roll out and optimize for the business.

The complexity of maintaining and integrating these environments often results in missed deadlines, incomplete projects, increased costs and lost opportunities. In fact, in a recent HP survey, only 32 percent of application deployments were rated as “successful” by organizations. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP AppSystem solutions can be rapidly deployed, support a choice of applications, and are built on open standards to integrate seamlessly within existing infrastructures. The portfolio includes the following:

The complexity of maintaining and integrating these environments often results in missed deadlines, incomplete projects, increased costs and lost opportunities.


  • The HP Business Data Warehouse Appliance, which reduces the complexities and costs faced by many midmarket users when deploying data warehouses. Optimized for Microsoft SQL Server 2008 R2, the system can be implemented 66 percent faster than competing solutions for less than $13,000 per terabyte. Jointly engineered with Microsoft, the solution results in up to 50 percent faster input/output bandwidth to speed data load and query response.
  • The HP Database Consolidation Solution, which simplifies the management of virtualized infrastructures associated with the proliferation of SQL Server databases. Optimized for Microsoft SQL Server 2008 R2, it consolidates hundreds of transactional databases into a single, virtual environment while enabling applications to access data. Once installed, new high-performance SQL Server databases can be provisioned in minutes and migrations can be accomplished with near-zero downtime.
  • The new HP VirtualSystem portfolio, also based on HP Converged Infrastructure, consists of turnkey server and client virtualization solutions. The offerings are built on the HP BladeSystem platform, HP Lefthand/3PAR storage and HP FlexFabric networking technologies. As a result, they support up to three times more virtual machines per server, three times the I/O bandwidth and twice the memory as competing offerings. Also, the HP VirtualSystem portfolio is heterogeneous, supporting existing IT investments, multiple hypervisor strategies and operating systems with a common architecture, management and security model.
These new solutions expand HP’s line of turnkey appliances, which also includes: HP Enterprise Data Warehouse Appliance and HP Business Decision Warehouse Appliance.

HP Technology Services also provides a full life cycle of consulting services, from strategy, assessment, design, test, implementation, training and HP Solution Support.

Vertica Analytics: Exadata Killer?

A key component of the new Converged Infrastructure offerings is the HP Vertica Analytics System, a potential Exadata killer for real-time Big Data analytics. Traditional relational database management systems (RDBMS) and enterprise data warehouse (EDW) systems were designed for the business needs of nearly 20 years ago. Today, vast amounts of structured and unstructured data are being created everywhere, every instant and from a variety of sources.

Built on the HP Converged Infrastructure, the new HP Vertica Analytics System provides an appliance-like, integrated technology stack that includes hardware, management software, applications, consulting and HP Solution Support services.

The HP Vertica Analytics System uses a scale-out cluster architecture with columnar storage and massively parallel processing (MPP). This enables users to load data up to 1,000 times faster than in traditional row-store databases and to scale to hundreds of nodes and petabytes of data without performance degradation, said HP. Further, because the system can query data directly in compressed form, clients can store more data, achieve faster results, and use less hardware.
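The column-store mechanics described here can be illustrated with a toy sketch: run-length encoding compresses a column of repeated values, and a simple query can be answered directly on the compressed runs without decompressing them. This is not Vertica's implementation, only an illustration of the principle, using made-up data.

```python
# Toy illustration of the column-store ideas above: run-length encoding
# (RLE) compresses a sorted column, and a query can be answered directly
# on the compressed form without decompressing every value.

def rle_encode(column):
    """Compress a column into [value, run_length] pairs."""
    runs = []
    for value in column:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

def count_equal(runs, target):
    """Count rows matching `target` straight from the compressed runs."""
    return sum(length for value, length in runs if value == target)

# A sorted 'region' column, as a columnar engine might store it.
region_column = ["east"] * 4 + ["north"] * 2 + ["west"] * 3
runs = rle_encode(region_column)

print(runs)                        # [['east', 4], ['north', 2], ['west', 3]]
print(count_equal(runs, "east"))   # 4 -- answered without decompression
```

Nine stored values shrink to three runs here; on real columns with long runs of repeated values, the same idea is what lets a column store hold more data and scan it faster.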

Only 32 percent of application deployments are rated as “successful” by organizations.



The HP Vertica Analytics System provides:
  • A data-compression technology that delivers a 50 to 90 percent reduction in database storage requirements with 12 separate compression algorithms.
  • An integrated, next-generation analytics database engine that can be deployed in minutes.
  • An optimized physical database design for user query needs with the Database Designer tool.
  • Faster response times, generating results in seconds rather than hours, in real time.
  • Ease of integration with existing business analytics applications, reporting tools and open source software frameworks that support data-intensive distributed applications, such as Apache Hadoop.
The HP Vertica Analytics System is available immediately in quarter-, half- and full-rack configurations. The HP Vertica Analytics Platform (software) may also be deployed on existing x86 hardware running the Linux operating system.

When HP acquired Vertica early in 2011, I wondered whether this was its path to an Exadata killer. Exadata, you may recall, was a joint warehouse appliance effort between Oracle and HP before Oracle bought Sun. The HP hardware part of the Exadata line fizzled out as Sun hardware was then used.

But now, Vertica plus HP converged infrastructure is architected to leverage in-memory data analytics of, for and by Big Data in the petabytes range. Oracle has its OLTP strengths, but for real-time analytics at scale and affordable cost, HP is betting big on Vertica. It's a critical element at the heart of HP’s growth strategy. These announcements around ease of deployment and support should go a long way to helping users explore and adopt it.

HP's Vertica real-time analytics platform already has more than 350 clients in a variety of industries, including finance, communications, online web and gaming, healthcare, consumer marketing and retail, said HP.

"We're winning deals against Exadata," said Martin Whittaker, Vice President for Systems and Solutions Engineering, Enterprise Servers, Storage and Networking (ESSN), HP Enterprise Business.


HP rolls out EcoPOD modular data center, provides high-density converged infrastructure with extreme energy efficiency

LAS VEGAS – HP today at Discover here unveiled what it says is the world’s most efficient modular data center, a compact and self-contained Performance Optimized Data Center (POD) that supports more than 4,000 servers in 10 percent of the space and with 95 percent less energy than conventional data centers.

The HP POD 240a also costs 25 percent of what traditional data centers cost up front, and it can be deployed in 12 weeks, said HP. It houses up to 44 industry standard racks of IT equipment.

The EcoPOD joins a spectrum of other modular data center offerings, filling a gap on the lower end of other PODs like the shipping-container-sized Custom PODs, the HP POD 20c - 40c, and the larger brick-and-mortar HP Flexible Data Center facilities. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The EcoPOD can be filled with HP blade servers and equipment, but also supports servers from third parties. It is optimized for HP Converged Infrastructure components, however. HP says the EcoPOD can be ordered and delivered in three months, and then requires just power and network connections to become operational.

The modular design, low capital and operating costs and rapid deployment will be of interest to cloud providers, Web 2.0 application providers, government and oil industry users. I was impressed by its potential for business continuity and disaster recovery. The design and attributes will also help organizations that need physical servers in a certain geography or jurisdiction for compliance and legal reasons, but at low cost despite the redundancy of the workloads.

The HP EcoPOD also provides maximum density for data center expansion or as temporary capacity during data center renovations or migrations, given that it streamlines a 10,000-square-foot data center into a compact, modular package in one-tenth the space, said HP.

The design allows for servers to be added and subtracted physically or virtually, and the cooling and energy use can be dialed up and down automatically based on load and climate, as well as via set policies. It can use outside air when appropriate for cooling ... like my house most of the year.

The HP POD 240a is complemented by a rich management capability, the HP EcoPOD Environmental Control System, which has its own APIs, remote dashboards and control suite, as well as remote client access from tablet computers, said HP.

The cost savings are eye-popping. HP says an HP POD 240a costs $552,000 a year to operate, versus $15.4 million in energy costs for a traditional data center.
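A quick back-of-the-envelope check shows those two figures are consistent with the 95 percent energy-savings claim cited earlier:

```python
# Sanity-check the energy-cost claim using HP's stated figures.
ecopod_annual_cost = 552_000          # HP POD 240a annual operating cost, per HP
traditional_annual_cost = 15_400_000  # traditional data center energy cost, per HP

savings = 1 - ecopod_annual_cost / traditional_annual_cost
print(f"{savings:.1%}")  # 96.4% -- in line with the "95 percent less" figure
```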

Built at a special HP facility in Houston, HP POD-Works, the EcoPODs will be available in Q4 of this year in North America, rolling out globally into 2012.

HP is also offering leasing arrangements, whereby the costs of the data center are all operating expenses, with little up-front cost.


Friday, June 3, 2011

MuleSoft takes full-service integration to the cloud with iON iPaaS ESB platform

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Join the iON beta program. Sponsor: MuleSoft.

Enterprise application integration (EAI) as a function is moving out of the enterprise and into the cloud. So-called integration platform as a service (iPaaS) has popped up on the edge of the enterprise. But true cloud integration as a neutral, full-service, and entirely cloud-based offering has been mostly only a vision.

Yet, if businesses need to change rapidly as the cloud era unfolds, to gain and use new partners and new services, then new and flexible integration capabilities across and between extended applications and services are essential.

The older point-to-point methods of IT integration, even for internal business processes, are slow, brittle, costly, complex and hard to manage. Into this opportunity for a new breed of cloud integration services steps MuleSoft, a market leading, open-source enterprise service bus (ESB) provider, which aims to create a true cloud integration platform called Mule iON. [Disclosure: MuleSoft is a sponsor of BriefingsDirect podcasts.]

MuleSoft proposes nothing short of an iPaaS service that spans software as a service (SaaS) to legacy, SaaS to SaaS, and cloud to cloud integration. In other words, all of the above, when it comes to integrations outside of the enterprise.

BriefingsDirect recently learned more about MuleSoft iON, how it works and its pending general availability in the summer of 2011. There's also the potential for an expanding iON marketplace that will provide integration patterns as shared cloud applications, with the likelihood of spawning constellations of associated business to business ecosystems.

Explaining the reality for a full-service cloud-based integration platform solution are two executives from MuleSoft, Ross Mason, Chief Technology Officer and Founder, and Ali Sadat, the Vice President of Mule iON at MuleSoft. They are interviewed by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: It strikes me that the number of integrations that need to be supported are further and further toward the edge -- and then ultimately outside the organization.

Like it or not, any company of any size has some if not most of its data now outside of the firewall.



Mason: We describe it internally as the center of enterprise gravity shifting. The web is the most powerful computing resource we’ve had in the information age, and it’s starting to drag data away from the enterprise and into the platform itself. What this means is that, like it or not, any company of any size has some, if not most, of its data now outside of the firewall.

I'm not talking about the Fortune 2000. They still have 95 percent of their data behind the firewall, but they’re also changing. But, for all of the enterprises and for forward-thinking CIOs, this is a very big and important difference in the way that you run your IT infrastructure and data management and security and everything else.

It turns a lot of things on its head. The firewall is constructed to keep everything within. What’s happening is the rest of the world is innovating at a faster speed and we need to take advantage of that inside enterprises in order to compete and win in our respective businesses.

There are a number of drivers in the marketplace pushing us toward integration as a service, and particularly iPaaS. First of all, if we look back 15 years, integration became a focal point for enterprises because applications were siloing their data in their own databases, and for businesses to be more effective, they had to get that data out of those silos and into a more operational context, where they could run extended business processes and so on.

What we're seeing with cloud, and in particular the new wave of SaaS applications, is that we're making a lot of the same mistakes and repeating the same behaviors we did 10 years ago in the enterprise. Every new SaaS application becomes a new data silo in the cloud, and it’s creating an integration challenge for anyone that has data across multiple SaaS providers.

New computing models

And it's not just SaaS. The adoption of SaaS is one key thing, but the adoption of cloud and hybrid computing models also means that our data no longer lives behind the firewall. Couple that with the drivers around mobile computing, which enable our workforce and consumers on the go, again outside of the firewall.

Add the new social media networks and you have a wealth of new information about your employees, customers, and consumers available through things like LinkedIn and Facebook. You've also got the big data explosion. The rise of things like Hadoop for managing unstructured data has meant that we end up pushing more data outside of our firewalls to third-party services that help us understand our data better.

There are four key drivers: the adoption of SaaS applications; the move to more cloud and hybrid models; mobile, which drives a need to have data outside of the enterprise; and social media together with big data, which are redefining where we find and how we read our information.

Gardner: It also appears that there will be a reinforcing effect here. The more that enterprises use cloud services, the more they’ll need to integrate. The more they integrate, the more capable and powerful the cloud services, and so on and so on. I guess we could anticipate a fairly rapid uptake in the need for these external integrations.

Mason: We think we might be a bit early in carving out the iPaaS market, but the response we're hearing, even from the largest organizations, is that most have lots of needs around cloud integration, even if it's just to help homogenize departmental applications. We’ve been blown away at MuleSoft by the demand for iON already. [Join the iON beta program.]

New open enterprises

The open-source model is absolutely critical, and the reason is that one of the biggest concerns for anyone adopting technology is: who am I getting into bed with? If I buy from Amazon, ultimately I'm getting into bed with Amazon and their whole computing model, and it’s not an easy thing to get out of.

With integration, it’s even more of a concern for people. We’ve lived through the vendor lock-in of the 1990s and 2000s, and people are a little bit gun-shy from the experiences they had with product vendors like Atria and IBM and Oracle.

When they start thinking about iPaaS or the cloud, having a platform that’s open and freely available, and that they can migrate on or off of and manage themselves, is extremely important. Open source, and particularly MuleSoft and the Mule ESB, provides that platform.

Gardner: Ali, how do you see iPaaS process enablement happening?

Sadat: It’s a pretty interesting problem. The patterns and the integrations that you need to do now are getting, in a sense, much more complex, and it’s definitely a challenge for a lot of folks to deal with.

We’re talking not only cloud-to-cloud or enterprise-to-enterprise, but now extending beyond the enterprise to various clouds, where data can flow either from the enterprise to the cloud or from the cloud to the enterprise. The problems are getting a little more challenging to solve.

The other thing that we’re seeing out there is that a lot of different application programming interfaces (APIs) are popping up -- more and more every day. There are all kinds of different technologies either being exposed to traditional web services or REST-based web services.

We’re seeing quite a few APIs. By some accounts, we're in the thousands or tens of thousands right now. These APIs are going to be exposed out there for folks who are trying to figure out how to integrate. [Join the iON beta program.]

Gardner: What do you propose for that?

Hybrid world

Sadat: It’s something of a hybrid world, and I think the answer is a hybrid model, but it needs to be very seamless from the IT perspective.

If I want to do a real-time integration between Salesforce and SAP, how do I enable that? If I poke holes in my firewall, that’s going to expose all kinds of security risks that my network security folks are not going to like. So how do I enable that? This is where iON comes into play.

We’re going to sit there in the cloud, open up a secure public channel where Salesforce can post events to iON, and then, via a secured connection back to the enterprise, we can deliver those events directly to SAP. We can do the reverse, too. This is something that the traditional TIBCOs and webMethods of the world weren’t designed to solve; they weren’t even thinking about this problem when they designed and developed those applications. [Join the iON beta program.]
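The relay pattern Sadat describes can be sketched minimally as follows, with an in-process queue standing in for the secure public channel. All names here are hypothetical, not iON's actual API; the key point is that the enterprise side receives events over a connection it initiated outbound, so no inbound firewall holes are needed.

```python
# Minimal sketch of the cloud-relay integration pattern (hypothetical names;
# not iON's actual API). The SaaS app posts events to a public endpoint in
# the cloud; the agent behind the firewall drains them over its own
# outbound connection, so nothing has to be opened inbound.
import queue

event_channel = queue.Queue()  # stands in for the cloud's secure public channel

def saas_webhook(event):
    """Called when the SaaS app (e.g. Salesforce) posts an event."""
    event_channel.put(event)

def enterprise_agent(deliver):
    """Runs inside the firewall; drains events via its outbound link."""
    while not event_channel.empty():
        deliver(event_channel.get())

# Simulate: Salesforce posts an account update; SAP's handler receives it.
saas_webhook({"object": "Account", "op": "update", "id": 42})
received = []
enterprise_agent(received.append)
print(received)  # [{'object': 'Account', 'op': 'update', 'id': 42}]
```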

The difference between integration running on-premise or in the cloud shouldn't matter as much, and the tooling should be the same. So the platform should support both cloud-based management and the management of integrations running in the enterprise with on-premise tools.

One of the things you’ll see about iON is a lot of familiar components. If you have been a Mule user or Mule ESB user, you will see Mule at the heart of iON itself. What we're providing now is the ability to deploy your integration applications to the cloud and manage them there, but we're also going to give you the capability to integrate back into the enterprise.

Gardner: Why not just use what Salesforce provides you and let that be the integration point? Why would you separate the integration cloud capability?

Sadat: Integration, as a whole, is much better served by a neutral party than by any one of the application vendors. You can certainly write custom code to do it, and people have been doing that, but they've seen over and over that it doesn’t necessarily work.

Having a neutral platform that speaks to APIs on both sides is very important. You’re not going to find Salesforce, for example, adopting SAP APIs, and vice versa. So, having that neutral platform is very important. Then, having that platform and being able to carry out all the kinds of different integration patterns that you need is also important.

We do understand the on-premise side of it. We understand the cloud side of the problem. We're in a unique position to bring those two together.

Having that platform and being able to carry out all the kinds of different integration patterns that you need is also important.



Gardner: Ross, please define for me what you consider the top requirements for this unique new neutral standalone integration cloud?

Mason: I'll start with the must-haves on the PaaS itself. In my mind the whole point of working with a PaaS is not just to do integration, but it’s for a provider, such as MuleSoft, to take all the headache and hard work out of the architecture as well.

For me, a true PaaS would allow a customer to buy a service level agreement (SLA) for their integration applications. That means not thinking about CPUs, architecture, I/O or memory usage, and instead just defining the kind of characteristics they want from their application. That would be my Holy Grail of why a PaaS is so much better.

For integration, you need that, plus deep expertise in integration itself. Ali just mentioned that people do a lot of their own point-to-point integrations, and SaaS providers do their own point integrations as well.

We spend a lot of money in the enterprise to integrate applications. You do want a specialist there, and you want someone who is independent and will adopt any API that makes sense for the enterprise in a neutral way.

We’re never going to be pushing our own customer relationship management (CRM) application. We're not going to be pushing our own enterprise resource planning (ERP). So, we’re a very good choice for being able to pull data from whichever application you're using. Neutrality is very important.

Hugely important

Finally, going back to the open-source thing again, open source is hugely important, because I want to know that if I build an integration on a Switzerland platform, I can still take that away and run it behind my firewall and still get support. Or, I just want to take it away and run it and manage it myself.

With iON, that’s the promise. You’ll be able to take these integration apps and the integration flows that you build, and run them anywhere. We're trying to be very transparent on how you can use the platform and how you can migrate on as well as off. That’s very important. [Join the iON beta program.]

Gardner: You came out on May 23 with the announcement of iON, describing what you intend.

Sadat: That’s correct. We started with our private beta, which is coming to an end. As you mentioned, we’re now releasing a public beta. Pretty much anybody can come in, sign up, and get going in a true cloud fashion. [Join the iON beta program.]

We're allowing ourselves a couple months before the general availability to take in feedback during the beta release. We’re going to be actively working with the beta community members to use the product and tell us what they think and what they'd like changed.

One of the other things we’re doing soon after the general availability is releasing a series of iON applications that we'll be building. These will serve both as ways to monetize certain integrations and as reference applications that partners and developers can look at, mimic, and build their own applications on top of.

Gardner: What is it they are going to get?

Sadat: At the core of it, they get Mule. That’s pretty essential, and there’s a whole lot of reasons why they do that. They get a whole series of connectors and various transports they can use. One of the things that they do get with iON is the whole concept of this virtual execution environment sitting in the cloud. They don’t have to worry about downloading and installing Mule ESB. It’s automatically provided. We'll scale it out, monitor it, and provide all that capability in the cloud for them.

They just need to focus on their application, the integration problems that they want to solve, and use our newly released Mule Studio to orchestrate these integrations in a graphical environment. Once they’re ready, they push it out to iON, and they execute it. They can then manage and monitor all the various flows that are going through the process.

The platform itself will have a pretty simple pricing model. It’s going to be composed of a couple of different dimensions. As you need to scale out your application, you can run more of these units of work. You'll be able to handle the volume and throughput that you need, but we're also going to be tracking events; in Mule terminology, an event is equivalent to a transaction. Platform users will be able to buy those in select quantities and then be charged for any overage they have.
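Although no actual rates are given here, the two-dimensional model Sadat describes (units of work to scale out, plus metered events with overage) can be sketched as follows; every quantity and price below is hypothetical, not MuleSoft's:

```python
# Hypothetical sketch of the two-dimension pricing model described above:
# units of work (workers) plus metered events with overage charges.
# All rates and bundle sizes are illustrative, not MuleSoft's.

def monthly_charge(workers, events, plan):
    """Compute a month's bill from units of work and metered events."""
    base = workers * plan["per_worker"]           # scale-out units of work
    included = workers * plan["events_included"]  # events bundled per worker
    overage = max(0, events - included)           # events beyond the bundle
    return base + overage * plan["per_extra_event"]

plan = {"per_worker": 200.0,        # $/month per unit of work (assumed)
        "events_included": 100_000, # events included per worker (assumed)
        "per_extra_event": 0.001}   # $ per event of overage (assumed)

# Two workers, 250,000 events: 50,000 events over the included 200,000.
print(monthly_charge(2, 250_000, plan))  # 400.0 base + 50.0 overage = 450.0
```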

Also, partners and ISVs today don’t have a whole lot of choices in terms of being able to build and embed OEM services in a cloud fashion into various applications or technologies that they are building. So, iON is going to provide that platform for them.

Embeddable platform

One of the key things about the platform itself is that it is very embeddable. Everything is going to be exposed as a series of APIs. SIs and SaaS providers can easily embed that in their own applications and even put their own UI on top of it, so that underneath it is iON, but on top it’s their own look and feel, seamlessly integrated into their own applications and solutions. This is going to be a huge part of iON.
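Embedding a platform behind your own UI via its APIs typically means your application builds REST calls on the user's behalf. The sketch below shows the shape of such a call; the endpoint path, payload fields, and token scheme are entirely hypothetical, since iON's actual API is not described in this post:

```python
# Sketch of embedding an iPaaS behind your own UI by building REST calls.
# The endpoint path and payload fields are hypothetical, not iON's
# documented API. The request is built but deliberately not sent.
import json
import urllib.request

def deploy_request(base_url, app_name, flow_definition, token):
    """Build (but don't send) a request to deploy an integration app."""
    body = json.dumps({"name": app_name, "flow": flow_definition}).encode()
    return urllib.request.Request(
        f"{base_url}/api/applications",   # hypothetical endpoint
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST")

req = deploy_request("https://ion.example.com", "order-sync",
                     "<flow/>", "demo-token")
print(req.full_url, req.get_method())
```

The embedding application owns the UI; the platform only ever appears as calls like this one underneath.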

Gardner: Looking at the future, how does the mobile trend in particular affect the need for a neutral third-party integration capability?

Mason: Mobile consumers are consuming data, basically. The mobile application model has changed, because now you get data from the server and you render on the device itself. That’s pretty different from the way we’ve been building applications up till fairly recently.

What that means is that you need to have that data available as a service somewhere for those applications to pick it up. An iPaaS is a perfect way of doing that, because it becomes the focal point where it can bring data in, combine it in different ways, publish it, scrub it, and push it out to any type of consumer. It's not just mobile, but it’s also point-of-sale devices, the browser, and other applications consuming that data.

Mobile is one piece, because it must have an API to grab the data from, but it’s not the only piece. There are lots of other embedded devices in cars, medical equipment, and everything else.
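The fan-in pattern Mason describes (bring data in from several systems, scrub and combine it, then publish one feed any consumer can render) can be outlined in a few lines. The source systems, field names, and join logic below are invented for illustration:

```python
# Illustrative fan-in: combine records from two hypothetical backend
# systems, scrub them into one schema, and publish as a single feed
# that any consumer (mobile app, POS device, browser) can render.
import json

def fetch_crm():       # stand-in for a CRM connector
    return [{"Id": "c1", "Name": "Acme", "Phone": "555-0100"}]

def fetch_erp():       # stand-in for an ERP connector
    return [{"cust_id": "c1", "credit_limit": 50000}]

def scrub(crm_rows, erp_rows):
    """Join on customer id and normalize field names into one schema."""
    limits = {r["cust_id"]: r["credit_limit"] for r in erp_rows}
    return [{"id": r["Id"], "name": r["Name"],
             "credit_limit": limits.get(r["Id"])} for r in crm_rows]

def publish():
    """One JSON feed: the data-as-a-service endpoint consumers hit."""
    return json.dumps(scrub(fetch_crm(), fetch_erp()))

print(publish())
```

In an iPaaS, the `fetch_*` stand-ins become managed connectors and `publish` becomes a hosted API, but the shape of the flow is the same.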

If you think about that web of devices, it needs to talk to a centralized location that belongs to no one enterprise. The enterprise needs to be able to share its data, through integration outside of its own firewall, in order to create these applications.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Join the iON beta program. Sponsor: MuleSoft.

You may also be interested in:

Wednesday, June 1, 2011

HP's IT Performance Suite empowers IT leaders with unified view into total operations, costs

The IT cobbler's kids finally have their own shoes.

Some 20 years into enterprise resource planning (ERP), whereby IT made business performance metrics and business intelligence (BI) a science for driving the modern corporation, ERP and BI for IT is finally at hand.

HP today unveiled a new suite of software designed to measure and improve IT performance at a comprehensive level via an IT system of record approach. The HP IT Performance Suite gives CIOs the ability to optimize application development, infrastructure and operations management, security, information management, financial planning, and overall IT administration.

The suite and its views into operations via accurate metrics, rather than a jumble of spreadsheets, also set up the era of grasping true IT costs at the business-process level, and therefore begin the empirical cost-benefit analysis needed to properly evaluate hybrid computing models. Knowing your current costs -- in total and by discrete domain -- allows executives to pick the degree to which SaaS, cloud, and outsourcing form the best bet for their company.

Included with the suite is an IT Executive Scorecard that provides visibility into critical performance indicators at cascading levels of IT leadership. It's founded on an open IT data model with built-in capabilities to integrate data from multiple sources, including third parties, to deliver a single, holistic view of ongoing IT metrics. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP has identified 150 standard, best-in-class key performance indicators (KPIs), 50 of which are included in the Executive Scorecard as a starting dashboard. KPIs are distributed in customizable dashboards that provide real-time, role-based performance insights to technology leadership, allowing alignment across common goals for an entire IT organization.
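The roll-up idea behind such a scorecard (raw project and financial records feeding a small set of role-based indicators) can be sketched briefly. The record fields and the two KPIs below are invented for illustration and are not drawn from HP's published set of 150:

```python
# Illustrative KPI roll-up: raw project records -> executive indicators.
# Field names and KPI definitions are hypothetical, not HP's.

projects = [
    {"name": "ERP upgrade", "budget": 500, "spent": 480, "late": False},
    {"name": "CRM rollout", "budget": 300, "spent": 350, "late": True},
    {"name": "Data mart",   "budget": 200, "spent": 190, "late": False},
]

def kpi_on_budget(rows):
    """Share of projects at or under budget."""
    return sum(r["spent"] <= r["budget"] for r in rows) / len(rows)

def kpi_on_time(rows):
    """Share of projects not flagged late."""
    return sum(not r["late"] for r in rows) / len(rows)

# The dashboard layer would render these per role; here we just collect them.
scorecard = {"on_budget": kpi_on_budget(projects),
             "on_time": kpi_on_time(projects)}
print(scorecard)
```

The value of a system of record is that the `projects` input comes from integrated operational data rather than hand-maintained spreadsheets.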

"With this we can leverage our human capital better," said Alexander Pasik, PhD, CIO at the IEEE, based in Piscataway, NJ. "The better automation you can apply to IT operations, the better. It frees us up to focus on the business drivers."

Lifecycle approach

The Performance Suite uses HP's lifecycle approach to software development and management and integrates industry standards such as ITIL.

The first solution to be offered in the suite is the CIO Standard Edition, which includes the Executive Scorecard, along with financial planning and analysis, project and portfolio management (PPM), and asset manager modules. This edition automatically integrates data from the modules to provide more than 20 best-practice KPIs covering financial and project health, enabling the optimization of IT performance from a business investment point of view.

"Our use of IT is about driving the actual business," said Pasik, who is adopting elements of the suite and looks forward to putting the scorecard to use soon. "We need to measure IT overall. We will have legitimate metrics on internal operations."

The scorecard can be very powerful at this time in computing, said Piet Loubser, HP senior director of product marketing, because the true capital expenses versus operations expenses for IT can be accurately identified. This, in turn, allows for better planning, budgeting and transitioning to IT shared services and cloud models. Such insight also allows IT to report to the larger organization with authority on its costs and value.

Using the scorecard, said Loubser, IT executives can quickly answer with authority two heretofore vexing questions: Is IT on budget, and is IT on time?

What's especially intriguing for me is the advent of deeper BI for IT, whereby data warehouses of the vast store of IT data can be assembled and analyzed. There is a treasure trove of data and insights into how businesses operate inside of the IT complex.

Applying BI best practices, pre-built data models, and ongoing reference metrics on business processes to the IT systems that increasingly reflect the business operations themselves portends great productivity and agility benefits. Furthermore, getting valid analysis on IT operations allows for far better planning on future data center needs, modernization efforts, application lifecycle management, and comparing and contrasting for hybrid model adoption ... or not.

For more information, visit the suite's HP website, www.hp.com/go/itperform.

The announcement of the HP IT Performance Suite comes less than a week before HP's massive Discover conference in Las Vegas, where additional significant news is expected. I'll be doing a series of on-site podcasts from the conference on HP user case studies and on the implications and analysis of the news and trends. Look for them on this blog or BriefingsDirect partner site.

You may also be interested in: