Wednesday, September 9, 2009

Harnessing enterprise clouds: Many technical underpinnings already reside in today's data centers

Listen to the podcast. Find it on iTunes/iPod. View a full transcript or download the transcript. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies, courtesy of Hewlett-Packard.

Our latest BriefingsDirect podcast uncovers how to quickly harness the technical benefits of current data centers for cloud computing approaches. We examine how enterprises are increasingly focused on delivery and consumption of cloud-based infrastructure and services.

The interest in cloud adoption is being fueled by economics, energy concerns, skills shortages, and complexity. Getting paybacks from cloud efforts early and often, including by bringing them on-premises, can help prevent missing the rewards of cloud models later through being unprepared or inexperienced now.

We expect that the way the clouds are built will be refined for more and more enterprises over time. The early goal is gaining the efficiency, control and business benefits of an everything-as-a-service approach, without the downside and risks.

Yet much of what makes the cloud tick is already being used inside of many data centers today. So now we'll examine how many of the technical underpinnings of cloud are available now for organizations to leverage in their in-house data centers, whether it’s moving to highly scalable servers and storage, deeper use of virtualization technologies, improved management and automation for elastic compute provisioning, or services management and governance expertise.

Here to help us better understand how to make the most of cloud technologies are four experts from Hewlett-Packard (HP): Pete Brey, worldwide marketing manager for HP StorageWorks group; Ed Turkel, manager of business development for HP Scalable Computing and Infrastructure; Tim Van Ash, director of software as a service (SaaS) products in the HP Software and Solutions group; and Gary Thome, chief strategist for infrastructure software and blades at HP. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Van Ash: When IT looks at becoming a service provider, technology is a key part of it, architecting yourself to be able to support the service levels around delivering a service, as opposed to some of the more traditional ways that we saw IT evolve. Then, applications were added to the environment, and the environment was expanded, but it wasn’t necessarily architected around the application.

When IT moves to a service provider role, it's as much about how they structure their organization to deliver those services as it is about the technology. That means not only having the operational teams that are running and supporting the application, but also having the customer-facing sides, who manage the business relationships, whether they be internal or external customers, and actually starting to run it as if it were a business.

... It’s also about realizing that it's not just a cost model, but it is very much a business model. That means you need to be actively out there recruiting new customers. You need to be out there marketing yourself. And, that’s one area that IT traditionally has been quite poor at -- recognizing how to structure themselves to deliver as a business.

The technology is really one of the key enablers that come into that and, more importantly, enables you to get scale and standardization across the board, because one of the issues that IT has traditionally faced is that often architecture is forced on them, based on the application selection by the business.

When you start to move into cloud environments, which feature, in many cases, high levels of virtualization, you start to decouple those layers, as the service provider has a much stronger control over what the architecture looks like across the different layers of the stack. This is really one of the areas where cloud [can] accelerate this process enormously.

Brey: Now, not only do you have your scale-out compute environments, you also need to pay attention to the storage piece of the equation and to delivering the platforms. The storage platforms need not only to scale into the petabyte range we talk about, but also to be very simple and easy to use, which will drive down your total cost of ownership and your administrative costs.

They also deliver a fundamentally new level of affordability that we have never really seen before in the storage marketplace in particular. This combination of things -- scalability, manageability, ease of use, and overall affordability -- is driving what I consider almost a revolution in the storage marketplace these days.

Turkel: In those [cloud] environments, the way that they look at management of the environment, the resilience or reliability of individual servers, storage, and so on, is done a little differently, partially because of the scale of the environments that they are creating.

If you look at many of the cloud providers, what they've done is they've implemented a great deal of resilience in their application environment, in a sense, moving the issues of resiliency away from the hardware and more into software. When you look at an environment that is as large as what they are doing, it's somewhat natural to expect that components of that environment will fail at some level of frequency.

Their software infrastructure has to be able to deal with that. ... The way that [enterprise IT] service -- and the way that they design -- the environment has to be somewhat similar to those cloud providers.
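The pattern Turkel describes -- assume individual components will fail at some frequency and absorb those failures in software -- can be sketched in a few lines. The following is an illustrative sketch only, not any provider's actual implementation; the replica names and the simulated fetch function are hypothetical:

```python
# Hypothetical replicas of one storage service; any node may be down at a given time.
REPLICAS = ["node-a", "node-b", "node-c"]

def fetch_from(node, key):
    """Simulated lookup: node-a is 'failed' here to mimic a hardware loss."""
    if node == "node-a":
        raise ConnectionError(f"{node} unreachable")
    return f"value-of-{key}@{node}"

def resilient_fetch(key, replicas=REPLICAS):
    """Try each replica in turn; tolerate individual failures in software."""
    errors = []
    for node in replicas:
        try:
            return fetch_from(node, key)
        except ConnectionError as err:
            errors.append(err)  # record the failure and move on to the next replica
    raise RuntimeError(f"all replicas failed: {errors}")

print(resilient_fetch("user-42"))  # succeeds despite node-a being down
```

The design choice mirrors the point above: no single node is trusted to stay up, so availability comes from the retry loop over replicas rather than from hardware resilience.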

Thome: When customers are thinking about going to a cloud infrastructure or shared-service model, they really want to look at how they are going to get a payback from that. They're looking at how they can get applications up and running much faster, and how they can do it with less effort and less time. They can then redirect administrative time from simply keeping basic operations going -- getting the applications and the infrastructure behind them up and running -- to doing more innovative things instead.

... Unlike the cloud that Ed was talking about earlier where they are able to put things like the resilience and scalability into the software, many enterprises don't own all their applications, and there are a variety of different applications on a variety of different operating systems.

So, they really need a more flexible platform that gives them an abstraction between the applications and the hardware itself. Products like BladeSystem Matrix, with technologies such as our Insight Orchestration and our Virtual Connect technology, allow customers to get that abstraction.

Customers are looking for those things, as well as the cloud model, a shared-services platform, to be able to get higher utilization out of the equipment.

Turkel: [The cloud] approach ... is much more of a holistic view of the IT environment and selling a broader solution than simply going in and selling a server with some storage and so on for a particular application. It tends to touch a broader view of IT, of the data center, and so on.

IT has to look at working with the CIO or senior staff within the enterprise IT infrastructure, looking fundamentally at how they change their model of how they deliver their own IT service to their internal customers.

Rather than just providing a platform for an application, they are looking at how they provide an entire service to their customer base by delivering IT as a service. It's fundamentally a different business model for them, even inside their own organizations.

... We're also seeing some interesting crossover from another part of our market that has been very traditionally a scale-out market. That's the high-performance computing (HPC) or technical computing market, where we are seeing a number of large sites that have been delivering technical computing as a service to their customers for some time, way back when they called it time sharing. Then, it became utility computing or grid, and so on.

Now, they're more and more delivering their services via cloud models. In fact, they're working very closely with us on a joint-research endeavor that we have between HP Labs, Yahoo, and Intel called the Cloud Computing Test Bed, more recently called the Open Cirrus Project.

Van Ash: The thing that we're seeing from our customers is how they extend enterprise control in the cloud, because cloud has the potential to be the new silo in the overall architecture. As you said, in a heterogeneous environment, you potentially have multiple cloud providers. In fact, you almost certainly will have a multi-sourced environment.

So, how do you extend the capabilities, the control, and the governance across your enterprise in the cloud to ensure that you are delivering the most agile and most cost-effective solution, whether it be in-house or leveraging cloud to accelerate those values?

What we're seeing from customers is a demand for existing enterprise tools to expand their role and to manage both private cloud and public cloud technologies.

... One of the most exciting examples that I have seen recently has been taking the enterprise technology around provisioning of both physical and virtual servers in a self-service and a dynamic fashion and taking it to the service provider.

Verizon recently announced one of their cloud offerings, which is Compute as a Service, and that's all based on the business service automation technology that was developed for the enterprise.

It was developed to provide data-center automation: provisioning, including dynamic provisioning, of physical and logical servers, networks, and storage, tying it all together through run book automation, through what we call Operations Orchestration.

Verizon has taken that technology and used that to build a cloud service that they are now delivering to their customers. So, we're seeing service providers adopting some of the existing enterprise technology, and really taking it in a new direction.
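Run book automation of the kind described above can be illustrated with a minimal sketch: an ordered list of provisioning steps executed in sequence against a shared context. This is not HP's Operations Orchestration or Verizon's actual service; the step names and context fields are invented for illustration:

```python
# A toy run book: provisioning steps run in order, each recording its result
# in a shared context. Step names and values are hypothetical.

def allocate_server(ctx):   ctx["server"] = "blade-07"
def attach_storage(ctx):    ctx["volume"] = "lun-3"
def configure_network(ctx): ctx["vlan"] = 110
def deploy_image(ctx):      ctx["image"] = "rhel-5.3"

RUN_BOOK = [allocate_server, attach_storage, configure_network, deploy_image]

def run(run_book):
    """Execute each step in order, tracking which steps completed."""
    ctx, done = {}, []
    for step in run_book:
        step(ctx)
        done.append(step.__name__)
    return ctx, done

ctx, done = run(RUN_BOOK)
print(done)  # the steps completed, in order
```

In a production orchestrator, each step would also carry a compensating undo action, so a failure midway could roll the environment back cleanly instead of leaving half-provisioned resources behind.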

So, while cloud is currently going in a very exciting direction, it really represents an evolution of many of the technologies that we at HP have focused on now for the last 20 years.

Tuesday, September 8, 2009

Whatever happened to the withering RIA market?

This guest post comes courtesy of ZapThink's senior analyst Ron Schmelzer.

By Ronald Schmelzer
Traditional market research focuses on the size and growth of well-defined market segments. As vendors enter and compete in those markets, customers participate by purchasing products and services within those segments, and market research seeks to establish the patterns of such transactions in order to predict the future trends for such markets.

In the information technology (IT) space, however, many markets are transitory in that as new technologies and behavior patterns emerge, what might formerly have been separate markets vying for customer dollars merge into a single market in order to address evolving customer needs. Over time these separately identifiable markets lose their distinct identity, as products and customer demand both mature. The Rich Internet Application (RIA) market is certainly no exception to this pattern of market behavior.

As we originally covered in a ZapFlash back in 2004, an RIA combines elements of rich user interactivity and client-side logic, once solely the domain of desktop and client/server applications, with the distributed computing power of the Internet. In essence, an RIA is a hybrid client-server/web application model that attempts to bridge the gap between those two computing approaches and address the limitations of each.

However, in the half-decade since that first report came out, it has become clear that the concept of RIA spans a gamut of applications, from those that have barely any richness to them at all at one extreme, to considerably rich and interactive applications that make use of a wide range of RIA capabilities at the other. From this perspective, it's evident that an application can have all of the characteristics of an RIA, none of them, or something in between, resulting in a spectrum of richly enabled applications.


From a service-oriented architecture (SOA) perspective, RIAs are simply the user interface to composite services. This is why we care about the RIA market: to the extent that organizations can abstract the presentation of their services from the composition of those services, and in turn from the implementation of the services, we can introduce greater flexibility into the sort of applications we deliver to the business without sacrificing functionality.

However, more importantly, as an increasing range of applications add richness to their capabilities, what it means to be an RIA is increasingly becoming blurry. At some point, won’t all Internet applications be rich, and all desktop applications become Internet-enabled? If so, then does it even matter if a separately discernable RIA market exists?

RIAs: The application boundary disappears

Macromedia, now part of Adobe Systems, introduced the RIA term in 2002 to delineate products that addressed the limitations at the time in the richness of application interfaces, media, and content available on the Internet. Today, RIAs comprise mostly Web-based applications that have some of the characteristics of desktop applications, where the RIA environment typically delivers application capabilities via Web browser plug-ins, native browser capabilities, or vendor-specific virtual machines. In the past few years, new RIA solutions have also emerged to provide desktop capabilities that leverage the same technologies available in Web applications.

In our recent Evolution of the Rich Internet Application Market report, we defined a classification system by which organizations can rate the richness of their applications along three axes:

  • Richness of Internet Capabilities – The extent to which the application or technology leverages the full functionality of the Internet.
  • Richness of User Interface – The extent to which the application or technology delivers interactive, deep, and broad user interface (UI) capabilities.
  • Richness of Client Capabilities – The extent to which the application offers client computing capabilities that utilize the local machine power, such as storing information locally, using local memory and disk storage, and shifting processing power to the desktop from the server.
The following is a visualization of the three axes and the scope of potential RIA solutions:

As can be gleaned from the above picture, there’s no sharp delineation between what can clearly be identified as an RIA and what cannot. As new technologies and patterns emerge that increase the capability of the web application, browser, and desktop, that delineation will continue to blur.
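One way to make the three-axis classification concrete is to score an application on each axis and treat overall richness as a point on a spectrum rather than a yes/no label. The axis names below come from the report; the example applications, scores, and averaging scheme are hypothetical, for illustration only:

```python
# Score an application 0.0-1.0 on each of the report's three axes.
# The example scores below are invented; real assessments would be qualitative.
AXES = ("internet_capabilities", "user_interface", "client_capabilities")

def richness(scores):
    """Overall richness as the mean of the three axis scores."""
    return sum(scores[a] for a in AXES) / len(AXES)

webmail_like = {"internet_capabilities": 0.9,   # deep use of network services
                "user_interface": 0.8,          # highly interactive UI
                "client_capabilities": 0.4}     # some local storage/processing

static_page = {"internet_capabilities": 0.3,
               "user_interface": 0.1,
               "client_capabilities": 0.0}

print(round(richness(webmail_like), 2))  # toward the RIA end of the spectrum
print(round(richness(static_page), 2))   # barely rich at all
```

Because the result is a continuous score rather than a category, the sketch mirrors the article's point: there is no sharp cutoff at which an application "becomes" an RIA.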

When Adobe acquired Macromedia in 2005, it also acquired a legacy that included Shockwave, Flash, and Flex. This legacy of RIA experience has culminated in the recent release of the Adobe Integrated Runtime (AIR), an RIA environment that facilitates the construction of browser-independent Web applications that have many of the features of desktop applications, including offline capabilities -- in other words, RIAs. The ubiquity of Adobe's Flash plug-in has helped to make the vendor a dominant player in the industry, even though it does not have its own browsers, operating systems, or general-purpose application development environments.

However, while Adobe is currently the biggest and most experienced vendor selling commercial RIA licenses, it faces serious challenges on multiple fronts, most notably from Microsoft. Microsoft's dominance in desktop and Internet application development, along with its commanding share of Web browsers and desktop operating systems, means that the introduction of its Silverlight offering should be taken seriously as a threat to Adobe's commanding share of the market. Also, at the end of 2008, Sun released JavaFX, its long-awaited entrant in the RIA race. The question remains, however, how the battle for the RIA space will be fought before it's absorbed into other markets.

In the past few years, an approach to RIA capabilities has emerged that utilizes native browser technology, most notably JavaScript, DHTML, and XML. These disparate approaches, collectively known as Ajax, have matured considerably since 2006 as browsers' standards compliance and JavaScript support have improved, diminishing the need for proprietary plug-ins to deliver RIA capabilities. Many of these Ajax-based RIA approaches are open-source offerings, and a few are commercial offerings from niche vendors.

The ZapThink take

As the line between browser-based and desktop-based applications blurs, and as approaches for abstracting functionality and information from user interfaces develop, other markets will eventually merge with the currently separately identifiable RIA market. Furthermore, as the Internet continues to penetrate every aspect of our lives, both business and personal, the distinction between “Internet application” and “application” will disappear, rich or not.

Earlier this year, ZapThink surveyed a number of enterprise end-users to obtain more information about the context for RIAs in their environments. The single consistent theme across these interviews is the enterprise context for RIAs. Because these practitioners are architects, their scope of interest covers the entire enterprise application environment, rather than usage of RIA for one specific application. Within this context, RIAs are the user interface component of broader enterprise applications.

For those architects who are implementing SOA, the RIA story focuses on the service consumer, which is the software that consumes services in the SOA context. Such consumers don’t necessarily have user interfaces, but when they do, RIAs typically meet the needs of the business more than traditional browser interfaces or desktop applications.

As a result, there is increasing demand for RIA capabilities in the enterprise, although people don't identify the applications that leverage such capabilities as RIAs. Rather, RIA capabilities are features of those applications. This further blurs the notion of a separately identifiable RIA market.

However, this dissolution of the RIA market as a separate market is still several years away, as all indications are that the RIA environments market in particular will continue to experience healthy growth for years to come.

This guest post comes courtesy of ZapThink's senior analyst Ron Schmelzer.