Wednesday, September 16, 2009

Jericho Forum aims to guide enterprises through risk mitigation landscape for cloud adoption

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: The Open Group.

My latest podcast discussion comes from The Open Group’s 23rd Enterprise Architecture Practitioners Conference and associated 3rd Security Practitioners Conference in Toronto.

We're talking about security in the cloud and decision-making about cloud choices for enterprises. There has been an awful lot of concern and interest in cloud and security, and they go hand in hand.

We'll delve into some early activities among several standards groups, including the Jericho Forum. They are seeking ways to help organizations approach cloud adoption with security in mind.

Here to help on the journey toward safe cloud adoption, we're joined by Steve Whitlock, a member of the Jericho Board of Management. The interview is conducted by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Whitlock: A lot of discussions around cloud computing get confusing, because cloud computing appears to encompass any service over the Internet. The Jericho Forum has developed what they call a Cloud Cube Model that looks at different axes, or properties, within cloud computing: issues with interoperability, where the data is, where the service is, and how the service is structured.

The Cube came with a focus on three dimensions: whether the cloud was internal or external, whether it was open or proprietary, and, originally, whether it was insourced or outsourced. ... There are a couple of other dimensions to consider as well. The insource-outsource question is still relevant. That’s essentially who is doing the work and where their loyalty is.

They've also coupled that with a layered model that looks at hierarchical layers of cloud services, starting at the bottom with file services and moving up through development services, and then full applications.
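As a rough sketch, the Cube's dimensions can be treated as three independent axes on which any cloud offering sits. The enum names and the `is_perimeterized` heuristic below are my own shorthand for illustration, not the Jericho Forum's official terminology:

```python
from dataclasses import dataclass
from enum import Enum

class Location(Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"

class Ownership(Enum):
    OPEN = "open"
    PROPRIETARY = "proprietary"

class Sourcing(Enum):
    INSOURCED = "insourced"
    OUTSOURCED = "outsourced"

@dataclass(frozen=True)
class CloudCubePosition:
    """One cell of the cube: where a service lives, who owns the tech, who runs it."""
    location: Location
    ownership: Ownership
    sourcing: Sourcing

    def is_perimeterized(self) -> bool:
        # Crude heuristic: only internal, insourced services still sit
        # inside the traditional network perimeter.
        return (self.location is Location.INTERNAL
                and self.sourcing is Sourcing.INSOURCED)

# A vendor-run public storage service sits at the far corner of the cube.
public_storage = CloudCubePosition(
    Location.EXTERNAL, Ownership.PROPRIETARY, Sourcing.OUTSOURCED)
assert not public_storage.is_perimeterized()
```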

The Jericho Forum made its name early on for de-perimeterization, the idea that barriers between you and your business partners were eroded by the level of connectivity you needed to do business. Cloud computing could be looked at as the ultimate form of de-perimeterization. You no longer know even where your data is.

... Similar to SOA, the idea of direct interactive services on demand is a powerful concept. I think the cloud extends it. If you look at some of these other layers, it extends it in ways where I think services could be delivered better.

It would be nice if the cloud-computing providers had standards in this area. I don’t see them yet. I know that other organizations are concerned about those. In general, the three areas concerned with cloud computing are, first, security, which is pretty obvious. Then, standardization. If you invest a lot of intellectual capital and effort into one service and it has to be replaced by another one, can you move all that to the different service? And finally, reliability. Is it going to be there when you need it?

... There are concerns, as I mentioned before -- where the data is and what is the security around the data -- and I think a lot of the cloud providers have good answers. At a really crude level, the cloud providers are probably doing a better job than many of the small non-cloud providers and maybe not as good as large enterprises. I think the issue of reliability is going to come more to the front as the security questions get answered.

It’s very important to be able to withdraw from a cloud service, if they shut down for some reason. If your business is relying on them for day-to-day operations, you need to be able to move to a similar service. This means you need standards on the high-level interfaces into these services. With that said, I think the economics will cause many organizations to move to clouds without looking at that carefully.

Formal relationship

The Jericho Forum is also working with the Cloud Security Alliance on their framework and papers. ... It's a very complementary [relationship]. They arose separately, but with overlapping individuals and interests. Today, there is a formal relationship. The Jericho Forum has exchanged board seats with the Cloud Security Alliance, and members of the Jericho Forum are working on several of the individual working groups in the Cloud Security Alliance, as they prepare their version 2.0 of their paper.

... In addition to the cube model, there is the layered model, and some layers are easier to outsource. For example, if it’s storage, you can just encrypt it and not rely on any external security. But, if it’s application development, you obviously can’t encrypt it because you have to be able to run code in the cloud.
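The encrypt-before-outsourcing pattern Whitlock describes can be sketched in a few lines: the key stays inside the enterprise and only ciphertext reaches the storage provider. The XOR stream cipher below is a toy, purely to illustrate the flow; a real deployment would use a vetted authenticated cipher such as AES-GCM from a proper cryptography library:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: the same call encrypts and decrypts.
    Do NOT reuse a key across blobs in real life; use AES-GCM instead."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)   # never leaves the enterprise
record = b"customer data the storage provider must not be able to read"
blob_for_cloud = xor_cipher(key, record)   # only this goes to the provider
assert xor_cipher(key, blob_for_cloud) == record
```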

I think you have to look at the parts of your business that are sensitive to needs for encryption, export protection, and other areas, and see which can fit in there. So, personally identifiable information (PII) might be an area that’s difficult to move into the cloud at the higher application levels.

I think the interest in how to protect data, no matter where it is, is what it really boils down to. IT systems exist to manipulate, share, and process data, and the reliance on perimeter security to protect the data hasn’t worked out, as we’ve tried to be more flexible.

We still don’t have good tools for data protection. The Jericho Forum did write a paper on the need for standards for enterprise information protection and control that would be similar to an intelligent version of rights management, for example.

Tuesday, September 15, 2009

Economic and climate imperatives combine to elevate Green IT as cost-productive priority

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Welcome to a podcast discussion on Green IT and the many ways to help reduce energy use, stem carbon dioxide creation, and reduce total IT costs -- all at the same time. We're also focusing on how IT can be a benefit to a whole business or corporate-level look at energy use.

We'll look at how current IT planners should view energy concerns, some common approaches to help conserve energy, and at how IT suppliers themselves can make "green" a priority in their new systems and solutions.

[UPDATE: HP on Wednesday released a series of products that help support these Green IT initiatives.]

[UPDATE 2: HP named "most green" IT vendor by Newsweek.]

Here to help us better understand the Green IT issues, technologies, and practices impacting today's enterprise IT installations and the larger businesses they support, we're joined by five executives from HP: Christine Reischl, general manager of HP's Industry Standard Servers; Paul Miller, vice president of Enterprise Servers and Storage Marketing at HP; Michelle Weiss, vice president of marketing for HP's Technology Services; Jeff Wacker, an EDS Fellow; and Doug Oathout, vice president of Green IT for HP's Enterprise Servers and Storage. The panel was moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Oathout: The current cost of energy continues to rise. The amount of energy used by IT is not going down. So, it's becoming a larger portion of their budget. ... [Executives] want to look at energy use and how they can reduce it, not only from a data center perspective, but also from consumption of the monitors, printers, and desktop PCs as well. So, the first major concern is the cost of energy to run IT.

[They also] want to extend the life of their data center. They don't want to have to spend $10 million, $50 million, or $100 million to build another data center in this economic environment. So, they want to know anything possible, from best practices to new equipment to new cooling designs, to help them extend the life of the data center.

Lastly, they're concerned with regulations coming into the marketplace. A number of countries already have demands on most of their major companies to reduce power consumption. There is a European Code of Conduct, which is optional for data centers, and the U.S. has cap-and-trade legislation now in front of Congress.

IT can multiply the effects of intelligence being built into the system. IT is the backbone of digitization of information, which allows smart business people to make good, sound decisions. ... This is a must-do. The business environment is saying, "You've got to reduce cost," and then the government is going to come in and say, "You're going to have to reduce your energy." So, this is a must-do.

Miller: One of the key issues is who owns the problem of energy within the business and within the data center. IT clearly has a role. The CFO has a role. The data center facilities manager has a role. ... You can't manage what you can't see. There are very limited tools today to understand where energy is being used, how efficient systems are, and how making changes in your data center can help the end customer.

Our expertise lies in knowing where and how changes to different equipment, different software models, and different service models can drive a significant impact on the amount of energy that customers are using, and also help them grow their capacity at the same time.

... Everyone needs an ROI that's as quick as possible. It's gone from 12 months down to 6 months. With our new ProLiant G6 servers, the cost and energy savings alone are so significant that, when you tie in technologies like virtualization and the power and performance we have, we're seeing ROI in as little as three months over older servers, as companies save on energy plus software costs.
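The payback arithmetic behind claims like these is simple: divide the upfront spend by the monthly savings. The figures below are hypothetical, purely to show the calculation:

```python
def payback_months(upfront_cost: float, monthly_savings: float) -> float:
    """Months until cumulative savings cover the upfront spend."""
    return upfront_cost / monthly_savings

# Hypothetical: a $6,000 server refresh that cuts energy plus
# software costs by $2,000 per month pays for itself in a quarter.
assert payback_months(6000.0, 2000.0) == 3.0
```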

Reischl: Well, we have been investing in that area for several years now. We will have an energy power cooling roadmap and we will continuously launch innovation as we go along. We also have an overall environment around power and cooling, which we call the Thermal Logic environment. Under this umbrella, we are not only innovating on the hardware side, but on the software side as well, to ensure that we can benefit on both sides for our customers.

In addition to that, HP ProCurve, for example, has switches that now use 40 percent less energy than industry average network switches. We also have our StorageWorks Enterprise Virtual Array, which reduces the cost of power and cooling by 50 percent using thin provisioning and larger capacity disks.

Weiss: IT tends to think in terms of a lifecycle. If you think about ITIL and all of the processes and procedures most IT people follow, they tend to be more process oriented than most groups. But, there is even more understanding now about that latter stage of the lifecycle and not just in terms of disposing of equipment.

The other area that people are really thinking about now is data -- what do you do at the end of the lifecycle of data? How do you keep the data around that you need to, and what do you do about data that you need to archive and maybe put on less energy-consuming devices? That's a very big area.

Wacker: [At EDS] we look for total solutions, as opposed to spot solutions, as we approach the entire ecology, energy, and efficiency triumvirate. It's all three of those things in one. It's not just energy. It's all three.

We look from the origination all the way through the delivery of the data in a business process. Not only do we do the data centers, and run servers, storage, and communications, but we also run applications.

Applications also rank high in determining whether IT is green or not. First of all, it means reconciling an application portfolio, so that you're not running three applications in three different places. That would mean three different server platforms and therefore more energy.

It's being able to understand the inefficiencies with which we've coded much of our application services in the past, and understanding that there are much more efficient ways to use the emerging technologies and the emerging servers than we've ever used before. So, we have a very high focus on building green applications and reconciling existing portfolios of applications into green portfolios.
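A back-of-the-envelope calculation shows why portfolio consolidation matters for energy. The wattage figure below is a hypothetical average for illustration, not an HP or EDS number:

```python
WATTS_PER_SERVER = 400        # hypothetical average draw per always-on server
HOURS_PER_YEAR = 24 * 365

def annual_kwh(n_servers: int) -> float:
    """Annual energy use of n always-on servers, in kilowatt-hours."""
    return n_servers * WATTS_PER_SERVER * HOURS_PER_YEAR / 1000

# Reconciling three duplicate deployments onto one platform:
saved = annual_kwh(3) - annual_kwh(1)
assert saved == 7008.0        # kWh per year saved, under these assumptions
```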

How you use IT

Moving on to the business processes, the best data delivered into the worst process will not improve that process at all. It will just extend it. Business process outsourcing, business process consulting, and understanding how you use IT in the business continue to have a very large impact on environmental and green outcomes.

You've already identified the major culprit in this: the cost of energy is going to continue to accelerate, becoming higher and higher, and therefore a major component of your cost structure in running IT. So everybody is looking at that.

Cloud is, by its definition, moving a lot of processes onto a very few number of boxes -- ultra virtualization, ultra flexibility. So it's a double-edged sword, and both sides have to be looked at. One is for you to be able to get the benefits of the cloud, but the other is to make sure that the cost of the cloud, both in terms of capabilities and the environment, is in your mindset as you contract.

One of the things about what has been called cloud or Adaptive Infrastructure is that you've got to look at it from two sides. For one, if you know where you're getting your IT from, you can ask that supplier how green its IT is, and hold that supplier to a high standard of green IT.

Active Endpoints debuts ActiveVOS 7.0 with BPMN 2 support, improved RIA interfaces

Take the BriefingsDirect middleware/ESB survey now.

In a move to meet the growing demand for business process agility, Active Endpoints is readying the next release of its business process management (BPM) suite. The Waltham, Mass.-based modeling tool and process execution firm is rolling out ActiveVOS 7.0 later this month, and I got a sneak peek last week.

Active Endpoints' value has long been modeling, testing, deploying, running and managing business process applications – both system and human tasks. But CEO Mark Taber says version 7 pioneers a new approach to BPM. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

“Enterprises are looking to a new generation of process applications to increase agility and improve efficiency. As attractive as building business process applications is, it has been hard for many organizations to do so because the tools have, until now, been too cumbersome, proprietary and expensive,” Taber said. “ActiveVOS 7.0 overcomes these challenges by being innovative, lean, open and affordable.”

What’s New in 7.0?

ActiveVOS 7.0 looks and feels different than its predecessors. For starters, the software has a new design canvas that uses the Business Process Modeling Notation (BPMN) 2.0 specification to create executable BPEL processes. On the innovation front, Active Endpoints points to “structured activities” that accelerate process modeling by offering time-saving drag-and-drop constructions.

In viewing a demo of ActiveVOS 7.0, I was struck by how the business analyst's needs are targeted visually, with a rich and responsive interface via the AJAX-based forms designer. The latest version uses a "fit" client approach, leveraging the better graphics and performance of an RIA. I also liked the ease of the process simulation and the improved dashboards and auditing.

Moving the presentation tier power from the server to client gives process designers more flexible access to services directly from forms. These forms can issue standard SOAP calls to access services. The result: end users have direct access to information critical to decision-making.
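A standard SOAP call from a client-side form reduces to posting an XML envelope to a service endpoint. The sketch below builds a minimal SOAP 1.1 envelope; the operation and parameter names are hypothetical, not part of ActiveVOS's actual API:

```python
def soap_envelope(operation: str, params: dict) -> str:
    """Build a minimal SOAP 1.1 request body (operation/params are hypothetical)."""
    body = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body><{operation}>{body}</{operation}></soap:Body>"
        "</soap:Envelope>"
    )

# A form would POST this string to the service endpoint with
# Content-Type: text/xml and an appropriate SOAPAction header.
request = soap_envelope("GetOrderStatus", {"orderId": "12345"})
assert "<GetOrderStatus><orderId>12345</orderId></GetOrderStatus>" in request
```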

Finally, Active Endpoints’ latest effort debuts ActiveVOS Central, a customizable application that consolidates user interaction with the BPM suite into a single user interface. There’s also support for continuous integration and permalinks for ActiveVOS forms.

Active Endpoints isn’t introducing bells and whistles for the sake of rolling out a new iteration. The company points to key benefits for companies that use version 7: reduced dependence on consultants, application delivery on schedule, and more protection for your investment. All of these features aim to improve productivity and quicken results.

As I told the crew at Active Endpoints: Gone are the days when productivity gains could be realized with a new, faster chip -- or a better, faster database. Instead, a "new" Moore’s Law has begun to take hold.

This new-era law declares that productivity today is better gained from improving business processes and the way human tasks and machine tasks are combined to rapidly improve results. Productivity needs to come from ongoing process innovation and refinement.

ActiveVOS 7.0 ships this month.


Monday, September 14, 2009

Open Group ramps up cloud and security activities as extension of boundaryless organization focus

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: The Open Group.

Standards and open access are increasingly important to users of cloud-based services. Yet security and control also remain top-of-mind for enterprises. How to make the two -- cloud and security -- work in harmony?

The Open Group is leading some of the top efforts to make cloud benefits apply to mission critical IT. To learn more about the venerable group's efforts I recently interviewed Allen Brown, president and CEO of The Open Group. We met at the global organization's 23rd Enterprise Architecture Practitioners Conference in Toronto.

Here are some excerpts:
Brown: We started off in a situation where organizations recognized that they needed to break down the boundaries between their organizations. They're now finding that they need to continue that, and that investing in enterprise architecture (EA) is a solid investment in the future. You're not going to stop that just because there is a downturn.

In fact, some of our members who I've been speaking to see EA as critical to ready their organization for coming out of this economic downturn.

... We're seeing the merger of the need for EA with security. We've got a number of security initiatives in areas of architecture, compliance, audit, risk management, trust, and so on. But the key is bringing those two things together, because we're seeing a lot of evidence that there are more concerns about security.

... IT security continues to be a problem area for enterprise IT organizations. It's an area where our members have asked us to focus more. Besides the obvious issues, the move to cloud does introduce some more security concerns, especially for the large organizations, and it continues to be seen as an obstacle.

On the vendor side, the cloud community recognizes they've got to get security, compliance, risk, and audit sorted out. That's the sort of thing our Security Forum will be working on. That provides more opportunity on the vendor side for cloud services.

... We've always had this challenge of how do we break down the silos in the IT function. As we're moving toward areas like cloud, we're starting to see some federation of the way in which the IT infrastructure is assembled.

As for the information, wherever it is, and whatever parts of it are delivered as a service, you've still got to be able to integrate it, pull it together, and have it in a coherent manner. You’ve got to be able to deliver it not as data, but as information to those cross-functional groups -- those groups within your organization that may be partnering with their business partners. You've got to deliver that as information.

The whole concept of Boundaryless Information Flow, we found, was even more relevant in the world of cloud computing. I believe that cloud is part of an extension of the way that we're going to break down these stovepipes and silos in the IT infrastructure and enable Boundaryless Information Flow to extend.

One of the things that we found internally, in moving from the business side of our architecture that the stakeholders understand to where the developers can understand it, is that you absolutely need the skill of being the person who does the translation. You can deliver to the business guys what it is you're doing in ways that they understand, but you can also interpret it for the technical guys in ways that they can understand.

As this gets more complex, we've got to have the equivalent of city-plan type architects, we've got to have building regulation type architects, and we've got to have the actual solution architect.

... We've come full circle. Now there are concerns about portability around the cloud platform opportunities. It's too early to know how deep the concern is and what the challenges are, but obviously it's something that we're well used to -- looking at how we adopt, adapt, and integrate standards in that area, and how we would look for establishing the best practices.

Wednesday, September 9, 2009

Harnessing enterprise clouds: Many technical underpinnings already reside in today's data centers

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Our latest BriefingsDirect podcast uncovers how to quickly harness the technical benefits of current data centers for cloud computing approaches. We examine how enterprises are increasingly focused on delivery and consumption of cloud-based infrastructure and services.

The interest in cloud adoption is being fueled by economics, energy concerns, skills shortages, and complexity. Getting the best paybacks from cloud efforts early and often, including by bringing them on-premises, can help organizations avoid missing the rewards of cloud models later by being unprepared or inexperienced now.

We expect that the way the clouds are built will be refined for more and more enterprises over time. The early goal is gaining the efficiency, control and business benefits of an everything-as-a-service approach, without the downside and risks.

Yet much of what makes the cloud tick is already being used inside of many data centers today. So now we'll examine how many of the technical underpinnings of cloud are available now for organizations to leverage in their in-house data centers, whether it’s moving to highly scalable servers and storage, deeper use of virtualization technologies, improved management and automation for elastic compute provisioning, or services management and governance expertise.

Here to help us better understand how to make the most of cloud technologies are four experts from Hewlett-Packard (HP): Pete Brey, worldwide marketing manager for HP StorageWorks group; Ed Turkel, manager of business development for HP Scalable Computing and Infrastructure; Tim Van Ash, director of software as a service (SaaS) products in the HP Software and Solutions group; and Gary Thome, chief strategist for infrastructure software and blades at HP. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Van Ash: When IT looks at becoming a service provider, technology is a key part of it, architecting yourself to be able to support the service levels around delivering a service, as opposed to some of the more traditional ways that we saw IT evolve. Then, applications were added to the environment, and the environment was expanded, but it wasn’t necessarily architected around the application.

When IT moves to a service provider role, it's as much about how they structure their organization to be able to deliver those services. That means being able to not only have the sort of operational teams that are running and supporting the application, but also have the customer-facing sides, who are managing the business relationships, whether they would be internal or external customers, and actually starting to run it as if it were a business.

... It’s also about realizing that it's not just a cost model, but it is very much a business model. That means you need to be actively out there recruiting new customers. You need to be out there marketing yourself. And, that’s one area that IT traditionally has been quite poor at -- recognizing how to structure themselves to deliver as a business.

The technology is really one of the key enablers that come into that and, more importantly, enables you to get scale and standardization across the board, because one of the issues that IT has traditionally faced is that often architecture is forced on them, based on the application selection by the business.

When you start to move into cloud environments, which feature, in many cases, high levels of virtualization, you start to decouple those layers, as the service provider has a much stronger control over what the architecture looks like across the different layers of the stack. This is really one of the areas where cloud [can] accelerate this process enormously.

Brey: Now, not only do you have your scale-out compute environments, but you also need to pay attention to the storage piece of the equation and to delivering the platforms. The storage platforms not only need to scale to the degree that we talk about, into the petabyte ranges, but they also need to be very simple and easy to use, which will drive down your total cost of ownership and your administrative costs.

They also deliver a fundamentally new level of affordability that we have never really seen before in the storage marketplace in particular. So this combination of things -- scalability, manageability, ease of use, and overall affordability -- is driving what I consider almost a revolution in the storage marketplace these days.

Turkel: In those [cloud] environments, the way that they look at management of the environment, the resilience or reliability of individual servers, storage, and so on, is done a little differently, partially because of the scale of the environments that they are creating.

If you look at many of the cloud providers, what they've done is they've implemented a great deal of resilience in their application environment, in a sense, moving the issues of resiliency away from the hardware and more into software. When you look at an environment that is as large as what they are doing, it's somewhat natural to expect that components of that environment will fail at some level of frequency.

Their software infrastructure has to be able to deal with that. ... The way that [enterprise IT] service -- and the way that they design -- the environment has to be somewhat similar to those cloud providers.
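The failure arithmetic Turkel alludes to is easy to make concrete: at cloud scale, component failure stops being an exception and becomes a base rate the software must plan for. The fleet size and failure rate below are hypothetical:

```python
def expected_failures(n_servers: int, annual_failure_rate: float) -> float:
    """Average number of server failures per year across the fleet."""
    return n_servers * annual_failure_rate

def prob_any_failure(n_servers: int, annual_failure_rate: float) -> float:
    """Chance that at least one server fails during the year."""
    return 1.0 - (1.0 - annual_failure_rate) ** n_servers

# Hypothetical fleet: 10,000 servers, each with a 2% annual failure rate.
# Roughly 200 failures per year are expected; some failure is a near-certainty,
# so resilience has to live in the software, not the individual box.
assert abs(expected_failures(10_000, 0.02) - 200.0) < 1e-9
assert prob_any_failure(10_000, 0.02) > 0.999
```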

Thome: When customers are thinking about going to a cloud infrastructure or shared-service model, they really want to look at how they are going to get a payback from that. They're looking at how they can get applications up and running much faster and also how they can do it with less effort and less time. They can redirect administrative time or people time from just simply getting the basic operations, getting the applications up and running, getting the infrastructure up and running for the applications, to doing more innovative things instead.

... Unlike the cloud that Ed was talking about earlier where they are able to put things like the resilience and scalability into the software, many enterprises don't own all their applications, and there are a variety of different applications on a variety of different operating systems.

So, they really need a more flexible platform that gives them an abstraction between the applications and the hardware itself. Products like BladeSystem Matrix, with technologies such as our Insight Orchestration and our Virtual Connect technology, allow customers to get that abstraction.

Customers are looking for those things, as well as the cloud model, a shared-services platform, to be able to get higher utilization out of the equipment.

Turkel: [The cloud] approach ... is much more of a holistic view of the IT environment and selling a broader solution, than simply going in and selling a server with some storage and so on for a particular application. It tends to touch a broader view of IT, of the data center, and so on.

IT has to look at working with the CIO or senior staff within the enterprise IT infrastructure, looking fundamentally at how they change their model of how they deliver their own IT service to their internal customers.

Rather than just providing a platform for an application, they are looking at how they provide an entire service to their customer base by delivering IT as a service. It's fundamentally a different business model for them, even inside their own organizations.

... We're also seeing some interesting crossover from another part of our market that has been very traditionally a scale-out market. That's the high-performance computing (HPC) or technical computing market, where we are seeing a number of large sites that have been delivering technical computing as a service to their customers for some time, way back when they called it time sharing. Then, it became utility computing or grid, and so on.

Now, they're more and more delivering their services via cloud models. In fact, they're working very closely with us on a joint-research endeavor that we have between HP Labs, Yahoo, and Intel called the Cloud Computing Test Bed, more recently called the Open Cirrus Project.

Van Ash: The thing that we're seeing from our customers is how they extend enterprise control in the cloud, because cloud has the potential to be the new silo in the overall architecture. As you said, in a heterogeneous environment, you potentially have multiple cloud providers. In fact, you almost certainly will have a multi-sourced environment.

So, how do you extend the capabilities, the control, and the governance across your enterprise in the cloud to ensure that you are delivering the most agile and the most cost-effective solution, whether it would be in-house or leveraging cloud to accelerate those values?

What we're seeing from customers is a demand for existing enterprise tools to expand their role and to manage both private cloud and public cloud technologies.

... One of the most exciting examples that I have seen recently has been taking the enterprise technology around provisioning of both physical and virtual servers in a self-service and a dynamic fashion and taking it to the service provider.

Verizon recently announced one of their cloud offerings, which is Compute as a Service, and that's all based on the business service automation technology that was developed for the enterprise.

It was developed to provide data-center automation, delivering dynamic provisioning of physical and logical servers, networks, and storage, and tying it all together through run book automation, through what we call Operations Orchestration.

Verizon has taken that technology and used that to build a cloud service that they are now delivering to their customers. So, we're seeing service providers adopting some of the existing enterprise technology, and really taking it in a new direction.

So, while cloud is currently going in a very exciting direction, it really represents an evolution of many of the technologies that we at HP have focused on now for the last 20 years.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Tuesday, September 8, 2009

Whatever happened to the withering RIA market?

This guest post comes courtesy of ZapThink's senior analyst Ron Schmelzer.

By Ronald Schmelzer
Traditional market research focuses on the size and growth of well-defined market segments. As vendors enter and compete in those markets, customers participate by purchasing products and services within those segments, and market research seeks to establish the patterns of such transactions in order to predict the future trends for such markets.

In the information technology (IT) space, however, many markets are transitory in that as new technologies and behavior patterns emerge, what might formerly have been separate markets vying for customer dollars merge into a single market in order to address evolving customer needs. Over time these separately identifiable markets lose their distinct identity, as products and customer demand both mature. The Rich Internet Application (RIA) market is certainly no exception to this pattern of market behavior.

As we originally covered in a ZapFlash back in 2004, an RIA combines elements of rich user interactivity and client-side logic once solely the domain of desktop and client/server applications with the distributed computing power of the Internet. In essence, an RIA is a hybrid client-server/web application model that attempts to bridge the gap between those two computing approaches and address the limitations of each.

However, in the half-decade since that first report came out, it has become clear that the concept of RIA spans a gamut of applications, from those that barely have any richness at all at one extreme to considerably rich and interactive applications that make use of a wide range of RIA capabilities at the other. From this perspective, it’s evident that an application can have all of the characteristics of an RIA, none of them, or somewhere in between, resulting in a spectrum of richly enabled applications.

From a service-oriented architecture (SOA) perspective, RIAs are simply the user interface to composite services. This is why we care about the RIA market: To the extent that organizations can abstract the presentation of their services from the composition of those services, and in turn from the implementation of the services, we can introduce greater flexibility into the sort of applications we deliver to the business without sacrificing functionality.

However, more importantly, as an increasing range of applications add richness to their capabilities, what it means to be an RIA is increasingly becoming blurry. At some point, won’t all Internet applications be rich, and all desktop applications become Internet-enabled? If so, then does it even matter if a separately discernable RIA market exists?

RIAs: The application boundary disappears

Macromedia, now part of Adobe Systems, introduced the RIA term in 2002 to delineate products that addressed the limitations at the time in the richness of application interfaces, media, and content available on the Internet. Today, RIAs comprise mostly Web-based applications that have some of the characteristics of desktop applications, where the RIA environment typically delivers application capabilities via Web browser plug-ins, native browser capabilities, or vendor-specific virtual machines. In the past few years, new RIA solutions have also emerged to provide desktop capabilities that leverage the same technologies available in Web applications.

In our recent Evolution of the Rich Internet Application Market report, we identified a classification system by which organizations can classify the richness of their applications according to three axes:

  • Richness of Internet Capabilities – The extent to which the application or technology leverages the full functionality of the Internet.
  • Richness of User Interface – The extent to which the application or technology delivers interactive, deep, and broad user interface (UI) capabilities.
  • Richness of Client Capabilities – The extent to which the application offers client computing capabilities that utilize the local machine power, such as storing information locally, using local memory and disk storage, and shifting processing power to the desktop from the server.
The following is a visualization of the three axes and the scope of potential RIA solutions:


As the visualization suggests, there’s no sharp delineation between what can clearly be identified as an RIA and what cannot. As new technologies and patterns emerge that increase the capabilities of the web application, browser, and desktop, that delineation will continue to blur.
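To make the three-axis model concrete, here is a minimal sketch of how an architect might position applications along the report's axes. The interface, the 0-to-1 scoring scale, and the example scores are our own illustration, not ZapThink's:

```typescript
// Hypothetical sketch of the three richness axes (0 = none, 1 = full richness).
// The scale and example scores are illustrative assumptions, not from the report.
interface RiaProfile {
  internetRichness: number; // leverage of the full functionality of the Internet
  uiRichness: number;       // interactive, deep, and broad UI capabilities
  clientRichness: number;   // local storage, memory, and client-side processing
}

// A simple aggregate: near 1, the app sits in the "rich" corner of the space;
// near 0, it is effectively a plain web page. Any weighting would do -- the
// point is that richness is a position on a spectrum, not a yes/no label.
function overallRichness(p: RiaProfile): number {
  return (p.internetRichness + p.uiRichness + p.clientRichness) / 3;
}

const plainWebPage: RiaProfile = {
  internetRichness: 0.6, uiRichness: 0.1, clientRichness: 0.0,
};
const desktopStyleRia: RiaProfile = {
  internetRichness: 0.9, uiRichness: 0.8, clientRichness: 0.7,
};

console.log(overallRichness(plainWebPage).toFixed(2));   // low: barely an RIA
console.log(overallRichness(desktopStyleRia).toFixed(2)); // high: clearly an RIA
```

The continuous score mirrors the article's point: there is no threshold at which an application suddenly "becomes" an RIA.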

When Adobe acquired Macromedia in 2005, it also acquired a legacy that included Shockwave, Flash, and Flex. This legacy of RIA experience has culminated in the recent release of the Adobe Integrated Runtime (AIR), an RIA environment that facilitates the construction of browser-independent Web applications that have many of the features of desktop applications, including offline capabilities -- in other words, RIAs. The ubiquity of Adobe’s Flash plug-in has helped to make the vendor a dominant player in the industry, even though it does not have its own browsers, operating systems, or general-purpose application development environments.

However, while Adobe is currently the biggest and most experienced vendor selling commercial RIA licenses, it faces serious challenges on multiple fronts, most notably from Microsoft. With the introduction of its Silverlight offering, Microsoft's dominance in desktop and Internet application development, as well as its commanding market share in Web browsers and desktop operating systems, makes it a serious threat to Adobe's leading position. At the end of 2008, Sun also released JavaFX, its long-awaited entrant in the RIA race. The question remains, however, how the battle for the RIA space will play out before the market is absorbed into others.

In the past few years, an approach to RIA capabilities has emerged that utilizes native browser technology, most notably JavaScript, DHTML, and XML. These disparate approaches, collectively known as Ajax, have matured considerably since 2006 as browsers' standards compliance and JavaScript support have improved, diminishing the need for proprietary plug-ins to provide RIA capabilities. Many of these Ajax-based RIA approaches are open source offerings, and a few are commercial offerings from niche vendors.
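As a rough illustration of the Ajax pattern described above, the core idea is an asynchronous browser request whose callback updates part of the page in place, with no plug-in and no full-page refresh. The endpoint URL, response shape, and element id below are hypothetical, not from any particular framework:

```typescript
// A JSON response shape we invent for the example.
interface StockQuote { symbol: string; price: number; }

// Pure helper: turn a JSON response body into display text.
// Kept separate from the I/O so it can be exercised on its own.
function renderQuote(json: string): string {
  const q: StockQuote = JSON.parse(json);
  return `${q.symbol}: $${q.price.toFixed(2)}`;
}

// Classic Ajax wiring via the browser-native XMLHttpRequest object.
function loadQuote(symbol: string): void {
  // Browser globals are looked up dynamically so the sketch also loads
  // outside a browser; in a real page you would use them directly.
  const XHR: any = (globalThis as any).XMLHttpRequest;
  const doc: any = (globalThis as any).document;

  const xhr = new XHR();
  xhr.open("GET", "/quotes?symbol=" + encodeURIComponent(symbol), true); // async
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Update a single element in place -- no full-page reload.
      doc.getElementById("quote").textContent = renderQuote(xhr.responseText);
    }
  };
  xhr.send();
}
```

This is exactly the "native browser capabilities" path: everything above ships with the browser itself, which is why maturing JavaScript support eroded the plug-in vendors' advantage.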

The ZapThink take

As the line between browser-based and desktop-based applications blurs, and as approaches for abstracting functionality and information from user interfaces develop, other markets will eventually merge with the currently separately identifiable RIA market. Furthermore, as the Internet continues to penetrate every aspect of our lives, both business and personal, the distinction between “Internet application” and “application” will disappear, rich or not.

Earlier this year, ZapThink surveyed a number of enterprise end-users to obtain more information about the context for RIAs in their environments. The single consistent theme across these interviews is the enterprise context for RIAs. Because these practitioners are architects, their scope of interest covers the entire enterprise application environment, rather than usage of RIA for one specific application. Within this context, RIAs are the user interface component of broader enterprise applications.

For those architects who are implementing SOA, the RIA story focuses on the service consumer, which is the software that consumes services in the SOA context. Such consumers don’t necessarily have user interfaces, but when they do, RIAs typically meet the needs of the business more than traditional browser interfaces or desktop applications.

As a result, there is increasing demand for RIA capabilities in the enterprise, although people don’t identify the applications that leverage such capabilities as RIAs. Rather, RIA capabilities are features of those applications. This further erodes the notion of a separately identifiable RIA market.

However, this dissolution of the RIA market as a separate market is still several years away, as all indications are that the RIA environments market in particular will continue to experience healthy growth for years to come.

This guest post comes courtesy of ZapThink's senior analyst Ron Schmelzer.

Friday, September 4, 2009

VMworld, Red Hat Summit news takes cloud computing beyond the hype curve

Three industry conferences this week -- one underlying theme: enterprise cloud computing.

If you could sum up VMworld 2009, the Red Hat Summit and JBoss World with one uber topic, cloud takes it -- which raises the question of whether the cloud hype curve has peaked yet.

Or more compelling yet, is the interest in cloud models more than just hype, more than a knee-jerk reaction to selling IT wares in a recession, more than an evolutionary step in the progression of networked computing?

Although the slew of announcements coming out of San Francisco and Chicago this week weren’t solely focused on the cloud, the pattern is unmistakable and could cause naysayers to think again.

It all started with VMworld on Monday. Dell and VMware took the stage to announce an expansion of their existing partnership where Dell will bundle VMware View as an option on some of its server and client platforms. The result: an end-to-end solution from the desktop to the data center as a foundation for cloud computing.

HP wouldn’t be excluded from the VMware announcement fray. VMware and HP took the cover off a solution that lets enterprises manage both physical and virtual infrastructures through the VMware vCenter console. The new HP Insight Control for VMware vCenter Server took center stage at the conference with a focus on tighter integration, simpler user experiences and greater control within virtualized environments. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Ones to Watch

In other cloud news, virtual machine management solutions firm VMLogix announced its LabManager Cloud Edition at VMworld. LabManager Cloud Edition lets software teams run virtual labs on the Amazon Elastic Compute Cloud (EC2).

Meanwhile, Zoho inked a deal with VMware to deliver private cloud software-as-a-service (SaaS) solutions for enterprise customers. F5 hooked up with VMware to make a way for companies to securely migrate to and from public or private clouds with no downtime or interruption. And 1,000-plus service providers – including AT&T, Verizon, and Terremark – are going to offer cloud services based on VMware’s Cloud OS.

Some newer names made some major announcements at VMworld. Virtustream announced it has raised $25 million in equity financing, validating the firm as a player in the enterprise cloud market with its strategy, integration and managed services offerings. And Mellanox Technologies and Intalio are ones to watch. The Intalio|Cloud Appliance, accelerated by Mellanox 40Gb/s InfiniBand, won the Best of VMworld 2009 award in the Cloud Computing Technologies category.

Reviewing the Red Hat Summit

Even as the cloud-oriented stories continue to emerge from VMworld 2009, we’re seeing some interesting cloud headlines coming out of the Red Hat Summit in Chicago, too. For the first time, Red Hat hosted the Summit and JBoss World together. But let’s take the news one at a time.

Perhaps the biggest Summit news on the cloud front is Red Hat and HP expanding their collaboration to drive the next generation of converged server, storage and networking infrastructure solutions. Red Hat Enterprise Linux 5.4 is now available on HP BladeSystem and HP ProLiant servers. The idea is to drive customers to virtualization and cloud computing.

Jumping into JBoss World

Red Hat also delivered on its JBoss Open Choice strategy during the Summit. The JBoss Enterprise Application Platform 5.0 is now available. It represents the next generation of Java platforms and will play a central role in Red Hat’s cloud foundation. This is significant because the JBoss Enterprise Application Platform is the first commercially available Java EE application server on Amazon's EC2.

Ingres sent a clear message that building open source Java applications in the cloud offers companies opportunities to lower costs without losing scalability or robustness. Suggesting that social networking platforms have become a new platform for developers to launch products and services, Ingres offered a look at how to use open source technologies on Facebook.

And on the entertainment front, DreamWorks Animation discussed how the company has leveraged cloud computing technologies to produce films like Antz, Shrek 2 and Madagascar, partnering with Red Hat and its open source technologies.

The cloud topic remains amorphous, and enterprises are only beginning to grapple with how to move to cloud adoption in ways that support their goals. But, riding the wave of virtualization and SOA adoption, both vendors and IT architects are treating cloud computing as far more than a passing fancy.

Many of the concepts first proposed and extolled during the Internet hype curve in the mid-1990s are now bearing fruit. Perhaps we should think of cloud computing less as a separate hype curve and more as the realization of the original Internet value curve, now some 15 years into its mainstream maturity.

(BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.)

Wednesday, September 2, 2009

Proper cloud adoption requires a governance support spectrum of technology, services, best practices

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

View a free e-book on HP SaaS and learn more about cost-effective IT management as a service.

It's hard to overestimate the importance of performance monitoring and governance in any move to cloud computing.

Yet most analysts expect cloud computing to become a rapidly growing affair. That is, infrastructure, data, applications, and even management itself, originating as services from different data centers, under different control, and perhaps different ownership.

What then becomes essential in effectively moving to cloud adoption is proper cross-organizational governance. There needs to be a holistic embrace of such governance -- with a full spectrum of technologies, services, best practices, and hosting options guidance -- to manage the complexity and relationships.

The governance strength will likely determine whether enterprises can actually harvest the expected efficiencies and benefits that cloud computing portends. [UPDATE: More cloud activities are spreading across the "private-public" divide, as VMware announced this week, upping the governance ante.]

To learn more on accomplishing such visibility and governance at scale and in a way that meets enterprise IT and regulatory compliance needs, I recently interviewed two executives from Hewlett-Packard's (HP's) Software and Solutions Group, Scott Kupor, former vice president and general manager of HP's software as a service (SaaS) operations, and Anand Eswaran, vice president of Professional Services.

Here are some excerpts:
Kupor: You hear people use lots of terms today about infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or SaaS. Our idea is that all these things ultimately are variants of cloud-based environments. ... So lots of customers are looking at things like Amazon EC2 or Microsoft's Azure as environments in which they might want to deploy an application.

But when you put your application out there you still care about how that application is going to perform. Is it going to be secure? What does it look like from an overall management and governance perspective? That's where, in that specific example, Cloud Assure can be very helpful, because essentially it provides that trust, governance, and audit of that application in a cloud-based environment.

Eswaran: If you look at today's IT environments, we hear of 79-85 percent of costs being spent on managing current applications versus the focus on innovation. What cloud does is basically take away the focus on maintenance and on just keeping the lights on.

When you view it from that perspective, the people who are bothered about, worried about, or excited about the cloud span the whole gamut. It goes from the CIO, who is looking at it from value -- how can I create value for my business and get back to innovation to make IT a differentiator for the business -- all the way down to people in the IT organization.

These are the apps leaders, the operations leaders, the enterprise architects, all of them viewing the cloud as a key way to transform their core job responsibilities from keeping the lights on to innovation.

In the context of that, cloud is going to be one of the principal enablers, where the customer or the organization can forget about technology so much, focus on their core business, and leverage the cloud to consume a service, which enables them to innovate in the core business in which they operate.

Once the IT organization is free to think about innovation, to think about what cutting-edge services it can provide to the business, the focus transforms from “how can I use technology to keep the lights on,” to “how can I use technology to be a market differentiator, to allow my organization to compete better in the marketplace.”

So given that, now the business user is going to see a lot better response times, and they are going to see a lot of proactive IT participation, allowing them to effectively manage their business better. The whole focus shifts, and that is the key. At the heart of it, this allows organizations to compete in the marketplace better.

Kupor: This is really what's interesting to us about cloud. We're seeing demand for cloud being driven by line-of-business owners today. You have a lot of line-of-business owners who are saying, "I need to roll out a new application, but I know that my corporate IT is constrained by either headcount constraints or other things in this environment, in particular."

We're seeing a lot of experimentation, particularly with a lot of our enterprise customers, from line-of-business owners essentially looking toward public clouds as a way for them to accelerate, to Anand's point, innovation and adoption of potentially new applications that might have otherwise taken too long or not been prioritized appropriately by the internal IT departments.

... The thing that people are worried about from an IT perspective in cloud is that they've lost some element of control over the application. ... In cloud now, what you've done is you've disintermediated the IT administrator from the application itself by having him access that environment publicly.

Things like performance now become critically important, as well as availability of the application, security, and how I manage data associated with those applications. None of those is a new problem. Those are all the same problems that existed inside the firewall, but now we've complicated that relationship by introducing a third party with whom the actual infrastructure for the application tends to reside.

Eswaran: What the cloud does is get you back to thinking about a shared service for the entire organization. Whether you think of shared service at an organizational level, which is where you start thinking about elements like the private cloud, or you think about shared applications, which are offered as a service in a publicly available domain including the cloud, it just starts to create exactly the word Scott used, a sense of disintermediation and a loss of control.

... HP Software has traditionally been a management vendor.

Historically, most of our customers have been managing applications that live inside the firewall. They care about things like performance, availability, and systems management.

What we've done with Cloud Assure is we've taken all of that knowledge and expertise that we've been working on for companies inside the firewall and have given those companies an opportunity to effectively point that expertise at an application that now lives in a third-party cloud environment.

... As a service, we can point that set of tests against an application running in an external environment and ensure the service levels associated with that application, just as they would if that application were running inside their firewall. It gives them holistic service-level management, independent of whether the application is running in a cloud or non-cloud environment.

Kupor: We don't expect customers to throw out existing implementations of successfully developed and running applications. What we do think will happen over time is that we will live in this kind of mixed environment. So, just as today customers still have mainframe environments that have been around for many years, as well as client-server deployments, we think we will see cloud applications start to migrate over time, but ultimately live in mixed environments.

... From an opinion point of view, we expect cloud to be a very big inflection point in technology. We think it's powerful enough to be second only to what we saw with the Internet as an inflection point.

This is not just one more technology fad, in our view. We've talked about one concept that is going to be the biggest business driver: utility-based computing, the ability for organizations to pay based on demand for computing resources, much as you pay a utility.
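As a hedged illustration of that utility model, metered billing amounts to multiplying consumption by a unit rate rather than amortizing fixed capacity. The rates and workload numbers below are invented for the sketch, not any provider's actual prices:

```typescript
// Utility-style billing sketch: pay only for what you consume.
// Unit rates are illustrative assumptions, not real provider pricing.
const RATE_PER_INSTANCE_HOUR = 0.10; // dollars per server-hour
const RATE_PER_GB_MONTH = 0.15;      // dollars per GB stored per month

function monthlyBill(instanceHours: number, storedGb: number): number {
  return instanceHours * RATE_PER_INSTANCE_HOUR + storedGb * RATE_PER_GB_MONTH;
}

// A quiet month: 2 servers for 100 hours each, 50 GB stored (about $27.50).
const quiet = monthlyBill(200, 50);
// A busy month scales the bill with demand -- there is no idle, pre-purchased
// capacity to pay for, which is the appeal for line-of-business owners.
const busy = monthlyBill(2000, 50);

console.log(quiet.toFixed(2), busy.toFixed(2));
```

The contrast with traditional IT is that the quiet-month bill shrinks automatically; with owned infrastructure, the capital cost is the same whether demand shows up or not.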
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

View a free e-book on HP SaaS and learn more about cost-effective IT management as a service.