Friday, September 4, 2009

VMworld, Red Hat Summit news takes cloud computing beyond the hype curve

Three industry conferences this week -- one underlying theme: enterprise cloud computing.

If you could sum up VMworld 2009, the Red Hat Summit and JBoss World with one uber topic, cloud takes it -- which begs the question of whether the cloud hype curve has yet peaked.

Or, more compelling still, is the interest in cloud models more than just hype and more than a knee-jerk reaction to selling IT wares in a recession -- is it, in fact, an evolutionary step in the progression of networked computing?

Although the slew of announcements coming out of San Francisco and Chicago this week wasn't solely focused on the cloud, the pattern is unmistakable and could cause naysayers to think again.

It all started with VMworld on Monday. Dell and VMware took the stage to announce an expansion of their existing partnership, under which Dell will bundle VMware View as an option on some of its server and client platforms. The result: an end-to-end solution from the desktop to the data center as a foundation for cloud computing.

HP wouldn’t be excluded from the VMware announcement fray. VMware and HP took the cover off a solution that lets enterprises manage both physical and virtual infrastructures through the VMware vCenter console. The new HP Insight Control for VMware vCenter Server took center stage at the conference with a focus on tighter integration, simpler user experiences and greater control within virtualized environments. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Ones to Watch

In other cloud news, virtual machine management solutions firm VMLogix announced its LabManager Cloud Edition at VMworld. LabManager Cloud Edition lets software teams run virtual labs on the Amazon Elastic Compute Cloud (EC2).

Meanwhile, Zoho inked a deal with VMware to deliver private cloud software-as-a-service (SaaS) solutions for enterprise customers. F5 teamed with VMware on a way for companies to securely migrate to and from public or private clouds with no downtime or interruption. And 1,000-plus service providers -- including AT&T, Verizon, and Terremark -- are going to offer cloud services based on VMware's Cloud OS.

Some newer names made some major announcements at VMworld. Virtustream announced it has raised $25 million in equity financing, validating the firm as a player in the enterprise cloud market with its strategy, integration and managed services offerings. And Mellanox Technologies and Intalio are ones to watch. The Intalio|Cloud Appliance, accelerated by Mellanox 40Gb/s InfiniBand, won the Best of VMworld 2009 award in the Cloud Computing Technologies category.

Reviewing the Red Hat Summit

Even as the cloud-oriented stories continue to emerge from VMworld 2009, we’re seeing some interesting cloud headlines coming out of the Red Hat Summit in Chicago, too. For the first time, Red Hat hosted the Summit and JBoss World together. But let’s take the news one at a time.

Perhaps the biggest Summit news on the cloud front is Red Hat and HP expanding their collaboration to drive the next generation of converged server, storage and networking infrastructure solutions. Red Hat Enterprise Linux 5.4 is now available on HP BladeSystem and HP ProLiant servers. The idea is to drive customers to virtualization and cloud computing.

Jumping into JBoss World

Red Hat also delivered on its JBoss Open Choice strategy during the Summit. The JBoss Enterprise Application Platform 5.0 is now available. It represents the next generation of Java platforms and will play a central role in Red Hat's cloud foundation. This is significant because the JBoss Enterprise Application Platform is the first commercially available Java EE application server on Amazon's EC2.

Ingres sent a clear message that building open source Java applications in the cloud offers companies opportunities to lower costs without losing scalability or robustness. Suggesting that social networking platforms have become a new platform for developers to launch products and services, Ingres offered a look at how to use open source technologies on Facebook.

And on the entertainment front, DreamWorks Animation discussed how the company has leveraged cloud computing technologies to produce films like Antz, Shrek 2 and Madagascar, partnering with Red Hat and its open source technologies.

The cloud topic remains amorphous, and enterprises are only beginning to grapple with how to approach cloud adoption in ways that support their goals. But, riding the wave of virtualization and SOA adoption, both vendors and IT architects are treating cloud computing as far more than a passing fancy.

Many of the concepts first proposed and extolled during the Internet hype curve in the mid-1990s are now bearing fruit. Perhaps we should think of cloud computing less as a separate hype curve, and more as the realization of the original Internet value curve, now some 15 years into its mainstream maturity.

(BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.)

Wednesday, September 2, 2009

Proper cloud adoption requires a governance support spectrum of technology, services, best practices

Listen to the podcast. Find it on iTunes/iPod. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

View a free e-book on HP SaaS and learn more about cost-effective IT management as a service.

It's hard to overestimate the importance of performance monitoring and governance in any move to cloud computing.

Yet most analysts expect cloud computing to become a rapidly growing affair. That is, infrastructure, data, applications, and even management itself, originating as services from different data centers, under different control, and perhaps different ownership.

What then becomes essential in effectively moving to cloud adoption is proper cross-organizational governance. There needs to be a holistic embrace of such governance -- with a full spectrum of technologies, services, best practices, and hosting options guidance -- to manage the complexity and relationships.

The governance strength will likely determine if enterprises can actually harvest the expected efficiencies and benefits that cloud computing portends. [UPDATE: More cloud activities are spreading across the "private-public" divide, as VMware announced this week, upping the governance ante.]

To learn more on accomplishing such visibility and governance at scale and in a way that meets enterprise IT and regulatory compliance needs, I recently interviewed two executives from Hewlett-Packard's (HP's) Software and Solutions Group, Scott Kupor, former vice president and general manager of HP's software as a service (SaaS) operations, and Anand Eswaran, vice president of Professional Services.

Here are some excerpts:
Kupor: You hear people use lots of terms today about infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or SaaS. Our idea is that all these things ultimately are variants of cloud-based environments. ... So lots of customers are looking at things like Amazon EC2 or Microsoft's Azure as environments in which they might want to deploy an application.

But when you put your application out there you still care about how that application is going to perform. Is it going to be secure? What does it look like from an overall management and governance perspective? That's where, in that specific example, Cloud Assure can be very helpful, because essentially it provides that trust, governance, and audit of that application in a cloud-based environment.

Eswaran: If you look at today's IT environments, we hear of 79-85 percent of costs being spent on managing current applications versus the focus on innovation. What cloud does is basically take away the focus on maintenance and on just keeping the lights on.

When you view it from that perspective, the people who are bothered about, worried about, or excited about the cloud span the whole gamut. It goes from the CIO, who is looking at it from value -- how can I create value for my business and get back to innovation to make IT a differentiator for the business -- all the way down to people in the IT organization.

These are the apps leaders, the operations leaders, the enterprise architects, all of them viewing the cloud as a key way to transform their core job responsibilities from keeping the lights on to innovation.

In the context of that, cloud is going to be one of the principal enablers, where the customer or the organization can forget about technology so much, focus on their core business, and leverage the cloud to consume a service, which enables them to innovate in the core business in which they operate.

Once the IT organization is free to think about innovation, to think about what cutting-edge services they can provide to the business, the focus then transforms from “how can I use technology to keep the lights on” to “how can I use technology to be a market differentiator, to allow my organization to compete better in the marketplace.”

So given that, the business user is now going to see a lot better response times, and they are going to see a lot of proactive IT participation, allowing them to effectively manage their business better. The whole focus shifts, and that is the key. At the heart of it, this allows organizations to compete in the marketplace better.

Kupor: This is really what's interesting to us about cloud. We're seeing demand for cloud being driven by line-of-business owners today. You have a lot of line-of-business owners who are saying, "I need to roll out a new application, but I know that my corporate IT is constrained by either headcount constraints or other things in this environment, in particular."

We're seeing a lot of experimentation, particularly with a lot of our enterprise customers, from line-of-business owners essentially looking toward public clouds as a way for them to accelerate, to Anand's point, innovation and adoption of potentially new applications that might have otherwise taken too long or not been prioritized appropriately by the internal IT departments.

... The thing that people are worried about from an IT perspective in cloud is that they've lost some element of control over the application. ... In cloud now, what you've done is you've disintermediated the IT administrator from the application itself by having him access that environment publicly.

Things like performance now become critically important, as well as availability of the application, security, and how I manage data associated with those applications. None of those is a new problem. Those are all the same problems that existed inside the firewall, but now we've complicated that relationship by introducing a third party with whom the actual infrastructure for the application tends to reside.

Eswaran: What the cloud does is get you back to thinking about a shared service for the entire organization. Whether you think of shared service at an organizational level, which is where you start thinking about elements like the private cloud, or you think about shared applications, which are offered as a service in a publicly available domain including the cloud, it just starts to create exactly the word Scott used, a sense of disintermediation and a loss of control.

... HP Software has traditionally been a management vendor.

Historically, most of our customers have been managing applications that live inside the firewall. They care about things like performance, availability, and systems management.

What we've done with Cloud Assure is we've taken all of that knowledge and expertise that we've been working on for companies inside the firewall and have given those companies an opportunity to effectively point that expertise at an application that now lives in a third-party cloud environment.

... As a service, we can point that set of tests against an application running in an external environment and ensure the service levels associated with that application, just as they would do if that application were running inside their firewall. It gives them holistic service-level management, independent of whether the application is running in a cloud or non-cloud environment.

Kupor: We don't expect customers to throw out existing implementations of successfully developed and running applications. What we do think will happen over time is that we will live in kind of this mixed environment. So, just as today customers still have mainframe environments that have been around for many years, as well as client-server deployments, we think we will see cloud applications start to migrate over time, but ultimately live in the concept of mixed environments.

... From an opinion point of view, we expect cloud to be a very big inflection point in technology. We think it's powerful enough to probably be the second, after what we saw with the Internet as an inflection point.

This is not just one more technology fad, according to us. We've talked about one concept, which is going to be the biggest business driver. It's utility-based computing, which is the ability for organizations to pay based on demand for computing resources, much as you pay a utility.
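The utility analogy is easy to model. As a rough sketch only -- the hourly rate, fixed cost, and usage figures below are invented for the illustration, not real cloud pricing -- the pay-on-demand trade-off looks like this:

```python
# Sketch: utility-style pay-per-use pricing versus a fixed, always-on server.
# All rates and usage figures are hypothetical, chosen only to illustrate the model.

HOURLY_RATE = 0.60          # $ per instance-hour, billed on demand (assumed)
FIXED_MONTHLY_COST = 400.0  # $ per month for dedicated, always-on capacity (assumed)

def utility_cost(instance_hours: float) -> float:
    """Pay only for what you consume, like an electricity bill."""
    return instance_hours * HOURLY_RATE

# A bursty workload -- 8 busy hours a day, 22 working days a month --
# consumes 176 instance-hours and costs far less than fixed capacity.
bursty_hours = 8 * 22
print(utility_cost(bursty_hours))                  # 105.6

# Above the break-even point, dedicated capacity wins again.
break_even_hours = FIXED_MONTHLY_COST / HOURLY_RATE
print(round(break_even_hours, 1))                  # 666.7
```

The point of the sketch is the shape of the decision, not the numbers: on-demand pricing favors bursty or unpredictable workloads, while steady, near-constant load can still justify owned capacity.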
Listen to the podcast. Find it on iTunes/iPod. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

View a free e-book on HP SaaS and learn more about cost-effective IT management as a service.

Tuesday, September 1, 2009

XDAS standard aims to empower IT audit trails from across complex events, perhaps clouds

Listen to the podcast. Find it on iTunes/iPod. View a full transcript or download the transcript. Learn more. Sponsor: The Open Group.

Welcome to the latest BriefingsDirect podcast discussion, recorded at The Open Group’s 23rd Enterprise Architecture Practitioners Conference and the associated 3rd Security Practitioners Conference in Toronto.

We're going to take a look at an emerging updated standard called XDAS, which looks at audit trail information from a variety of systems and software across the enterprise IT environment.

This is an emerging standard that’s being orchestrated through The Open Group, but it’s an open-source standard that is hopefully going to help in compliance and regulatory issues and in improving automation of events across heterogeneous environments. This could be increasingly important, as we get deeper into virtualization and cloud computing.

Here to help us drill into XDAS (see a demo now), we're joined by Ian Dobson, director of the Security Forum for The Open Group, as well as Joël Winteregg, CEO and co-founder of NetGuardians. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Dobson: We actually got involved way back in the '90s, in 1998, when we published the Distributed Audit Service (XDAS) Standard. It was, in many ways, ahead of its time, but it was a distributed audit services standard. Today’s audit and logging requirements are much more demanding than they were then. There is a heightened awareness of everything to do with audit and logging, and we see a need now to update it to meet today’s needs. So that’s why we've got involved now.

A key part of this is event reporting. Event reports come in all sorts of formats today, which makes them difficult to consume. Of course, events are generated so that they can be consumed in useful ways. So, we're aiming for the new XDAS audit standard to define an interoperable event-reporting format, so that events can be consumed equally by everybody who needs to know.

The XDAS standard developers are well aware of, and closely involved in, the related Common Event Expression (CEE) standard development activity at MITRE. MITRE's CEE standard has a broader scope than XDAS, and XDAS will fit very well into the Event Reporting Format part of CEE.

We are therefore also participating in the CEE standard development to achieve this and more, so as to deliver to the audit and logging community an authoritative single open standard that they can adopt with confidence.

Winteregg: My company is working in the area of audit event management. We saw that it was a big issue to collect all these different audit trails from each different IT environment.

We saw that, if it were possible to have a single, standard way to represent all this information, it would be much easier for IT users and security officers to analyze it, in order to find out what the exact issues are, to troubleshoot issues in the infrastructure, and so on. That’s a good basis for understanding what's going on across the company's whole infrastructure.

There is no uniform way to represent this information, and we thought that this initiative would be really good, because it will bring something uniform and universal that will help all the IT users to understand what is going on.

In distributed environments, it's really hard to track a transaction, because it starts on a specific component, then it goes through another one, and perhaps out to a cloud. You don’t know exactly where everything is happening. So, the only way to track these transactions, or to track accountability in such an environment, is through transaction identifiers and the like.

For auditors or administrators, it is really costly to understand this information and use it in order to get relevant information for management, to have metrics, and to understand what's really happening on the IT infrastructure.

Audit information deals a lot with the accountability of the different transactions in an enterprise IT infrastructure. Raw logs, which are mostly meant for debugging applications, may provide the size of buffers or the parameters of an application. Audit trails are much more business oriented. That means you will have a lot of accountability information. You will be able to track the who, the what, and the when in the whole IT infrastructure, which is really important these days with all these different regulations, like Sarbanes-Oxley (SOX) and the others.
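The "who, the what, and the when" framing maps naturally onto a normalized event record. As a hedged sketch only -- the field names and JSON shape below are illustrative assumptions, not the actual XDAS record format -- a uniform audit event that also carries a transaction identifier for cross-component tracking might look like this:

```python
# Sketch of a normalized audit event carrying the who/what/when of a transaction.
# Field names are illustrative assumptions -- NOT the actual XDAS record format.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    initiator: str        # who performed the action
    action: str           # what was done
    target: str           # what it was done to
    timestamp: str        # when it happened (ISO 8601, UTC)
    outcome: str          # success / denial / failure
    correlation_id: str   # ties together the hops of one distributed transaction

def emit(event: AuditEvent) -> str:
    """Serialize to one interoperable representation any consumer can parse."""
    return json.dumps(asdict(event), sort_keys=True)

event = AuditEvent(
    initiator="jdoe",
    action="account.update",
    target="customer/4711",
    timestamp=datetime.now(timezone.utc).isoformat(),
    outcome="success",
    correlation_id="txn-0042",
)
print(emit(event))
```

Because every component emits the same shape, a collector can correlate events on `correlation_id` to reconstruct one transaction's path across systems -- exactly the interoperability the standard is after.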

With a standard like XDAS, it will be much easier for a company to be in compliance with regulations, because there will be really clear and specific interfaces from all the different vendors to these generated audit trails.

The standard will be open, but there is already a Java implementation of it, a library called XDAS4J. This implementation is open source and business friendly. That means you can use it in proprietary software without then having to release your own software as open source. So, it is available for business software too, and all the code is open. You can modify it, look at it, and so on. It’s hosted on the Codehaus platform.

We're waiting for feedback from vendors and users about how easy it is to use, how helpful it is, and whether there are use cases where the scope is too wide or too narrow. We're open to every comment about the current standard.
Listen to the podcast. Find it on iTunes/iPod. View a full transcript or download the transcript. Learn more. Sponsor: The Open Group.

Monday, August 31, 2009

HP panel examines proper security hurdles on road to successful enterprise cloud computing adoption

Listen to the podcast. Find it on iTunes/iPod. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard.

The latest BriefingsDirect podcast focuses on exercising caution, overcoming fear, and the need for risk reduction on the road to successful cloud computing.

In order to ramp up cloud-computing use and practices, a number of potential security pitfalls need to be identified and mastered. Security, in general, takes on a different emphasis, as services are mixed and matched and come from a variety of internal and external sources.

So, will applying conventional security approaches and best practices be enough for low-risk, high-reward cloud computing adoption? Is there such a significant cost and productivity benefit to cloud computing that being late, or being unable to manage the risk, means being overtaken by competitors that can do cloud successfully? More importantly, how do companies know whether they are prepared to begin adopting cloud practices without undue risks?

To help better understand the perils and promises of adopting cloud approaches securely, I recently moderated a panel of three security experts from Hewlett-Packard (HP): Archie Reed, HP Distinguished Technologist and Chief Technologist for Cloud Security; Tim Van Ash, director of software-as-a-service (SaaS) products at HP Software and Solutions; and David Spinks, security support expert at HP IT Outsourcing.

Here are some excerpts:
Van Ash: Anything associated with the Internet today tends to be described as cloud in an interchangeable way. There's huge confusion in the marketplace, in general, as to what cloud computing is, what benefits it represents, and how to unlock those benefits.

... The [cloud] provider is committing to providing a compute fabric, but they're not committing, for the most part, to provide security, although there are infrastructure as a service (IaaS) offerings emerging today that do wrap aspects of security in there.

You see more responsibility put on the provider in the [platform as a service (PaaS)] environment, but all the classic application security vulnerabilities very much lie in the hands of the consumer or the customer who is building applications on the cloud platform.

With software-as-a-service (SaaS), more of the responsibility lies with the provider, because SaaS is really delivering capabilities or business processes from the cloud. But there are a number of areas the user is still responsible for -- user management, for example: ensuring that proper security models are in place, and that you're managing the entry and exit of users as they join or leave a business.

You're responsible for all the integration points that could introduce security vulnerabilities, and you're also responsible for the actual testing of those business processes to ensure that the configurations that you're using don't introduce potential vulnerabilities as well.

...Typically, what we see is that organizations often have concerns. They go through the fear, uncertainty, and doubt. They'll often put data out there in the cloud in a small department or team. The comfort level grows, and they start to put more information out there.

Reed: If you take the traditional IT department perspective of whether it's appropriate and valuable to use the cloud, and then you take the cloud security's perspective -- which is, "Are we trusting our provider as much as we need to? Are they able to provide within the scope of whatever service they're providing enough security?" -- then we start to see the comparisons between what a traditional IT department puts in play and what the provider offers.

For a small company, you generally find that the service providers who offer cloud services can generally offer -- not always, but generally -- a much more secure platform for small companies, because they staff up on IT security and they staff up on being able to respond to the customer requirements. They also stay ahead, because they see the trends on a much broader scale than a single company. So there are huge benefits for a small company.

But, if you're a large company, where you've got a very large IT department and a very large security practice inside, then you start to think about whether you can enforce firewalls and get down into very specific security implementations that perhaps the provider, the cloud provider, isn't able to do or won't be able to do, because of the model that they've chosen.

That's part of the decision process as to whether it's appropriate to put things into the cloud. Can the provider meet the level of security that you're expecting from them?

Spinks: We've just been reviewing a large energy client's policies and procedures. ... As you move out into an outsourcing model, where we're managing their technology for them, there are some changes required in the policies and procedures. When you get to a cloud services model, some of those policies, procedures, and controls need to change quite radically.

Areas such as audit compliance, security assurance, forensic investigations, the whole concept of service-level agreements (SLAs) in terms of specifying how long things take have to change. Companies have to understand that they're buying a very standard service with standard terms and conditions.

Pressure to adopt

Van Ash: Obviously, the current economic environment is putting a lot of pressure on budgets, and people are looking at ways in which they can continue to move their projects forward on investments that are substantially reduced from what they were previously doing.

But, the other reason people are looking at cloud computing is just agility, and both these aspects -- cost and agility -- are being driven by the business. These two factors coming from the business are forcing IT to rethink how they look at security and how they approach security when it comes to cloud, because you're now in a position where much of your intellectual property, and many of your data and information assets, are no longer within your direct control.

So what are the capabilities that you need to mature in terms of governance, visibility, and audit controls that we were talking about, how do you ramp those up? How do you assess partners in those situations to be able to sit down and say that you can actually put trust into the cloud, so that you've got confidence that the assets you're putting in the cloud are safeguarded, and that you're not potentially threatening the overall organization to achieve quick wins?

The challenge is that the quick wins that the business is driving for could put the business at much longer-term risk, until we work out how to evolve our security practices across the board.

Spinks: ... The business units are pushing internally to get to use some cloud service that they've seen out there. A lot of companies are finding that their IT organizations are not responding fast enough such that business units are just going out there directly to a cloud services provider.

They're in a situation where the advice is, either ride the wave or get dumped. The business wants to utilize these environments -- the fast development testing and launch of new services, new software-related solutions, whatever they may be -- and cloud offers them an opportunity to do that quickly, at low cost, unlike the traditional IT processes.

Reed: ... What we need to do is take some of that traditional security-analysis approach, which ultimately we describe as just a basic risk analysis. We need to identify the value of this data -- what are the implications if it gets out and what's the value of the service -- and come back with a very simple risk equation that says, "Okay, this makes sense to go outside."

... There are certain things where you may say, "This data, in and of itself, is not important, should a breach occur. Therefore, I'm quite happy for it to go out into the cloud." ... Generally, when we talk to people, we come back to the risk equation, which includes, how much is that data worth ... and what is the value of the services being provided. That helps you understand what the security risk will be.
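Reed's "very simple risk equation" can be sketched numerically. The figures and the decision rule below are invented for illustration -- a real assessment would weigh many more factors -- but they show the shape of the trade-off:

```python
# Sketch of the basic risk equation Reed describes: weigh the value of the data
# (and the odds of a breach) against the value of running the service in the cloud.
# All numbers here are hypothetical, chosen only for illustration.

def cloud_risk_score(data_value: float, breach_likelihood: float) -> float:
    """Expected loss if the data goes outside: value at stake x likelihood."""
    return data_value * breach_likelihood

def move_to_cloud(data_value: float, breach_likelihood: float,
                  service_benefit: float) -> bool:
    """Go to the cloud only when the service's benefit outweighs the expected loss."""
    return service_benefit > cloud_risk_score(data_value, breach_likelihood)

# Low-value marketing data, modest breach odds, clear cost savings: go.
print(move_to_cloud(data_value=10_000, breach_likelihood=0.05, service_benefit=5_000))

# Crown-jewel customer records: the same savings don't justify the exposure.
print(move_to_cloud(data_value=1_000_000, breach_likelihood=0.05, service_benefit=5_000))
```

The first call prints `True`, the second `False`: the same service benefit clears the bar for low-value data but not for high-value data, which is exactly the triage Reed describes.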

Next big areas

Spinks: The big areas that I believe will be developed over the next few years, in terms of ensuring we take advantage of these cloud services, are twofold. First, more sophisticated means in data classification. That's not just the conventional, restricted, confidential-type markings, but really understanding, as Archie said, the value of assets.

But, we need to be more dynamic about that, because, if we take a simple piece of data associated with the company's annual accounts and annual performance, prior to release of those figures, that data is some of the most sensitive data in an organization. However, once that report is published, that data is moved into the public domain and then should be unclassified.

We need not just management processes and data-classification processes; these need to be much more responsive and proactive, rather than simply reacting to the latest security breach. As we move this forward, there will be increased attention to more sophisticated risk-management tools, methodologies, and processes, in order to make sure that we take maximum advantage of cloud services.
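Spinks' annual-report example boils down to classification as a function of state rather than a fixed label. A minimal sketch -- the labels and the single rule here are assumptions for illustration, not a prescribed scheme:

```python
# Sketch: data classification that responds to lifecycle events, per Spinks'
# annual-report example. The labels and the rule are illustrative only.
from datetime import date

def classify_annual_report(today: date, release_date: date) -> str:
    """Before the figures are released they are among the most sensitive data
    in the organization; once published, the same data is public domain."""
    return "restricted" if today < release_date else "public"

release = date(2009, 10, 1)
print(classify_annual_report(date(2009, 9, 15), release))  # restricted
print(classify_annual_report(date(2009, 10, 2), release))  # public
```

A dynamic scheme like this is what lets the same record move to a cheaper, less controlled cloud tier the moment its sensitivity lapses, instead of carrying a stale "confidential" marking forever.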

Efforts under way

Reed: There are efforts under way. There are things, such as the Jericho Forum, which is now part of The Open Group. A group of CIOs and the like got together and said, "We need to deal with this and we need to have a way of understanding, communicating, and describing this to our constituents."

They created their definition of what cloud is and what some of the best practices are, but they didn't provide full guidelines on how, why, and when to use the cloud -- nothing I would really call a standard.

There are other efforts, put out by or being worked on today by the National Institute of Standards and Technology, that are primarily focused on the U.S. public sector but generally available once published. But, again, that's something that's in progress.

The closest thing we've got, if we want to think about the security aspects of the cloud, comes from the Cloud Security Alliance, a group that was formed by interested parties. HP supported its founding and actually contributed to its initial guidelines.

... If we're looking for standards, they're still in the early days, they're still being worked on, and there are no, what I would call, formal standards that specifically address the cloud. So, my suggestion for companies is to take a look at the things that are under way and start to draw out what works for them, but also get involved in these sorts of things.

... We [at HP] also have a number of tools and processes based on standards initiatives, such as Information Security Service Management (ISSM) modeling tools, which incorporate inputs from standards such as the ISO 27001 and SAS 70 audit requirements -- things like the payment card industry (PCI), Sarbanes-Oxley (SOX), European Data Privacy, or any national or international data privacy requirements.

We put that into a model, which also takes inputs from the infrastructure that's being used, as well as input based on interviews with stakeholders to produce a current state and a desired or required state model. That will help our customers decide, from a security perspective at least, what do I need to move in what order, or what do I need to have in place?

That is all based on models, standards, and things that are out there, regardless of the fact that cloud security itself and the standards around it are still evolving as we speak.

Van Ash: We do provide a comprehensive set of consulting services to help organizations assess and model where they are, and build out roadmaps and plans to get them to where they want to be.

One of the offerings that we've launched recently is Cloud Assure. Cloud Assure is really designed to deal with the top three concerns the enterprise has in moving into the cloud.
Listen to the podcast. Find it on iTunes/iPod and View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at

Harnessing 'virtualization sprawl' requires managing an ecosystem of technologies, suppliers

Better managing server virtualization expansion across enterprises has become essential if the benefits of virtualization are to be preserved and enhanced at scale. I recently had a chance to examine ways that IT organizations can adopt virtualization at deeper levels, or across more systems, data and applications -- but at lower risk.

As more enterprises use virtualization for more workloads to drive productivity through higher server utilization, we often see what can be called virtualization sprawl -- a spreading mixture of hypervisors that leads to complexity and management concerns.

To ramp up to more -- yet still advantageous -- use of virtualization, the pitfalls of heterogeneity need to be managed well. Yet none of the hypervisor suppliers is likely to deeply support any of the others.

So how do companies gain a top-down perspective of virtualization to encompass and manage the entire ecosystem, rather than just corralling the individual technologies? To better understand the risks of hypervisor sprawl and how to mitigate the pitfalls to preserve the economic benefits of virtualization, I recently interviewed Doug Strain, manager of Partner Virtualization Marketing at HP.

Here are some excerpts:
Strain: Virtualization has been growing very steeply in the last few years anyway, but with the economy, the economic reasons for it are really changing. Initially, companies were using it to do consolidation. They continue to do that, but now the big deal, with the economy, is consolidating to lower cost -- not only capital cost, but also operating expenses.

... There’s a lot of underutilized capacity out there, and, particularly as companies are having more difficulty getting funding for more capital expenses, they’ve got to figure out how to maximize the utilization of the capacity they’ve already bought.

We’re seeing a little bit of a consolidation in the market, as we get to a handful of large players. Certainly, VMware has been early on in the market, has continued to grow, and has continued to add new capabilities. It's really the vendor to beat.

Of course, Microsoft is investing very heavily in this, and we’ve seen fairly good demand from customers for Hyper-V. And, with some of the things that Microsoft has already announced in their R2 version, they’re going to continue to catch up.

We’ve also got some players like Citrix, who really leverage their dominance in the Presentation Server -- now XenApp -- market and use that as a great foot in the door for virtualization.

Strain: Because all the major vendors now have free hypervisor capabilities, it becomes so easy to virtualize, number one, and so easy to add additional virtual machines, that it can be difficult to manage if technology organizations don’t do that in a planned way.

Most of the virtualization vendors do have management tools, but those tools are really optimized for their particular virtualization ecosystem. In some cases, there is some ability to reach out to heterogeneous virtualization, but it’s clear that that’s not a focus for most of the virtualization players. They want to really focus on their environment.

The other piece is that hardware management is critical here. An example would be, if you’ve got a server that is having a problem, that could very well introduce downtime. You've got to have a way of migrating the virtual machines, so that they are moved off of that server.

That’s really an area where HP has tried to invest, pulling all that together: being able to do the physical management with our Insight Control tools, and then tying that into the virtualization management with multiple vendors, using Insight Dynamics – VSE. ... We think that having tools that work consistently in both physical and virtual environments, and allow you to easily transition between them, is really important to customers.
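The health-driven migration Strain describes -- detecting a failing host and moving its virtual machines before downtime hits -- can be sketched in miniature. The `Host` class, the `healthy` flag, and the placement policy below are hypothetical placeholders; real tools such as HP Insight Control or VMware's live migration expose this through their own APIs.

```python
# Minimal sketch of health-driven VM evacuation: move every VM off a
# failing host onto the least-loaded healthy host.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    healthy: bool = True
    vms: list = field(default_factory=list)

def evacuate(failing: Host, candidates: list) -> dict:
    """Relocate all VMs from a failing host, returning a
    {vm: destination-host-name} placement map."""
    targets = [h for h in candidates if h.healthy and h is not failing]
    if not targets:
        raise RuntimeError("no healthy host available for migration")
    placement = {}
    for vm in list(failing.vms):
        dest = min(targets, key=lambda h: len(h.vms))  # simple load spread
        failing.vms.remove(vm)
        dest.vms.append(vm)
        placement[vm] = dest.name
    return placement
```

In production, the placement decision would weigh CPU, memory, and storage headroom rather than a bare VM count, but the shape of the problem is the same.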

There are a lot of ways that you can plan ahead on this, and be able to do this in a way that you don't have to pay a penalty later on.

Capacity assessment

It could be something as simple as doing a capacity assessment, a set of services that goes in and looks at what you’ve got today, how you can best use those resources, and how those can be transitioned. In most cases you’re going to want to have a set of tools like some of the ones I’ve talked about with Insight Control and Insight Dynamics VSE, so that you do have more control of the sprawl and, as you add new virtual machines, you do that in a more intelligent way.
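A capacity assessment of the kind mentioned above can start as a back-of-the-envelope calculation: given measured utilization per physical server, estimate how many hosts the same workloads would need after consolidation. The figures and the 70 percent target ceiling below are assumptions for illustration, not HP's methodology.

```python
# Illustrative consolidation estimate: total CPU demand divided by a
# per-host utilization ceiling gives a rough post-virtualization host count.

import math

def hosts_needed(utilizations, target_ceiling=0.70):
    """Estimate post-consolidation host count.

    utilizations: average CPU utilization (0.0-1.0) of each existing
    server, assuming comparable hardware across the estate.
    """
    total_demand = sum(utilizations)
    return max(1, math.ceil(total_demand / target_ceiling))

# Ten servers idling at 8-15% utilization -- a common assessment finding.
measured = [0.08, 0.12, 0.10, 0.15, 0.09, 0.11, 0.08, 0.13, 0.10, 0.14]
print(hosts_needed(measured))  # consolidates onto 2 hosts
```

A real assessment would also account for memory, I/O, peak-versus-average load, and failover headroom, which is why the services engagement goes well beyond this arithmetic.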

We invest very heavily in certifying across the virtualization vendors, across the broadest range of server and storage platforms. What we’re finding is that we can’t say that one particular server or one particular storage is right for everybody. We’ve got to meet the broadest needs for the customers.

...Virtualization is certainly not the only answer or not the only component of data center transformation, but it is a substantial one. And, it's one that companies of almost any size can take advantage of, particularly now, where some of the requirements for extensive shared storage have decreased. It's really something that almost anybody who's got even one or two servers can take advantage of, all the way to the largest enterprises.
Listen to the podcast. Find it on iTunes/iPod and View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at

Open Group points standards at service-orientation architecture needs and approaches

This guest BriefingsDirect post comes courtesy of Heather Kreger, IBM’s lead architect for SOA Standards in the IBM Software Group.

By Heather Kreger

Last week The Open Group announced two new standards for architects; actually, more appropriately, for service architects, SOA architects, and cloud architects. These standards are intended to help organizations more easily deploy service-based solutions rapidly and reliably, especially in multi-vendor environments.

These standards are the first products in a family of standards being developed for architects by The Open Group’s SOA Work Group. Other standards currently in development for SOA include the SOA Ontology, SOA Reference Architecture, and Service Oriented Infrastructure.

Architecture standards are especially valuable for creating a common, shared language and understanding between service integrators, vendors, and customers of all sizes. They provide a common foundation of understanding for the industry. Considering the who’s who of integrators involved in the development of these two new standards -- Capgemini, CGI, HP/EDS, and IBM -- we can expect the standards to reflect validated and mature best practices and industry experience.

[See a post by Sandy Kemsley from Heather's presentation at The Open Group's recent architecture conference in Toronto. Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

First, the Open Group Service Integration Maturity Model (OSIMM) provides a method to measure service adoption and integration and create roadmaps for incremental transformation to SOA to meet business objectives.

OSIMM provides a context to identify the business benefits of each step along the roadmap and progression toward the appropriate level of maturity for your business goals. The model consists of seven dimensions of consideration within an organization: Business View, Governance and Organization, Methods, Applications, Architecture, Information, and Infrastructure and Management.

Each of these dimensions can, in turn, be assessed on a maturity level scale from one to seven, including: 1: Silo (data integration); 2: Integrated (application integration); 3: Componentized (functional integration); 4: Simple services (process integration); 5: Composite services (supply-chain integration); 6: Virtualized services (virtual infrastructure); and 7: Dynamically reconfigurable services (eco-system integration).

OSIMM resonates with organizations because they can see at a glance what the entire scope of service use and SOA is, and they can find themselves somewhere on that continuum. The model also makes it easy to see where they want to be on the continuum to meet objectives and to check on progress toward those goals. It’s important to note that with this maturity model, more is not necessarily better; few companies will need to be at level 7 maturity, and most will satisfy their business objectives at levels 4 and 5.
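The dimensions-by-levels structure above lends itself to a simple assessment sketch: score each of the seven dimensions on the 1-7 scale, then compare against a target profile to derive a roadmap. The dimension and level names come from the standard as described here; the scoring function and the particular numbers are illustrative assumptions, not part of OSIMM itself.

```python
# Hypothetical OSIMM-style self-assessment: current maturity per dimension
# versus a target profile, yielding the dimensions that need work.

OSIMM_LEVELS = {
    1: "Silo", 2: "Integrated", 3: "Componentized", 4: "Simple services",
    5: "Composite services", 6: "Virtualized services",
    7: "Dynamically reconfigurable services",
}

current = {
    "Business View": 3, "Governance and Organization": 2, "Methods": 3,
    "Applications": 4, "Architecture": 3, "Information": 2,
    "Infrastructure and Management": 4,
}

# As noted above, levels 4-5 satisfy most business objectives.
target = {dim: 4 for dim in current}

def roadmap(current, target):
    """List (dimension, have, want) for dimensions below target maturity."""
    return [(dim, current[dim], target[dim])
            for dim in current if current[dim] < target[dim]]

for dim, have, want in roadmap(current, target):
    print(f"{dim}: {OSIMM_LEVELS[have]} -> {OSIMM_LEVELS[want]}")
```

The point of the exercise matches the article's caution: the roadmap drives toward the level the business needs, not toward level 7 for its own sake.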

Second standard

The second standard, the SOA Governance Framework, provides a methodology to help ensure that business objectives are in line with the SOA solutions and IT investment. The framework defines a SOA Governance Reference Model, which includes concepts that architects should understand in relation to governance, such as principles, guidelines, organizations, governed service and SOA processes, governing processes for compliance and dispensation, and supporting technologies.

For each of these concepts, the authors have provided starting points based on best practices. The framework defines the SOA Governance Vitality Method, which is an iterative cycle through the phases of Plan, Define, Implement and Monitor for the governance regimen. The monitor phase uses policies, checkpoints and triggers to ensure the governing processes are in place and being followed. These triggers can also be used to evaluate and adjust the governance regimen itself.
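The iterative Plan-Define-Implement-Monitor cycle can be modeled as a simple loop. This is only a sketch of the cycle's shape as described above; the function, the trigger callback, and its behavior are hypothetical and not part of the Open Group specification.

```python
# Minimal sketch of an iterative four-phase governance loop, with a
# Monitor-phase trigger that can adjust the regimen (modeled as a callback).

from itertools import cycle

PHASES = ["Plan", "Define", "Implement", "Monitor"]

def run_governance_cycle(iterations, on_trigger=None):
    """Walk the four phases repeatedly; invoke the trigger callback at
    each Monitor phase, e.g., to tighten a compliance checkpoint."""
    history = []
    phases = cycle(PHASES)
    for _ in range(iterations):
        phase = next(phases)
        history.append(phase)
        if phase == "Monitor" and on_trigger:
            on_trigger(history)
    return history

print(run_governance_cycle(8))  # two full passes through the four phases
```

The callback hook mirrors the framework's idea that Monitor-phase triggers feed back into the governance regimen itself, not just into the governed processes.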

Actually, a great deal of the SOA Governance Framework applies to the governance of architecture in general, but it is explicitly defined to provide guidance for governing service portfolios and SOA solution portfolios. Interestingly enough, the governance of service portfolios applies equally to business solutions that use the cloud.

These two standards represent a major step forward in creating and simplifying the standards to build SOA. This is increasingly important as more organizations have to justify incremental investment in services. OSIMM helps you figure out where you want to go, and SOA governance ensures that you meet your objectives on the journey.

Heather Kreger is IBM’s lead architect for SOA Standards in the IBM Software Group, with 15 years of standards experience. She has led the development of standards for Web services, Management and Java in numerous industry standards groups including W3C, OASIS, DMTF, and The Open Group. Heather is the author of numerous articles and specifications, as well as the book “Java and JMX, Building Manageable Systems,” and most recently was co-editor of “Navigating the SOA Open Standards Landscape Around Architecture.”