Wednesday, February 3, 2010

CERN’s evolution toward cloud computing could portend next revolution in extreme IT productivity

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: Platform Computing.

What are the likely directions for cloud computing? Based on the exploration of expected cloud benefits at a cutting edge global IT organization, the future looks extremely productive.

In this podcast we focus on the thinking on how cloud computing -- both the private and public varieties -- might be used at CERN, the European Organization for Nuclear Research in Geneva.

CERN has long been an influential bellwether on how extreme IT problems can be solved. Indeed, the World Wide Web owes a lot of its usefulness to early work done at CERN. Now the focus is on cloud computing. How real is it, and how might an organization like CERN approach cloud?

In many ways CERN is quite possibly the New York of cloud computing. If cloud can make it there, it can probably make it anywhere. That's because CERN deals with fantastically large data sets, massive throughput requirements, a global workforce, finite budgets, and an emphasis on standards and openness.

So please join us, as we track the evolution of high-performance computing (HPC) from clusters to grid to cloud models through the eyes of CERN, and with analysis and perspective from IDC, as well as technical thought leadership from Platform Computing.

Join me in welcoming our panel today: Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN; Steve Conway, Vice President in the High Performance Computing Group at IDC; and Randy Clark, Chief Marketing Officer at Platform Computing. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Conway: Private cloud computing is already here, and quite a few companies are exploring it. We already have some early adopters. CERN is one of them. Public clouds are coming. We see a lot of activity there, but it's a little bit further out on the horizon than private or enterprise cloud computing.

Just to give you an example, we at IDC just did a piece of research for one of the major oil and gas companies, and they're actively looking at moving part of their workload out to cloud computing in the next 6-12 months. So, this is really coming up quickly.

CERN is clearly serious about it in their environment. As I said, we're also starting to see activity pick up with cloud computing in the private sector with adoption starting somewhere between six months from now and, for some, more like 12-24 months out.

Clark: At Platform Computing we have formally interviewed over 200 customers out of our installed base of 2,000. A significant portion -- I wouldn’t put an exact number on that, but it's higher than we initially anticipated -- are looking at private-cloud computing and considering how they can leverage external resources such as Amazon, Rackspace and others. So, it's easily one-third and possibly more [evaluating cloud].

Cass: At CERN, we're a laboratory that exists to enable physicists -- initially Europe’s, and now the world’s -- to study fundamental questions. Where does mass come from? Why don’t we see anti-matter in large quantities? What's the missing mass in the universe? These are really fundamental questions about where we are and what the universe is.

We do that by operating an accelerator, the Large Hadron Collider, which collides protons thousands of times a second. These collisions take place in certain areas around the accelerator, where huge detectors analyze the collisions and take something like a digital photograph of the collision to understand what's happening. These detectors generate huge amounts of data, which have to be stored and processed at CERN and the collaborating institutes around the world.

We have something like 100,000 processors around the world, 50 petabytes of disk, and over 60 petabytes of tape. The tape is in just a small number of the centers, not all of the hundred centers that we have. We call it "computing at the terra-scale," that's terra with two R's. We’ve developed a worldwide computing grid to coordinate all the resources that we have with the jobs of the many physicists that are working on these detectors.

If you look at the past, in the 1990s, we had people collaborating, but there was no central management. Everybody was based at different institutes, and people had to submit the workloads, the analysis, or the Monte Carlo simulations that the experiments needed.

We realized in 2000-2001 that this wasn’t going to work and also that the scale of resources that we needed was so vast that it couldn’t all be installed at CERN. It had to be shared between CERN, a small number of very reliable centers we call the Tier One centers and then 100 or so Tier Two centers at the universities. We were developing this thinking around the same time as the grid model was becoming popular. So, this is what we’ve done.

Grid sets stage for seeking greater efficiencies

[Our grid] pushes the envelope in terms of scale to make sure that it works for the users. We connect the sites, and we run tens of thousands of jobs a day across them. Gradually, we’ve run through a number of exercises to distribute the data at gigabytes a second.

We've progressively deployed grid technology, not developed it. We've looked at things that are going on elsewhere and made them work in our environment.

The grid solves the problem in which we have data distributed around the world, and it will send jobs to the data. But, there are two issues around that. One is that if the grid sends my job to site A, it does so because it thinks that a batch slot will become available at site A first. But, maybe a slot actually becomes available at site B first, while my job is stuck at site A. Somebody else who comes along later actually gets to run their job first.

Today, the experiment team submits a skeleton job to all of the sites in order to detect which site becomes available first. Then, they pull down my job to that site. You have lots of schedulers involved in this -- in the experiment, the grid, and the site -- and we're looking at simplifying that.
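
The late-binding, pilot-job pattern Cass describes can be sketched roughly as follows. The site names, job names, and the order in which batch slots free up are all hypothetical, purely for illustration:

```python
# Minimal sketch of the late-binding "pilot job" pattern: skeleton jobs go
# to every site, and whichever pilot starts first pulls the next real job.
from collections import deque

# Central queue of real analysis jobs (hypothetical names).
central_queue = deque(["analysis-job-1", "analysis-job-2", "analysis-job-3"])

# Order in which the pilots' batch slots actually became free (hypothetical).
pilot_start_order = ["site-B", "site-A", "site-C"]

schedule = []
for site in pilot_start_order:
    if not central_queue:
        break  # surplus pilots simply exit without work
    # Late binding: the job is matched to a site only when a slot is
    # actually running there, not when the job is first submitted.
    schedule.append((site, central_queue.popleft()))

print(schedule)
# [('site-B', 'analysis-job-1'), ('site-A', 'analysis-job-2'), ('site-C', 'analysis-job-3')]
```

The point of the pattern is that no scheduler has to predict which site will free up first; the binding happens after the fact, which is exactly what avoids the "my job is stuck at site A" problem described above.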

We’re now looking at virtualizing the batch workers and dynamically reconfiguring them to meet the changing workload. This is essentially what Amazon does with EC2. When they don’t need the resources, they reconfigure them and sell the cycles to other people. This is how we want to work in virtualization and cloud with the grid, which knows where the data is.
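
One way to picture that dynamic reconfiguration is a scheduler that periodically reassigns a fixed pool of virtualized batch workers in proportion to each workload's backlog. This is only a sketch under stated assumptions -- the workload names and the proportional policy are illustrative, not CERN's actual algorithm:

```python
# Hedged sketch: reallocate a fixed pool of virtual batch workers so that
# each workload's share of workers tracks its share of pending jobs.

def reallocate(workers_total, pending_jobs):
    """Split workers_total across workloads, proportionally to backlog."""
    backlog = sum(pending_jobs.values())
    if backlog == 0:
        return {w: 0 for w in pending_jobs}
    alloc = {w: (workers_total * n) // backlog for w, n in pending_jobs.items()}
    # Workers lost to integer division go to the largest backlog.
    leftover = workers_total - sum(alloc.values())
    busiest = max(pending_jobs, key=pending_jobs.get)
    alloc[busiest] += leftover
    return alloc

print(reallocate(100, {"reconstruction": 600, "simulation": 300, "analysis": 100}))
# {'reconstruction': 60, 'simulation': 30, 'analysis': 10}
```

Run on each scheduling cycle, a policy like this mimics what the text attributes to EC2: when one workload's demand drops, its workers are reimaged and their cycles handed to whoever needs them.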

... We’re definitely concentrating for the moment on how we exploit effective resources here. The wider benefits we'll have to discuss with our community.

Conway: CERN's scientists have earned multiple Nobel prizes over the years for their work in particle physics. CERN is also where Tim Berners-Lee and his colleagues invented the World Wide Web in 1989.

More generally, CERN is a recognized world leader in technology innovation. What’s been driving this, as Tony said, are the massive volumes of data that CERN generates along with the need to make the data available to scientists, not only across Europe, but across the world.

For example, CERN has two major particle detectors. They're called CMS and ATLAS. ATLAS alone generates a petabyte of data per second when it’s running. Not all that data needs to be distributed, but it gives you an idea of the scale of the challenge that CERN is working with.

In the case of CERN’s and Platform’s collaboration, the idea is not just to distribute the data but also the applications and the capability to run the scientific problem.

Showing a clear path to cloud

CERN is definitely a leader there, and cloud computing is really confined today to early adopters like CERN. Right now, cloud computing services constitute a market of about $16 billion.

That’s just about four percent of mainstream IT spending. By 2012, which is not so far away, we project that spending for cloud computing is going to grow nearly threefold to about $42 billion. That would make it about 9 percent of IT spending. So, we predict it’s going to move along pretty quickly.
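
As a back-of-envelope check on the figures quoted here (the percentages are rounded, so the implied totals are approximate):

```python
# Quick arithmetic check of the IDC figures cited above.
cloud_2009 = 16.0   # $B, current cloud services market
cloud_2012 = 42.0   # $B, projected

growth = cloud_2012 / cloud_2009
print(round(growth, 2))  # 2.62 -- "nearly threefold"

# Implied total IT spending, working backward from the share figures:
total_2009 = cloud_2009 / 0.04  # "about four percent" -> roughly $400B
total_2012 = cloud_2012 / 0.09  # "about 9 percent"   -> roughly $467B
print(round(total_2009), round(total_2012))
```

The two implied totals differ because both percentages are rough, but the numbers hang together: a ~2.6x market in a modestly growing IT budget does move cloud from ~4 to ~9 percent of spending.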

... [Being able to manage workloads in a dynamic environment] is the single biggest challenge we see for not only cloud computing, but it has affected the whole idea of managing these increasingly complex environments -- first clusters, then grids, and now clouds. Software has been at the center of that.

That’s one of the reasons we're here today with Platform and CERN, because that’s been Platform’s business from the beginning, creating software to manage clusters, then grids, and now clouds, first for very demanding, HPC sites like CERN and, more recently, also for enterprise clients.

Clark: Historically, clusters and grids have been relatively static, and the workloads have been managed across those. Now, with cloud, we have the ability to have a dynamic set of resources.

The trick is to marry and manage the workloads and the resources in conjunction with each other. Last year, we announced our cloud products -- Platform LSF and Platform ISF Adaptive Cluster -- to address that challenge and to help this evolution.

[Cloud adoption] is being driven by the top of the organization. Tony and Steve laid it out well. They look at the public/private cloud economically, and say, "Architecturally, what does this mean for our business?" Without any particular application in mind they're asking how to evolve to this new model. So, we're seeing it very horizontally in both enterprise and HPC applications.

What Platform sees is the interaction of distributed computing and new technologies like virtualization requiring management. What I mean by that is the ability, in a large farm or shared environment, to share resources and then make those resources dynamic: the ability to add virtualization on the resource side and then, on the server side, to make it Internet-accessible, have a service catalog, and move from providing IT support to providing IT as a truly competitive service.

The state of the art is that you can get the best of Amazon -- ease of use, cost, accessibility -- combined with the configuration, scale, and dependability of the enterprise grid environment.

There isn't one particular technology or implementation that I would point to, to say "That is state of the art," but if you look across the installations we see in our installed base, you can see best practices in different dimensions with each of those customers.

Conway: People who have already stepped through the earlier stages of this evolution, who have gone from clusters to grid computing, are now for the most part contemplating the next move to cloud computing. It's an evolutionary move. It could have some revolutionary implications, but, from a technological standpoint, sometimes evolutionary is much safer and better than revolutionary.

BriefingsDirect analysts discuss ramifications of Google-China dust-up over corporate cyber attacks

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download the transcript. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at

The latest BriefingsDirect Analyst Insights Edition, Volume 50, focuses on the fallout from Google’s threat to pull out of China, following a series of sophisticated hacks and attacks on Google, as well as on a dozen more IT companies. In response to the attacks late last year, Google on Jan. 12 vowed to stop censoring Internet content for China’s web users and possibly to leave the country altogether.

This ongoing tiff between Google and the Internet control authorities in China’s Communist Party-dominated government has uncorked a Pandora’s box of security, free speech, and corporate espionage issues. There are human rights and free speech issues, questions about China’s actual role, trade and fairness issues, and the matter of Google’s policy of initially enabling Internet censorship and now apparently backtracking.

But there are also larger issues around security and Internet governance in general. Those are the issues we’ll be focusing on today. So, even as the U.S. State Department and others in the U.S. federal government seek answers on China’s purported role or complicity in the attacks, the repercussions on cloud computing and enterprise security are profound and may be long-term.

We’re going to look at some of the answers to what this donnybrook means for how enterprises should best protect their intellectual property from such sophisticated hackers as government, military, or quasi-governmental corporate entities -- and whether cloud services providers like Google are better than your average enterprise, or especially medium-sized business, at thwarting such risks.

We'll look at how users of cloud computing should trust or not trust providers of such mission-critical cloud services as email, calendar, word processing, document storage, databases, and applications hosting. And, we’ll look at how enterprise architecture, governance, security best practices, standards, and skills still need to adapt to meet these new requirements from insidious world-class threats.

This periodic discussion and dissection of IT infrastructure related news and events with a panel of industry analysts and guests, comes to you with the help of our charter sponsor Active Endpoints, maker of the ActiveVOS business process management system.

So, join me now in welcoming our panel for today’s discussion: Jim Kobielus, senior analyst at Forrester Research; Jason Bloomberg, managing partner at ZapThink; Jim Hietala, Vice President for Security at The Open Group; Elinor Mills, senior writer at CNET; and Michael Dortch, Director of Research at Focus. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Mills: We now have a huge first public example of a company coming out and saying not only that it has been attacked -- companies never want to admit that, and it’s all under the radar -- but also pointing fingers. Even though they're not specifically saying, "We think it’s the Chinese state," they think enough of it that they're willing to threaten to pull out of the country.

It’s huge and it’s going to have every company reevaluating what their response is going to be -- not just how they’re going to do business in other countries, but what is their response going to be to a major attack.

Bloomberg: It’s not as big of a wakeup call as it should be. You can ask yourself, "Is this an attack by some small cadre of renegade hackers, or is this an attack by the government of the People’s Republic of China?" That’s an open question at this point.

Who is the victim? Is it Google, a corporation, or the United States? Is it the western world that is the victim here? Is this a harbinger of the way that international wars are going to be fought down the road?

We’ve all been worried about cyber warfare coming, but we maybe don’t recognize it when we see it as a new battlefield. It's the same as terrorism. It’s not necessarily clear who the participants are.

When you place the enterprise into this context, it’s not just that you have a business subject to the particular laws of a particular government. You have the supernational, where large corporations have to play in multiple jurisdictions. That’s already a governance challenge for these large enterprises.

Now, we have the introduction of cyber warfare, where we have concerted professional attacks from unknown parties attacking unknown targets and where it’s not clear who the players are. Anybody, whether it’s a private company, a public company, or a government organization is potentially involved.

That basically raises the bar for security throughout the entire organization. We’ve seen this already, where perimeter-based security has fallen by the wayside as being insufficient. We already have this awareness that every single system on our network has to look out for itself and, even then, has levels of vulnerability. This just takes it to the national level.

Kobielus: I don’t see anything radically or fundamentally new going on here. This is just a big, powerful, and growing world power, China, and a big and growing world power on the tech front, Google, colliding. ... There has always been corporate espionage, and there has always been vandalism perpetrated by companies against each other through subterfuge, and also by companies or fronts operating as the agent of an unseen foreign power. ... This is international realpolitik as usual, but in a different technological realm.

Hietala: In terms of the visibility it’s gotten and the kinds of companies that were attacked, it’s a little bit game-changing. From the information security community perspective, these sorts of attacks have been going on for quite a while, aimed at defense contractors, and are now aimed at commercial enterprises and providers of cloud services.

I don’t think that the attacks per se are game-changing. There’s not a lot new here. It was an attack against a browser that was a couple of revs old and had a vulnerability. The way in which the company was attacked isn’t necessarily game-changing, but the political ramifications around it, and the other things we’ve just been talking about, are what make it a little game-changing.

Dortch: This puts Google in the very interesting position of having to decide. Is it a politically neutral corporation? Is it a protector of, and an advocate of protection for, the data that its clients around the world -- not just here, and not just governments but corporations -- have entrusted to it? Or is it going to use the fact that it is a broker of all that data to throw its muscle around and take on governments like China’s in debates like this?

The implications here are bigger than even what we’ve been discussing so far, because they get at the very nature of what a corporation is in this brave new network world of ours.

Gardner: This boils down to almost two giant systems or schools of thought that are now colliding at a new point. They've collided at different points in the past on physical sovereignty, military sovereignty, and economic sovereignty. The competition is between what we might call free enterprise based systems and state sponsorship through centralized control systems.

Free enterprise won, when it came to the Cold War, but it's hard to say what's going to happen in the economic environment, where China is a little different beast. It's state-sponsored, and it's also taking advantage of free enterprise, but it's very choosy about what it allows either one of those systems to do or to dominate.

When you look at Google, it made itself into a figurehead representing what a free-enterprise approach can do. It's not state-sponsored or nationalistic. It's corporate-sponsored. So, it will be interesting to see who has the better technology, who has the better financial resources, and ultimately who has the organizational wherewithal to manifest their goals online and win out in the marketplace.

If an organized effort is better at doing this than a corporate one, well then it might dominate. But so far, we've seen that the marketplace -- with choice, and with light and transparency shed on activities -- ultimately allows free enterprise to predominate. It can do things better, faster, and cheaper, and it will ultimately win.

I think we're really on the cusp here of a new level of competition, not between countries or even alliances, but between systems: the free-enterprise system versus the state-sponsored, centralized, or controlled system. It should be very interesting.

Bloomberg: ... If anything, cloud environments reduce the level of security.

They don’t increase it for the very reason that we don’t have a way of making them sovereign in their own right. They’re always not only subject to the laws of the local jurisdiction, but they’re subject to any number of different attacks that could be coming from any different location, where now the customers aren’t aware of this sort of vulnerability.

So, “Trust, but verify,” is a good point, but how can you verify, if you’re relying on a third party to protect your data for you? It becomes much more difficult to do the verification. I'd say that organizations are going to be backing away from cloud, once they realize just how risky cloud environments are.

Mills: Microsoft’s general counsel Brad Smith recently gave a keynote at a Brookings Institution forum, where he talked about modernizing and updating the laws to adapt specifically to the cloud. That included more clearly defining privacy rights under the Electronic Communications Privacy Act, updating the Computer Fraud and Abuse Act, and setting up a framework so that differences in the regulations and practices of various countries can be worked out and reconciled.

Hietala: I don’t think there is a silver-bullet cloud provider out there with security superior enough to claim that position. All enterprises are still going to have to be at the top of their game in terms of protecting their assets, and that extends to small and medium businesses.

At some point, you could see a cloud provider stake out that part of the market to say, "We’re going to put in a superior set of controls and manage security to a higher degree than a typical small-to-medium business could," but I don’t see that out there today.

Dortch: Many small businesses outsource payroll processing, customer relationship management (CRM), and a whole bunch of things. A lot of that stuff is outsourced to cloud service providers, and companies haven’t asked enough questions yet about exactly how cloud providers are protecting data and exactly how they can reassure that nothing bad is going to happen to it.

For example, if their servers come under attack, can they demonstrate credibly how data is going to be protected? These are the types of questions that incidents like this can and should raise in the minds of decision-makers at small and mid-sized businesses, just as they're starting to raise these issues -- and have been raising them for a while -- among decision-makers at larger enterprises.

Kobielus: I think what will happen is that some cloud providers will increasingly be seen as safe havens for your data and for your applications, because (A) they have the strong security, and (B) they are hosted within, and governed by, the laws of nation states that rigorously and faithfully try to protect this information, and assure that the information can then be removed -- transferred out of that country fluidly by the owners, without loss.

How about governments in general, maybe it's the United Nations who steps in? Who is the ultimate governor of what happens in cyber space?

In other words, it's like the Cayman Islands of the cloud -- that offshore banking safe haven you can turn to for all this. Clearly, it's not going to be China.

... In terms of who has responsibility and how will governance best practices be spread uniformly across the world in such areas of IT protection, it's going to be some combination of multilateral, bilateral, and unilateral action. For multilateral, the UN points to that, but there are also regional organizations. In Southeast Asia there is ASEAN, and in the Atlantic there is NATO, and so forth.

Bloomberg: Who decides what is enough? We have these opposing forces. One is that information should be free, and the Internet should be available to everybody. That basically pushes for removing barriers to information flow.

Then you have the security concerns that are driving putting up barriers to information flow, and there is always going to be conflict between those two forces. As increasingly sophisticated attacks develop, that pushes the public consensus toward increasing security.

That will impact our ability to have freedom, and that's going to continue to be a battle that I don’t see anybody winning. It's really just going to be an ongoing battle, as technology improves and as the bad guys' attacks improve -- between security and freedom, and between the good guys and the bad guys, as it were. That's never going to change.

Hietala: Large enterprises are going to have to be responsible for the security of their information. I think there are a lot of takeaways for enterprises from this attack. If you're talking about specific individuals, it’s almost hopeless, because your average individual consumer doesn’t have the level of knowledge to go out and find the right solutions to protect themselves today.

So, I'll focus on the large enterprises. They have to do a good job of asset inventory, know where, within their identity infrastructure, they're vulnerable to this specific attack, and then be pretty agile about implementing countermeasures to prevent it. They have to have patch management that's adequate to the task of getting patches out quickly.

They need to do things like looking at the traffic leaving their network to see if people are already in their infrastructure. These Trojans leave traces of themselves, when they ship information out of an organization. When people really understand what happened in this attack, they can take something away, go back, look at what they are doing from a security standpoint, and tighten things up.
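
The egress check Hietala describes can be sketched as a simple scan of outbound-connection logs for repeated contacts with unfamiliar hosts. The log format, whitelist, and beacon threshold below are illustrative assumptions, not a real detection rule:

```python
# Hedged sketch: flag hosts that repeatedly "phone home" to destinations
# outside a known-good set, a trace pattern typical of beaconing Trojans.
from collections import Counter

known_good = {"mail.example.com", "cdn.example.com"}  # assumed whitelist

# (source host, destination) pairs from an outbound-connection log.
outbound_log = [
    ("10.0.0.5", "mail.example.com"),
    ("10.0.0.7", "203.0.113.9"),   # repeated beacons to one unknown host
    ("10.0.0.7", "203.0.113.9"),
    ("10.0.0.7", "203.0.113.9"),
    ("10.0.0.8", "cdn.example.com"),
]

beacons = Counter(dst for _, dst in outbound_log if dst not in known_good)
suspects = [dst for dst, n in beacons.items() if n >= 3]  # assumed threshold
print(suspects)  # ['203.0.113.9']
```

A real deployment would of course work from netflow or proxy logs and a far richer reputation feed, but the underlying idea is the same: compromised machines reveal themselves in the traffic leaving the network.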

If you're talking about individuals putting things in the cloud, that’s a different discussion. It doesn’t seem feasible to me to get them to the point where they can secure their information today.

Kobielus: I don't think Google is going to leave China. I think they're going to stay in China and somehow try to work it out with the PRC. I don't know where that's going, but fundamentally Google is a business with a "don't be evil" philosophy. They're going to continue to qualify evil down to those things that don't actually align with their business interest.

In other words, they're going to stay. There's going to be a lot of wariness now to entrust Google's China operation with a whole lot of your IT -- "you" as a corporation -- and your data. There will be that wariness.

Preferred platforms

Other cloud providers will be setting up shop or hosting in other nations that are more respectful of IP -- nations that may not be launching corporate or governmental espionage at US-headquartered properties in China. Those nations will become the preferred supernational cloud-hosting platforms for the world.

I can't really say who those nations might be, but you know what, Switzerland always sort of stands out. They're still neutral after all these years. You've got to hand that to them. I trust them.

Bloomberg: In the short-term, the noise is going to die down, and it's going to go back to business as usual. Security is going to need to improve, but so will the hacks from the bad guys. It's going to continue until there is the next big attack. And the question is, "What's it going to be, and how big is it going to be?"

We're still waiting for that game changer. I don't think this is a game changer; it's just a skirmish. But, if a hacker is able to bring down the Internet, for example, by targeting the DNS infrastructure to the point that the entire thing collapses, that’s something that could wake people up to say, "We really have to get a handle on this and come up with a better approach."

Hietala: From our perspective [at The Open Group], we're starting to see more awareness at higher levels in governments that the threats and issues here are real. They’re here today. They seem to be state sponsored, and they're something that needs to be paid attention to.

Secretary of State Clinton recently gave a speech where she talked specifically about this attack, but also talked about the need for nations to band together to address the problem. I don't know what that looks like at this point, but I think that the fact that people at that level are talking about the problem is good for the industry and good for the outlook for solutions that are important in the future.

Mills: I think Google is going to get out of China and try and lead some kind of U.S. corporate effort or be a role model to try to do business in a more ethical way, without having to compromise and censor.

There will be a divergence that you'll see. China and other countries may be pushed more towards limiting and creating their own sort of channel that's government filtered. I think the battle is just going to get bigger. We're going to have more fights on this front, but I think that Google may lead the way.


Tuesday, February 2, 2010

The Open Group's Cloud Work Group advances understanding of cloud-use benefits for enterprises

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: The Open Group. Follow the conference on Twitter: #OGSEA.

BriefingsDirect now presents a sponsored podcast discussion on the ongoing activities of The Open Group’s Cloud Computing Work Group. We'll meet and talk to the new co-chairmen of the Cloud Work Group, learn about their roles and expectations, and get a first-hand account of the group’s 2010 plans.

Join us as we examine the evolution of cloud, how businesses are grappling with it, and how they can learn to best exploit cloud-computing benefits while fully understanding and controlling the risks. These topics and more will also be under discussion at The Open Group's Architecture Practitioners and Security Practitioners conferences this week in Seattle.

In many ways, cloud computing marks an inflection point for many different elements of IT, and forms a convergence of other infrastructure categories that weren’t necessarily working in concert in the past. That makes cloud interesting, relevant, and potentially dramatic in its impact. What has been less clear is how businesses stand to benefit. What are the likely paybacks, and how can enterprises prepare for the best outcomes?

We're here with an executive from The Open Group, as well as the new co-chairmen of the Cloud Work Group, to look at the business implications of cloud computing and how to get a better handle on the whole subject.

Please join David Lounsbury, Vice President for Collaboration Services at The Open Group; Karl Kay, IT Architecture Executive with Bank of America and co-chairman of the Cloud Work Group; and Robert Orshaw, IBM Cloud Computing Executive and co-chair of the Cloud Work Group. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Lounsbury: One of the things that everybody has seen in cloud is that there has been a lot of uptake by small to medium businesses, which benefit from the low capital expenditure and scalability of cloud computing, and also by individuals who use software as a service (SaaS). We've all seen Google Docs and things like that. That’s fueled a lot of the discussion of cloud computing up to now, and it's a very healthy part of what's going on there.

But, as we get into larger enterprises, there's a whole different set of questions that have to be asked about return on investment (ROI) and how you merge things with the existing IT infrastructure. Is it going to meet the security needs and privacy needs and regulatory needs of my corporation? So, it's an expanded set of questions that might not be asked by a smaller set of companies. That's an area where The Open Group is trying to focus some of its activities.

There is a whole different scale that has to occur when you go into an enterprise, where you have got to think of all the users in the enterprise. What does it take to fund it? What does it take to secure it, protect the corporate assets and things like that, and integrate it, because you want services to be widely available?

Orshaw: A few years ago, there was a tremendous amount of hype, and the dynamics, flexibility, and pricing structures weren’t there. It's an exciting time now, because from a flexibility, dynamics, and pricing standpoint, we're there. That's true in both the private-cloud and public-cloud sectors, and we'll probably get into more detail about the offerings around that.

A tremendous amount has happened over the past few years to improve the market adoption and overall usability of both public and private clouds.

In a former life, I was CIO of a large industrial manufacturing company with 49 separate business units. Cloud can be an issue for CIOs in the beginning. For example, at that manufacturing company, for a business unit to provision new development, test, or production environments for implementing new applications and systems, it had to go through an approval process, which could take a significant amount of time.

Once approved, we would have centralized data centers and outsourced data centers. We would have to go through and see if there was existing capacity. If there wasn’t, we would then go ahead and procure that and install it. So, we're talking weeks, and perhaps even a few months, to provision and get a business unit up and running for their various projects.

These autonomous business units, which weren’t very happy with that internal service to begin with, are now finding it very easy to go out with a credit card or a local purchase order to Amazon, IBM, and others and get these environments provisioned in minutes.

This is creating a headache for a lot of CIOs, where there is a proliferation of virtual cloud environments and platforms being used by their business units, and they don’t even know about it. They don’t have control over it. They don’t even know how much they're spending. So, the cloud group can have a significant effect on this, helping improve that environment.
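The visibility gap just described is, at bottom, an aggregation problem. As a hypothetical sketch (the line-item fields and figures below are invented, not any real provider's billing format), a CIO's team could at least total whatever spend reports the business units do surrender:

```python
# Hypothetical sketch: consolidating shadow-cloud spend reports that
# individual business units submit, so a CIO can see total usage.
# Field names and amounts are illustrative only.
from collections import defaultdict

def summarize_cloud_spend(line_items):
    """Group billing line items by (business unit, provider)."""
    totals = defaultdict(float)
    for item in line_items:
        totals[(item["business_unit"], item["provider"])] += item["cost_usd"]
    return dict(totals)

items = [
    {"business_unit": "Manufacturing", "provider": "Amazon", "cost_usd": 1200.0},
    {"business_unit": "Manufacturing", "provider": "IBM", "cost_usd": 400.0},
    {"business_unit": "Logistics", "provider": "Amazon", "cost_usd": 250.0},
]
print(summarize_cloud_spend(items))
```

Even a rough roll-up like this surfaces how much is being spent, and with whom, before any formal governance is in place.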

Certainly the leading items like cost savings and time to market are two of the big motivators that we look to for cloud. In a lot of cases, our businesses are driving IT to adopt cloud as opposed to the opposite. It's really a matter of how we blend in the cloud environment with all of our security and regulatory requirements and how we make it fit within the enterprise suite of platform offerings.

The work groups are really focused on trying to deliver some short-term value. In the business use cases, they're really trying to define a clear set of business cases and financial models to make it easier to understand how to evaluate cloud with certain scenarios.

We're seeing a skill-set change on the technical side, in that, if you look at the adoption of cloud, you shift from being able to directly control your environments and make changes from a technical perspective, to working with a contractual service level agreement (SLA) type of model. So it's definitely a change for a lot of the engineers and architects working on the technical side of the cloud.

The Cloud Architecture Group is looking to deliver a reference architecture in 2010. One of the things we've discovered is that there are a lot of similarities between the reference architecture that we believe we need for cloud and what already has been built in the SOA reference architectures. I think we'll see a lot of alignment there. There are probably some other elements that will be added, but there's a lot of synergy between the work that’s already going on in SOA and SOI and the work that we are doing in cloud.

Number of activities

Lounsbury: There are a number of activities inside The Open Group. Enterprise architecture is a very large one, but also real-time and embedded systems for control systems and things of that nature. We've got a very active security program, and also, of course, we've got some more emerging technologically focused areas like service oriented architecture (SOA) and cloud computing.

We have a global organization with a large number of industrial members. As you've seen, from our cloud group, we always try to make sure that this is a perspective that’s balanced between the supply side and the buy side. We're not just saying what a vendor thinks is the greatest new technology, but we also bring in the viewpoint of the consumers of the technology, like a CIO, or as Karl represents on the Cloud Group, an architect on the design side. We make sure that we're balancing the interests.

Reaching back to our Seattle conference about a year ago, we've staged a series of presentations on cloud computing, and we've reached out to other organizations to see if there is interest in working together on cloud activities.

We've gotten about 500 participants virtually, and that represents about 85-90 companies.

The members decided in mid-2009 to form a work group around cloud computing. The work group is a way that we can bring together all aspects of what's going on in The Open Group, because cloud computing touches a lot of areas: security, architecture, technology, and all those things. Also, as part of that we've reached out to other communities to open a nonmember aspect of the Cloud Work Group as well.

Orshaw: At the end of this, we'll have a complete model for both public and private cloud. It's an exciting endeavor by the team, and I'm excited to see the outcome. We'll have short-term milestones, where we'll produce, document, and publish results every two months or so. We hope, towards the end of the year, to have all of these wrapped up into these global models that I described.
Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: The Open Group. Follow the conference on Twitter: #OGSEA.

Security, simplicity and control ease make desktop virtualization ready for enterprise uptake

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

The growing interest and value in PC desktop virtualization strategies and approaches has its roots in both technology and economics. Recently, a lot has happened technically that has matured the performance and economic benefits of desktop virtualization and the use of thin-client devices.

At the same time as this functional maturity improved, we are approaching an inflection point in a market that is accepting of new clients and new client approaches like desktop virtualization.

Indeed, the latest desktop virtualization model empowers enterprises with lower total costs, greater management of software, tighter security, and the ability to exploit low-cost, low-energy thin client devices. It's an offer that more enterprises are going to find hard to refuse.

In desktop virtualization, the workhorse is the server, and the client assists. This allows for easier management, support, upgrades, provisioning, and control of data and applications. Users can also take their unique desktop experience to any supported device, connect, and pick up where they left off. And, there are now new offline benefits too.

Here to help us learn more about the role and outlook for desktop virtualization, we're joined by Jeff Groudan, vice president of Thin Computing Solutions at HP. The BriefingsDirect interview is conducted by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Groudan: There certainly are some things in the market driving a potential inflection point [for client virtualization]. The market conditions coming out of the recession are prompting a lot of customers to take a fresh look at deployments they may have delayed or specific IT projects they had put on hold.

Just to put it into context, there was recently some data from Gartner. They feel like there are well over 600 million desktop PCs in offices today. Their belief is that over the next five years, upwards of 15 percent of those could be replaced by thin clients. So that's quite a number of redeployments and quite an inflection point for client virtualization.

In addition, there has been an ongoing desire to increase security and a lot of new compliance requirements that the customers have to address. In addition, in general, as they are looking for ways to save on costs, they are consistently and constantly looking for different ways to more efficiently manage their distributed PC environments. All of these things are driving the high level of interest in virtualizing PCs.

One of the key benefits of client virtualization is the ability to keep all the data behind the firewall in the data center and deploy thin clients to the edge of the network. Those thin clients, by design, don't have any local data.

You're also seeing better performance on the hardware side and the infrastructure side. It's really also helping bring the cost per seat of a client virtualization deployment down into ranges that are a lot more interesting for large deployments. Last, and near and dear to my heart, you're seeing more powerful, yet cost-effective, thin clients that you can put on the desk and that really ensure those end-users get the experience that you want them to get.

Not an IT panacea

Our general coaching to customers is that client virtualization is not necessarily for everyone, for every user group, or every application set. But it is certainly a fit for environments that need to be more manageable or more flexible.

When you think about the cost savings of client virtualization, the savings usually come from the long-term costs of ownership rather than the initial acquisition costs.

You need higher degrees of automation in order to manage a high number of distributed PCs with the benefits from centralized control, reduced labor costs, and the ability to manage remote or hard to get at locations -- things like branches, where you don't have a local IT. Those are great targets for early client virtualization deployments.

All of a sudden, the data-center guys need to be thinking about the end-user. The end-user guys need to be thinking about the data center. Roles and responsibilities need to be hammered out. How do you charge the capital expense versus operational expense? What gets budgeted where? My advice is: as you're thinking about the technical architecture and all of the savings end-to-end, you need to also be thinking about the internal business processes.

We look at this market in two ways, in the context of client virtualization and in the broader context of thin computing. Just zeroing in on client virtualization, we call it Client Virtualization at HP. It's desktop virtualization. It's the same animal.

We look at it as a specific set of technologies and architectures that dis-aggregate the elements of a PC, which allows customers to more easily manage and secure their environment. What we're really doing is taking advantage of a lot of the new software capabilities that matured on the server side, from a server virtualization and utilization perspective. We're now able to deploy some of those technologies, hypervisors, and protocols on the client side.

The first is that you don't want to have customers having to figure out how to architect the stuff on their own. If you think about PCs 20-25 years ago, customers didn't know how to architect a distributed PC environment. In 25 years, everybody has gotten good at it. We're still at the early stages on client virtualization.

Our specific objective is figuring out how to simplify virtualization, so that customers get past the technology, and really start to deliver the full benefit of virtualization, without all the complexity.

So our focus is to deliver more complete, integrated solutions, end to end from the desktop to the data center, lay it all out, and provide reference designs so customers can very comfortably understand how to build out a deployment. They certainly may want to customize it. We want to get them 80-90 percent there just by telling them what we have learned.

Wide applicability across industries

There are opportunities for just about every industry. We've seen certain verticals on the cutting edge of this. Financial services, healthcare, education, and public sector are a few examples of industries that have really embraced this quickly. They have two or three themes in common. One is an acute security need. If you think about healthcare, financial services, and government, they all have very acute needs to secure their environments. That led them to client virtualization relatively quickly.

We certainly have some very exciting launches coming up in the next couple of months where we're really focused on total cost per seat. How do we let people deploy these kinds of solutions and continue to get further economic benefits, delivering better, tighter integration across the desktop to the data center?

Deployment of these solutions keeps getting easier, and then there are the ease-of-use and manageability tools. They allow the IT guys to roll out large client-virtualization deployments with as little touch and as little complexity as we can possibly make it. We're trying to automate these kinds of solutions. We're very excited about some of the things we'll be delivering to our customers in the next couple of months.
Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Monday, February 1, 2010

Technology, process and people must combine smoothly to achieve strategic virtualization benefits

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

For more information on virtualization and how it provides a foundation for private clouds, plan to attend the HP Cloud Virtual Conference in March. Register now for this event:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

The latest BriefingsDirect podcast discussion delves into proper planning and implementation of data-center virtualization to gain strategic-level advantage in enterprises.

Because companies generally begin their use of server virtualization at a tactical level, there is often a complex hurdle in expanding the use of virtualization. Analysts predict that virtualization will support upwards of half of server workloads in just a few years. Yet, we are already seeing gaps between enterprises’ expectations and their ability to aggressively adopt virtualization without stumbling in some way.

These gaps can involve issues around people, process, and technology, and often all three in some combination. Process refinement, proper methodological involvement, and swift problem management often provide proven risk reduction and surefire ways of avoiding pitfalls as virtualization use scales up.

The goal becomes one of a lifecycle orchestration and governed management approach to virtualization efforts so that the business outcomes, as well as the desired IT efficiencies, are accomplished.

Areas that typically need to be part of any strategic virtualization drive include sufficient education, skills acquisition, and training. Outsourcing, managed mixed sourcing, and consulting around implementation and operational management are also essential. Then, there are the usual needs around hardware, platforms, and systems, as well as software, testing, and integration.

So, we’re here with a panel of Hewlett Packard (HP) executives to examine in-depth the challenges of large scale successful virtualization adoption. We’ll look at how a supplier like HP can help fill the gaps that can hinder virtualization payoffs.

Please join me in welcoming our panel: Tom Clement, worldwide portfolio manager in HP Education Services; Bob Meyer, virtualization solutions lead with HP Enterprise Business; Dionne Morgan, worldwide marketing manager at HP Technology Services; Ortega Pittman, worldwide product marketing, HP Enterprise Services, and Ryan Reed, worldwide marketing manager at HP Enterprise Business. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Meyer: The downturn has really forced anybody who was on the fence to go headlong into virtualization. Today, we are technically ahead of where we were a year or two ago with the virtualization experience.

Everybody has significant amounts of virtualization in the production environment. They’ve been able to get a handle on what it can do and see what the real results and tangible benefits are. They can see, especially on the capital expenditure side, what it could do for the budgets and what benefits it can deliver.

Now, looking forward, people realize the benefits, and they are not looking at it just as an endpoint. They're looking down the road and saying, "Okay, this technology is foundational for cloud computing and some other things." Rather than slowing down, we’ll see those workloads increase.

They went from just single percentage points a year and a half ago to 12-15 percent now. Within two years, people are saying it should be about 50 percent. The technology has matured. People have a lot of experience with it. They like what they see in results, and, rather than slow down, it's bringing efficiency to things like the new services model.

Morgan: Many people have probably heard the term "virtual machine sprawl" or "VM sprawl," and that's one of the risks. Part of the reason VM sprawl occurs is because there are no clear defined processes in place to keep the virtualized environment under control.

Virtualization makes it so easy to deploy a new virtual machine or a new server that, if you don’t have the proper processes in place, you could have more and more of these virtual machines being deployed, and you lose control. You lose track of them.

That's why it's very important for our clients to think about ... how they're going to continue to manage virtualization on an on-going basis, so they keep it under control.
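One concrete form such an ongoing-management process can take is a periodic inventory sweep that flags machines with no registered owner or a lapsed review date. The sketch below is illustrative only; the record fields are assumptions, not any vendor's schema:

```python
# Illustrative anti-sprawl control: flag virtual machines that have
# no registered owner or whose scheduled review date has passed.
from datetime import date

def flag_sprawl(vms, today):
    """Return the names of VMs that need attention."""
    flagged = []
    for vm in vms:
        if vm.get("owner") is None or vm["review_due"] < today:
            flagged.append(vm["name"])
    return flagged

inventory = [
    {"name": "vm-web-01", "owner": "ops", "review_due": date(2010, 6, 1)},
    {"name": "vm-test-17", "owner": None, "review_due": date(2010, 6, 1)},
    {"name": "vm-old-03", "owner": "dev", "review_due": date(2009, 12, 1)},
]
print(flag_sprawl(inventory, date(2010, 2, 1)))  # ['vm-test-17', 'vm-old-03']
```

The point is not the code itself but that the control runs continuously, so orphaned machines surface before they accumulate.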

Pittman: Many times, small, medium, and large organizations have virtualization needs, but might not have the skills on hand.

Meeting that skill demand, getting customers started instantly, and a global track record of doing that well are things HP Enterprise Services takes a lot of pride in and can bring from an outsourcing perspective. That's where HP Enterprise Services adds value in meeting customers' needs around skills.

Clement: Our 30-plus years of experience in providing customer training has shown, time and time again, that technology investments by themselves don’t ensure success.

The business results that clients want in virtualization won’t be achieved until those three elements you just mentioned -- technology, process and people -- are all addressed and aligned.

That's really where training comes in. Increasing the technical skills of our customers' people is often one of the most effective ways for them to grow, increase their productivity and boost the success rates of their virtualization initiatives.

In fact, an interesting study just last year from IDC found that 60 percent of the factors leading to the general success in the IT function are attributed to the skills of people involved. Our education team can help address both the people and process parts of the equation.

For more information on HP's Virtual Services, please go to:

Reed: We see a shift in the way that IT organizations have considered what they think would be strategic to their end business function. A lot of that is driven through the analysis that goes into planning for a virtual server environment.

When doing something like a virtual server environment, the IT organizations have to take a step back and analyze whether or not this is something that they’ve got the core competency to support. Often times, they come to the conclusion that they don’t have the right set of skills, resources, or locations to support those virtual servers in terms of their data-center location, as well as where those resources are sitting.

So, during the planning of virtual server environments, IT organizations will choose to outsource the planning, the implementation, and the ongoing management of that IT infrastructure to companies like HP.

It's definitely a good opportunity for IT organizations to take a step back and look at how they want to have that IT infrastructure managed, and often times outsourcing is a part of that conversation.

Meyer: One thing virtualization does very nicely is blur the connections between the various pieces of infrastructure, and the technology has developed quite a bit to allow that to ebb and flow with the business needs.

And, you're right. The other side of that is getting the people to actually work and plan together. We always talk about virtualization as not an end-point. It's an enabler of technology to get you there.

If you put what we’re talking about in context, the next thing that people want to go to is maybe build a private-cloud service delivery model. Those types of things will depend on that cooperation. It's not just virtualization that's causing that; it's really the newest service delivery models. Where people are heading with their services absolutely requires management and a look at new processes as well.

Pittman: We’d like to work with our customers to understand that consolidation is a starting point, but there is a lot more in the broader ecosystem to consider, as they think about optimizing their IT environment.

One of HP’s philosophies is the whole concept of converged infrastructure. That's thinking about the infrastructure more holistically: addressing the applications, as you said, as well as the server environments, not doing one-offs, so as to get the full benefit.

Moving forward, that's something that we certainly could help customers do from an outsourcing standpoint in enabling all of the parts, so there aren’t gaps that cause bigger problems than the one hiccup that started the whole notion of virtualization in the beginning.

Morgan: We think about this in terms of their life cycle. We like to start with a strategy discussion, where we have consultants sit down with the client to better understand what they’re trying to accomplish from a business objective perspective. We want to make sure that the customers are thinking about this first from the business perspective. What are their goals? What are they trying to accomplish? And, how can virtualization help them accomplish those goals?

Then, we also can help them with their actual return on investment (ROI) analysis and we have ROI tools that we can use to help them develop that analysis. We have experts to help them with the business justification. We try to take it from a business approach first and then design the right virtualization solution to help them accomplish those goals.
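While HP's actual ROI tools are, of course, far richer than this, the core arithmetic behind such an analysis can be sketched in a few lines. All figures here are made-up inputs, not HP's model:

```python
# Minimal ROI sketch: return on investment as
# (total gain - investment) / investment, over a planning horizon.
def virtualization_roi(investment, annual_savings, years):
    gain = annual_savings * years
    return (gain - investment) / investment

# e.g. a $500k virtualization project saving $300k/year over 3 years
print(round(virtualization_roi(500_000, 300_000, 3), 2))  # 0.8, i.e. 80% ROI
```

A real analysis would also discount future savings and fold in soft benefits like reduced downtime, which is exactly where expert help with the business justification comes in.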

Pittman: HP Enterprise Services worked with the Navy/Marine Corps Intranet (NMCI), which is the world’s largest private network, serving and supporting sailors, marines, and civilians in more than 620 locations worldwide.

They were experiencing business challenges in productivity and innovation and in the security areas. Our approach was to consolidate 2,700 physical servers down to 300, reducing outage minutes by almost half. This decreased NMCI’s IT footprint by almost 40 percent and cut carbon emissions by almost 7,000 tons.

Virtualizing the servers in this environment enabled them to eliminate carbon emissions equivalent to taking 3,600 cars off the road for one year. So, there were tremendous improvements in that area. We minimized their downtime and controlled costs, and we accelerated transfer times and improved transparency and performance.

All of this was done through the outsourcing virtualization support of HP Enterprise Services and we're really proud that that had a huge impact. They were recognized for an award, as a result of this virtualization improvement, which was pretty outstanding. We talked a little earlier about the broader benefits that customers can expect, the services that help make all of this happen.

In our full portfolio within the IT organization of HP, that would be server management services, data center modernization, network application services, storage services, web hosting services, and network management services. All combined, they made this happen successfully. We're really proud of that, and that's an example of the very large-scale impact that's reaping a lot of benefit.
Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Business event processing and SOA: Joined at the hip

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg, edited by Ronald Schmelzer

At the dawn of the computer age, forecasters predicted all manner of changes in day-to-day life, including fully automated kitchens, cars that drove themselves, paperless offices, and more. In all of these now-quaint views of the future, the prognostication focused on applying the technologies of the future to the problems of the present. Where an office had a filing cabinet full of paper, the future would put that paper in a computer, and voila: the paperless office!

The reality of the last 50 years, of course, is quite different. As technology evolved, so too did our appetite for information. Where the size of the filing cabinet in a 1950s office constrained the quantity of information a business could manage, today we have no such limitations. But rather than stopping at a simplification of existing business, we continue to push the limits of the technology at our disposal. If we have terabytes of storage, then to be sure we’ll soon have terabytes of information to fill it. If we have networks running at gigabit speeds, then you can rest assured the quantity of information we’ll attempt to push through such pipes will soon consume whatever capacity we have.

In many ways, in fact, it is the quantity and variety of information on the move, rather than simply at rest, that defines the modern business world. Today’s businesses operate in a complex ecosystem of connected, interrelated events, where each event creates information that flies around our networks. From fluctuating interest rates to customer transactions to manufacturing processes, business events drive the business while simultaneously causing the creation and movement of information.

In particular, it is the tight interrelationship between business events (occurrences in the real world that are relevant to the business) and software events (the messages that such occurrences generate) that creates both a problem and an opportunity for businesses. The sheer quantity and variety of events promises to swamp any organization unprepared for the onslaught of network traffic that today’s business generates. But on the other hand, business events are also the lifeblood of the organization, as everything the business does appears in real time in the event traffic on their network. Sometimes the patterns such events exhibit are easy to identify, but more often they are hard to detect and correlate. Organizations need to process and leverage business events to provide insight into the workings of their organization in order to run the business and empower the people within it.

What is business event processing?

The key to leveraging business events is to apply software that processes software events—such as messages on the network—in such a way as to gain insight into, and control over the business events that generate them. Such software is known as Business Event Processing (BEP). BEP software helps businesses detect, analyze, and respond to complex events to take advantage of emerging opportunities, handle unexpected exceptions, and redirect resources as necessary—essentially, dealing with business events on the business level, independent of the technology context for those events. BEP software often forms part of a Business Process Management (BPM) solution, which combines event pattern detection with dynamic process execution.

The goal of BEP is to detect and interpret business situations, resulting in effective business decision making. BEP enables organizations to extract events from multiple sources, detect business situations based on patterns of events, and then derive new events through aggregation of events as well as by adding new information. BEP software helps companies identify patterns and establish connections between events, and then initiates a new event, or a trigger, when an important trend emerges.
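As a toy illustration of that detect-and-derive loop (real BEP engines use rule languages, sliding windows, and far richer correlation; the event shapes here are invented for the example):

```python
# Toy event derivation: aggregate raw events and emit a new,
# higher-level event when a simple pattern (a count threshold)
# is detected -- the basic shape of BEP, not a real engine.
def derive_events(events, threshold):
    counts = {}
    derived = []
    for ev in events:
        key = (ev["type"], ev["item"])
        counts[key] = counts.get(key, 0) + 1
        if counts[key] == threshold:  # pattern detected: emit derived event
            derived.append({"type": "trend_detected",
                            "source_type": ev["type"],
                            "item": ev["item"]})
    return derived

stream = [{"type": "sale", "item": "widget"}] * 3 + [{"type": "sale", "item": "gadget"}]
print(derive_events(stream, threshold=3))
```

The derived "trend_detected" event is itself just another event, which downstream consumers, such as a BPM process, can subscribe to without caring about the raw traffic beneath it.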

BEP is becoming increasingly important across the business environment because it enables a wide variety of organizations to proactively analyze and respond to small market changes that can have significant business impact. BEP also has a variety of other uses, for example:
  • Retailers’ BEP solutions proactively alert them about the success or failure of a product as goods move off the shelf, allowing them to make real time changes to pricing, inventory, and marketing campaigns

  • E-commerce vendors leverage BEP to help identify fraud and reduce abandoned shopping carts

  • Trading markets use BEP to uncover and compare minute changes throughout global markets to support buy/sell decisions as well as to ensure the timely execution of bids

  • The massive multi-player online game industry uses BEP to uncover unauthorized activities among tens of thousands of actions per second

  • Fleet management companies leverage BEP to help them make instantaneous decisions on how to deal with products that are lost in transit or delayed due to unforeseen circumstances.
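To make the e-commerce fraud case above concrete, here is a hypothetical sliding-window check that flags a card used more than a set number of times within a short interval. The transaction format and thresholds are assumptions for illustration, not any vendor's rule set:

```python
# Hypothetical sliding-window fraud check: alert when one card
# appears more than `limit` times within `window_s` seconds.
from collections import defaultdict, deque

def fraud_alerts(transactions, window_s=60, limit=3):
    recent = defaultdict(deque)   # card -> timestamps inside the window
    alerts = []
    for ts, card in transactions:  # assumed sorted by timestamp
        q = recent[card]
        q.append(ts)
        while q and ts - q[0] > window_s:  # evict expired timestamps
            q.popleft()
        if len(q) > limit:
            alerts.append((ts, card))
    return alerts

txns = [(0, "A"), (10, "A"), (20, "A"), (30, "A"), (200, "A")]
print(fraud_alerts(txns))  # [(30, 'A')] -- the fourth hit within 60s
```

The same window-and-threshold shape underlies several of the other uses listed, from trading-market anomalies to unauthorized game activity; only the event types and time scales change.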
Business event processing in an SOA context

BEP describes a wide range of ways that enterprises approach events, from simple to complex. Opening an account, making a withdrawal, buying an item, changes in sensor or meter readings, or sending an invoice are all examples of common business events. Regardless of the potential complexity of such events, organizations must both recognize new events and understand the importance of business critical events in a noisy environment. Only by recognizing important events in real time will such companies be able to leverage their IT systems and business processes to speed response and reduce the need for manual processing.

Events, however, do not exist in a vacuum—they depend upon various applications and systems across the IT infrastructure to create and consume them. And for every type of application, there is potentially a new type of event. The BEP challenge, therefore, is dealing with environments of broad heterogeneity, separating what’s important to the business from the underlying complexity of the technology.

In other words, BEP does not stand alone in the IT organization. It requires a flexible architecture that can abstract the underlying heterogeneity the IT environment presents. Today’s enterprises are implementing Service-Oriented Architecture (SOA) for this purpose. SOA is a set of best practices for organizing IT resources in a flexible way to support agile business processes by representing IT capabilities and information as Services, which abstract the complexity of the underlying technology from the business. Businesses then define events and their responses through the IT perspective of interacting with Services.
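As a loose illustration of that abstraction (all names here are hypothetical, invented for this sketch), two heterogeneous backends can sit behind a single Service contract, so the business-level event looks the same regardless of which technology produced it:

```python
from abc import ABC, abstractmethod

class AccountService(ABC):
    """Service contract: the business sees only this interface,
    never the technology behind it."""
    @abstractmethod
    def open_account(self, customer_id: str) -> dict: ...

class MainframeBackend(AccountService):
    # In a real SOA this might wrap a legacy system; here it simply
    # returns a canonical business event.
    def open_account(self, customer_id):
        return {"event": "AccountOpened", "customer": customer_id,
                "source": "mainframe"}

class CloudBackend(AccountService):
    def open_account(self, customer_id):
        return {"event": "AccountOpened", "customer": customer_id,
                "source": "cloud"}

def handle(service: AccountService, customer_id: str) -> dict:
    # Callers consume the same business event no matter which
    # implementation sits behind the Service.
    return service.open_account(customer_id)
```

The point of the sketch is the contract, not the classes: BEP can define events against `AccountService` without knowing or caring which backend is in play.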

Applying SOA to business events: Heterogeneity and flexibility

The story of how to apply SOA to business events takes place simultaneously on two levels: above and below the Service abstraction. Above the abstraction is the business environment, where the business can leverage BEP to glean real-time information independent of the underlying technology. Below the abstraction, events are messages moving from one Service endpoint to another, typically (but not necessarily) in XML format.

It is below the Service abstraction, in fact, that applying SOA to business events provides much of its value to the organization. Service interfaces, by their nature, send and/or receive messages, so the broader the SOA implementation, the more the message traffic between Service endpoints represents the operations of the business. From the BEP perspective, however, such messages are events, and they provide ad hoc, real-time visibility into the business.
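A rough sketch of that reinterpretation, assuming a hypothetical XML order message captured between two Service endpoints (the message format, namespace, and field names are invented for illustration): the BEP layer pulls out the fields the business cares about and discards the plumbing.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML message flowing between two Service endpoints.
message = """
<order xmlns="urn:example:orders">
  <id>A-1042</id>
  <amount currency="USD">980.00</amount>
</order>
"""

NS = {"o": "urn:example:orders"}

def message_to_event(xml_text):
    """Reinterpret a Service message as a business event: extract
    the business-relevant fields, drop the transport details."""
    root = ET.fromstring(xml_text)
    return {
        "type": "OrderPlaced",
        "order_id": root.findtext("o:id", namespaces=NS),
        "amount": float(root.findtext("o:amount", namespaces=NS)),
    }

event = message_to_event(message)
```

The same message that one endpoint treats as a request, the BEP perspective treats as an observable business fact: an order was placed.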

While the SOA-enabled BEP story offers business value beneath the Service abstraction, the benefits above the abstraction are every bit as important. After all, as the pace of business accelerates, there are real gains to be had from optimizing how the organization handles business events: improved customer responsiveness, better utilization of physical assets, and tighter management of complex value chains all follow from improvements in event processing. Furthermore, when managers have visibility into business events, they can take more effective, proactive steps to optimize production and reduce costly slowdowns.

Similarly, event processing can improve customer service and increase customer satisfaction. Because event processing identifies important events and delivers the right information to the right place at the right time, managers can mitigate or avoid a wide range of problems. Such benefits accrue not only in individual instances but also across entire business processes. Visibility into events helps line-of-business managers deal with changes in business processes, making the business more responsive.

Combined with SOA and BPM, therefore, BEP extends the value of each as well as the synergies between them. Following SOA best practices amplifies the value of both BPM and BEP, because SOA hides the complexity of the IT environment from the business aspects of the solution. The bottom line is that BPM, SOA, and BEP together meet the needs of the business more effectively than any one or two of these approaches can separately.

The ZapThink take

The exponential growth of information in the business world continues unabated, and there’s no reason to expect it to slacken in the future. This growth is driving the need for event processing, along with the enabling technologies of Web 2.0 and the underlying architecture of SOA. The combination of these three approaches provides a foundation for flexibility, composability, integration, and scalability. At the heart of this synergy are open standards, which facilitate the various interactions among systems that go into business event processing. Furthermore, existing security, governance, and BPM technologies round out the set of enabling technologies that feed this confluence of approaches.

The bottom line, however, is the business story. BEP, combined with SOA, further bridges the gap between business and IT: only the business knows the relevance of business events, SOA abstracts the underlying technology, and Web 2.0 provides an empowering interface to increasingly powerful real-time capabilities and information.

The challenge with discussing this synergy among BEP, SOA, and Web 2.0 is that no one term does it justice. SOA is a critical part of this story, but only a part. SOA delivers a set of principles for organizing an organization’s resources to provide a business-centric abstraction, because the business doesn’t care what server, network, or data center the implementation underlying a Service runs on. All the business cares about is that the Service works as advertised.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at