Wednesday, February 16, 2011

Expert panel: As cyber security risks grow, architected protection and best practices must keep pace

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Looking back over the past few years, it seems like cyber security and warfare threats are only getting worse. We've had the Stuxnet Worm, the WikiLeaks affair, China-originating attacks against Google and others, and the recent Egypt Internet blackout.

But, are cyber security dangers, in fact, getting that much worse? And are perceptions at odds with what is really important in terms of security protection? How can businesses best protect themselves from the next round of risks, especially as cloud, mobile, and social media and networking activities increase? How can architecting for security become effective and pervasive?

We posed these and other serious questions to a panel of security experts at the recent Open Group Conference, held in San Diego the week of Feb. 7, to examine the coming cyber security business risks and ways to head them off.

The panel: Jim Hietala, Vice President of Security at The Open Group; Mary Ann Mezzapelle, Chief Technologist in the CTO's Office at HP; and Jim Stikeleather, Chief Innovation Officer at Dell Services. The discussion was moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Stikeleather: The only secure computer in the world right now is the one that's turned off in a closet, and that's the nature of things. You have to make decisions about what you're putting on and where you're putting it. It's a big concern that if we don't get better with security, we run the risk of people losing trust in the Internet and trust in the web.

When that happens, we're going to see some really significant global economic concerns. If you think about our economy, it's structured around the way the Internet operates today. If people lose trust in the transactions that are flying across it, then we're all going to be in a pretty bad world of hurt.

One of the things that you're seeing now is a combination of security factors. When people talk about the break-ins, you're seeing more people actually having discussions of what's happened and what's not happening. You're seeing a new variety in the types of break-ins and the types of exposure that people are experiencing. You're also seeing more organization and sophistication on the part of the people who are actually breaking in.

The other piece of the puzzle has been that legal and regulatory bodies step in and say, "You are now responsible for it." Therefore, people are paying a lot more attention to it. So, it's a combination of all these factors that are keeping people up at night.

A major issue in cyber security right now is that we've never been able to construct an intelligent return on investment (ROI) for cyber security.

There are two parts to that. One, we've never been truly able to gauge how big the risk really is. For one person it may be a 2, for most people it's probably a 5 or a 6, and some people may be sitting there at a 10. But you need to be able to gauge the magnitude of the risk. Two, we've never done a good job of saying what exactly the exposure is if the actual event took place. It's the calculation of those two that tells you how much you should be able to invest in order to protect yourself.
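The two quantities Stikeleather describes -- the likelihood of an event and the exposure if it occurs -- are often combined as an annualized loss expectancy (ALE), which puts a ceiling on rational security spending. A minimal sketch, with all figures hypothetical:

```python
# Annualized loss expectancy (ALE) sketch: gauge the magnitude of the risk
# (how often an event is expected) and the exposure (what it costs when it
# happens), then multiply to bound how much protection is worth buying.
# All figures below are hypothetical, for illustration only.

def annualized_loss_expectancy(single_loss_exposure: float,
                               annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss from one threat: cost per event x events per year."""
    return single_loss_exposure * annual_rate_of_occurrence

# Example: a breach costing $500,000, expected once every four years.
ale = annualized_loss_expectancy(500_000, 0.25)
print(ale)  # 125000.0 -- spending much more than this per year on controls
            # for this one threat is hard to justify
```

The point is not precision -- both inputs are rough estimates -- but that multiplying them gives a defensible upper bound for investment against a given threat.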

We're starting to see a little bit of a sea change, because starting with HIPAA-HITECH in 2009, for the first time, regulatory bodies and legislatures have put criminal penalties on companies who have exposures and break-ins associated with them.

So we're no longer talking about ROI. We're starting to talk about risk of incarceration, and that changes the game a little bit. You're beginning to see more and more companies do more in the security space.

Mezzapelle: First of all, we need to make sure that they have a comprehensive view. In some cases, it might be a portfolio approach, which is new to most people in the security area. Some of my enterprise customers have more than 150 different security products that they're trying to integrate.

Their issue is around complexity, integration, and just knowing their environment -- what levels they are at, what they are protecting and not, and how does that tie to the business? Are you protecting the most important asset? Is it your intellectual property (IP)? Is it your secret sauce recipe? Is it your financial data? Is it your transactions being available 24/7?

It takes some discipline to go back to that InfoSec framework and make sure that you have that foundation in place, to make sure you're putting your investments in the right way.

... It's about empowering the business, and each business is going to be different. If you're talking about a Department of Defense (DoD) military implementation, that's going to be different than a manufacturing concern. So it's important that you balance the risk, the cost, and the usability to make sure it empowers the business.

Hietala: One of the big things that's changed that I've observed is if you go back a number of years, the sorts of cyber threats that were out there were curious teenagers and things like that. Today, you've got profit-motivated individuals who have perpetrated distributed denial of service attacks to extort money.

Now, they've gotten more sophisticated and are dropping Trojan horses on CFOs' machines to try to exfiltrate passwords and logins to bank accounts.

We had a case pop up in our newspaper in Colorado, where a title company lost a million dollars' worth of mortgage money -- loans in the process of funding. All of a sudden, five homeowners were faced with paying two mortgages, because there was no insurance against that.

When you read through the details of what happened, it was clearly a Trojan horse that had been put on this company's system. Somebody was able to walk off with a million dollars' worth of these people's money.

State-sponsored acts

So you've got profit-motivated individuals on one side, and you've also got some things happening from another part of the world that look like they're state-sponsored, going after corporate IP and defense-industry and government sites. So the motivation of the attackers has fundamentally changed, and the threat really seems pretty pervasive at this point.

Complexity is a big part of the challenge, with changes like those on the client side -- mobile devices gaining more power and more ability to access and store information -- and cloud. On the other side, we've got a lot more complexity in the IT environment, and much bigger challenges for the folks who are tasked with securing things.

Stikeleather: One other piece of it is that it requires an increased amount of business knowledge on the part of the IT group and the security group, to be able to assess where the IP is, which data is most valuable, and where to put the emphasis.

One of the things that people get confused about is, depending upon which analyst report you read, most data is lost by insiders, most data is lost from external hacking, or most data is lost through email. It really depends. Most IP is lost through email and social media activities. Most data, based upon a recent Verizon study, is being lost by external break-ins.

We've always kind of had a one-size-fits-all mindset about security. When you move from just "I'm doing security" to "I'm doing risk mitigation and risk management," then you have to start doing portfolio and investment analysis in making those kinds of trade-offs.

... At the end of the day it's the incorporation of everything into enterprise architecture, because you can't bolt on security. It just doesn't work. That’s the situation we're in now. You have to think in terms of the framework of the information that the company is going to use, how it's going to use it, the value that’s associated with it, and that's the definition of EA.

... It's one of the reasons we have so much complexity in the environment: every time something happens, we go out and buy a tool to protect against that one thing, as opposed to saying, "Here are my staggered defenses, here's how I'm going to protect what is important to me, and I accept the fact that nothing is perfect and some things I'm going to lose."

Mezzapelle: It comes back to one of the bottom lines about empowering the business. It means that not only do the IT people need to know more about the business, but the business needs to start taking ownership for the security of its own assets, because they are the ones who are going to have to bear the loss, whether it's data, financial, or whatever.

They need to really understand what that means, and we as IT professionals need to be able to explain it, because it's not common sense. We need to connect the dots and we need to have metrics. We need to look at it from an overall threat point of view, and it will be different depending on what kind of company you are.

You need to have your own threat model: who you think the major actors would be and how you prioritize your money, because security is an unending bucket that you can pour money into. You need to prioritize.
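Mezzapelle's point about prioritizing an unending bucket can be made concrete: score each threat in your model by expected loss and fund mitigations in that order until the budget is gone. A hypothetical sketch (threat names, probabilities, and costs are invented for illustration):

```python
# Rank threats by expected loss (likelihood x impact) and allocate a fixed
# security budget greedily, funding the worst expected losses first.
# All threats, probabilities, and dollar figures below are hypothetical.

threats = [
    # (name, annual probability, impact in $, mitigation cost in $)
    ("phishing/credential theft", 0.60, 200_000, 50_000),
    ("insider data leak",         0.10, 800_000, 120_000),
    ("DDoS extortion",            0.25, 150_000, 40_000),
]

budget = 150_000
funded = []
# Sort by expected loss (probability x impact), highest first.
for name, prob, impact, cost in sorted(threats, key=lambda t: t[1] * t[2],
                                       reverse=True):
    if cost <= budget:
        budget -= cost
        funded.append(name)

print(funded)  # -> ['phishing/credential theft', 'DDoS extortion']
```

A real threat model would also weigh residual risk and partial mitigations, but even this greedy ordering forces the prioritization discussion the panel calls for.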

The way that we've done that is with a multi-pronged approach. We communicate with and educate the software developers, so that they start taking ownership for security in their software products, and we make sure that that gets integrated into every part of the portfolio.

The other part is to have a reference architecture, so that there are common services available to the other services as they're being delivered, and we can, if not control it, at least manage it from a central place.

Stikeleather: The starting point is really architecture. We're actually at a tipping point in the security space, and it comes from what's taking place in the legal and regulatory environments, with more and more laws being applied to privacy, IP, jurisdictional data location, and a whole series of things that the regulators and the lawyers are putting on us.

One of the things I ask people when we talk to them is: what is the one application that every company in the world has outsourced? They think about it for a minute, and they all say payroll. Nobody does their own payroll any more. Even the largest companies don't do their own payroll. It's not because it's difficult to run payroll. It's because you can't afford all of the lawyers and accountants necessary to keep up with all of the jurisdictional rules and regulations for every place that you operate in.

Data itself is beginning to fall under those types of constraints -- in a lot of cases, it's medical data. For example, Massachusetts just passed a major privacy law, and PCI is being extended to anybody who takes credit cards.

The security issue is now a data governance and compliance issue as well. Because all these adjacencies are coming together, it's a good opportunity to sit down and architect with a risk management framework: how am I going to deal with all of this information?

Risk management

Hietala: I go back to the risk management issue. That's something that I think organizations frequently miss. There tends to be a lot of tactical security spending based upon the latest widget, the latest perceived threat -- buy something, implement it, and solve the problem.

Taking a step back from that and really understanding what the risks are to your business, and what the impacts of bad things happening would be, is doing a proper risk analysis. Risk assessment is what ought to drive decision-making around security. That's a fundamental thing that gets lost in a lot of organizations that are trying to grapple with security problems.

Stikeleather: I can argue both sides of the [cloud security] equation. On one side, I've argued that cloud can be much more secure. If you think about it -- and I'll pick on Google -- Google can spend a lot more on security than any other company in the world, probably more than the federal government will spend on security. The amount of investment does not necessarily equate to quality of investment, but one would hope that they will have a more secure environment than a regular company will have.

On the flip side, there are more tantalizing targets, and therefore they're going to draw more sophisticated attacks. I've also argued that there's a statistical probability of break-in. If somebody is trying to break into Google, and you're on Google running Google Apps or something like that, the probability of them getting your specific information is much less than if they attack XYZ enterprise directly. If they break in there, they're going to get your stuff.

Recently I was meeting with a lot of NASA CIOs, and they think that the cloud is actually probably a little bit more secure than what they can do individually. On the other side of the coin, it depends on the vendor. You have to do your due diligence, like with everything else in the world. I believe that, as we move forward, cloud is going to give us an opportunity to reinvent how we do security.

I've often argued that a lot of what we're doing in security today is fighting the last war, as opposed to fighting the current war. Cloud is going to introduce some new techniques and new capabilities. You'll see more systemic approaches, because somebody like Google can't afford to put in 150 different types of security. They will put in one that's more integrated. They will put in, to Mary Ann's point, the control panels and everything that we haven't seen before.

So, you'll see better security there. In the interim, however, a lot of the software-as-a-service (SaaS) providers, and some of the simpler platform-as-a-service (PaaS) providers, haven't made that kind of investment. You're probably not as secure in those environments.

Lowers the barrier

Mezzapelle: For small and medium-sized businesses, cloud computing offers the opportunity to be more secure, because they don't necessarily have the maturity of processes and tools to address those kinds of things themselves. So, it lowers the barrier to entry for being secure.

For enterprise customers, cloud solutions need to develop and mature more. They may want to go with a hybrid solution right now, where they have more control, the ability to audit, and more influence over things through specialized contracts, which are not usually the business model for cloud providers.

I would disagree with Jim Stikeleather in some aspects. Just because a large provider on the Internet is creating a cloud service doesn't mean security was a key guiding principle in developing a low-cost or free product. So, size doesn't always mean secure.

You have to know about it, and that's where the sophistication of the business user comes in, because cloud is being bought by the business user, not by the IT people. That's another component that we need to make sure gets incorporated into the thinking.

Stikeleather: I am going to reinforce what Mary Ann said. What's going on in the cloud space is almost a re-creation of the late '70s and early '80s, when PCs came into organizations. It's the businesspeople who are acquiring the cloud services, which again reinforces the concept of governance and education. They need to know what it is that they're buying.

I absolutely agree with Mary Ann. I didn't mean to imply that size means more security, but I do think that the expectation, especially for small and medium-sized businesses, is that they will get a more secure environment than they can produce for themselves.

Hietala: There are a number of different groups within The Open Group doing work to ensure better security in various areas. The Jericho Forum is tackling identity issues as they relate to cloud computing. There will be some new work coming out of them over the next few months that lays out some of the tough issues there and presents some approaches to those problems.

We also have the Open Trusted Technology Forum (OTTF) and the Trusted Technology Provider Framework (TTPF), which are being announced here at this conference. They're looking at supply chain issues related to IT hardware and software products at the vendor level. It's very much an industry-driven initiative and will benefit government buyers, as well as large enterprises, by providing some assurance that the products they're procuring are secure, good commercial products.

Also, in the Security Forum, we have a lot of work going on in security architecture and information security management. There are a number of projects aimed at practitioners, providing them the guidance they need to do a better job of securing, whether it's a traditional enterprise IT environment, cloud, and so forth. Our Cloud Computing Work Group is doing work on a cloud security reference architecture. So, there are a number of different security activities going on in The Open Group related to all this.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.


Tuesday, February 15, 2011

HP offers framework for one-stop data center transformation

As more companies look toward building or expanding data centers, HP has announced today a comprehensive service that simplifies the process of designing and building data centers by offering design, construction and project management from a single vendor.

The new HP Critical Facilities Implementation service (CFI) enables clients to realize faster time-to-innovation and lower cost of ownership by providing a single integrator that delivers all the elements of a data center design-build project from start to finish. An extension of the HP Converged Infrastructure strategy, HP CFI is an architectural blueprint that allows clients to align and share pools of interoperable resources. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

A recent Gartner survey indicated that 46 percent of respondents reported that they will build one or more new data centers in the next two years, and 54 percent expected that they will need to expand an existing data center in that time frame.

“Constructing a data center is an enormous undertaking for any business, and taking an integrated approach with a single vendor will help maximize cost and efficiency, while reducing headaches,” said Dave Cappuccio, research vice president, Gartner. “As customers’ data center computing requirements add complexity to the design-build process, comprehensive solutions that provide clients with an end-to-end experience will allow them to realize their plans within the required timeframe and constraints.”

Extensive experience

Based on its experience in “greenfield” and retrofit construction, HP is delivering CFI for increased efficiency when designing and building data centers. The company draws on its experience in designing more than 50 million square feet of raised-floor data center space and its innovations in design engineering to create fully integrated facility and IT solutions.

Benefits of CFI include:
  • HP’s management of all of the elements of the design-build project and vision of integrating facilities development with IT strategy.
  • A customized data center implementation plan that is scalable and flexible enough to accommodate clients' existing and future data center needs.
  • Access to experience based on a track record of delivering successful customer projects around data center planning and design-build. These projects include the world’s first LEED-certified data center, the first LEED GOLD-certified data center, India’s first Uptime Institute Tier III-rated data center as well as more than 60 “greenfield” sites, including 100-megawatt facilities.
HP CFI is available through HP Critical Facilities Services. Pricing varies according to location and implementation. More information is available at www.hp.com/services/cfi.


Adaptive Computing beefs up data center and private cloud management with Moab 6.0

Adaptive Computing has announced a major upgrade to its data-center and private-cloud management solution. The new version addresses growing enterprise demand to quickly deploy infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) or software-as-a-service (SaaS) from a centralized and intelligent data center infrastructure.

The Provo, Utah company's Moab Adaptive Computing Suite 6.0 provides increased agility and automation to decrease the time to value for new cloud services, reduce IT cost and complexity and optimize overall cloud resource utilization.

New capabilities in Moab enhance its ability to automate the process of identifying and allocating available resources based on business policies, compliance, cost, and performance goals. Ensuring that applications and infrastructure are automatically deployed properly the first time avoids service failures, optimizes resource usage, and reduces cost, the company said.

“Cloud solutions ultimately need to evolve in complexity from rapid provisioning to an automated, self-optimizing environment,” said Michael Jackson, president and COO of Adaptive Computing. “With Moab Adaptive Computing Suite 6.0, enterprises can create an optimized cloud environment that has the agility to respond to business requests via sophisticated automation. Innovative partners such as HP recognize this need for intelligent resource management and rely on Moab technology to help their customers further extend the value and return on investment of their cloud infrastructures.” [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Self-service projects

Moab Adaptive Computing Suite 6.0 enables organizations to evolve self-service cloud projects into rich, self-optimizing or intelligent workload-driven clouds. Its policy-based cloud intelligence engine works in tandem with existing data center management and resource investments to create an agile cloud environment that responds faster to business requests and automates across IT processes.

Product enhancements include:
  • Automatically understanding and managing a mix of application workloads. In addition, the solution automatically initiates live migration of virtual machine workloads to meet service needs and maximize utilization.
  • Rapidly automating resource allocation and provisioning and de-provisioning hardware resources via policies to reduce IT cost and management complexity.
  • Providing billing or “showback” of resource usage costs to ensure that services are delivered within standards and compliance requirements.
  • Aggregating data for rich context to automate initial resource allocation decisions and policy-based actions through integration with other resource managers, data repositories, and identity management systems.

Sunday, February 13, 2011

Good insights from The Open Group Cloud Conference -- and its unconference brethren

This guest post comes courtesy of Dr. Chris Harding, Director for Interoperability and SOA at The Open Group.

By Dr. Chris Harding


The Open Group Conference in San Diego last week included a formal cloud computing conference stream on Wednesday, followed in that evening by an unstructured CloudCamp -- which made an interesting contrast.

The cloud conference stream

The Cloud Conference stream featured presentations on architecting for cloud and cloud security, and included a panel discussion on the considerations that must be made when choosing a cloud solution.

In the first session of the morning, we had two presentations on architecting for cloud. Both considered TOGAF as the architectural context. The first, from Stuart Boardman of Getronics, explored the conceptual difference that cloud makes to enterprise architecture, and the challenge of communicating an architecture vision and discussing the issues with stakeholders in the subsequent TOGAF phases.

The second, from Serge Thorn of Architecting the Enterprise, looked at the considerations in each TOGAF phase, but in a more specific way. The two presentations showed different approaches to similar subject matter, which proved a very stimulating combination.

This session was followed by a presentation from Steve Else of EA Principals in which he shared several use cases related to cloud computing. Using these, he discussed solution architecture considerations, and put forward the lessons learned and some recommendations for more successful planning, decision-making, and execution. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

We then had the first of the day’s security-related presentations. It was given by Omkhar Arasaratnam of IBM and Stuart Boardman of Getronics. It summarized the purpose and scope of the Security for the Cloud and SOA project that is being conducted in The Open Group as a joint project of The Open Group's Cloud Computing Work Group, the SOA Work Group, and Security Forum. Omkhar and Stuart described the usage scenarios that the project team is studying to guide its thinking, the concepts that it is developing, and the conclusions that it has reached so far.

The first session of the afternoon was started by Ed Harrington of Architecting the Enterprise, who gave an interesting presentation on current U.S. Federal Government thinking on enterprise architecture, showing clearly the importance of cloud computing to U.S. Government plans.

The U.S. is a leader in the use of IT for government and administration, so we can expect that its conclusions – that cloud computing is already making its way into the government computing fabric, and that enterprise architecture, instantiated as SOA and properly governed, will provide the greatest possibility of success in its implementation – will have a global impact.

We then had a panel session, moderated by Dana Gardner with his usual insight and aplomb, that explored the considerations that must be made when choosing a cloud solution — custom or shrink-wrapped — and whether different forms of cloud computing are appropriate to different industry sectors.

The panelists represented different players in the cloud solutions market – customers, providers, and consultants – so that the topic was covered in depth and from a variety of viewpoints. They were Penelope Gordon of 1Plug Corp., Mark Skilton of Capgemini, Ed Harrington of Architecting the Enterprise, Tom Plunkett of Oracle, and TJ Virdi of the Boeing Co.

In the final session of the conference stream, we returned to the topic of cloud security. Paul Simmonds, a member of the Board of the Jericho Forum, gave an excellent presentation on de-risking the cloud through effective risk management, in which he explained the approach that the Jericho Forum has developed. The session was concluded by Andres Kohn of Proofpoint, who addressed the question of whether data can be more secure in the cloud, considering public, private, and hybrid cloud environments.

CloudCamp

The CloudCamp was hosted by The Open Group but run as a separate event, facilitated by CloudCamp organizer Dave Nielsen. There were around 150-200 participants, including conference delegates and other people from the San Diego area who happened to be interested in the cloud.

Dave started by going through his definition of cloud computing. Perhaps he should have known better – starting a discussion on terminology and definitions can be a dangerous thing to do with an Open Group audience. He quickly got into a good-natured argument from which he eventually emerged a little bloodied, metaphorically speaking, but unbowed.

We then had eight “lightning talks”. These were five-minute presentations covering a wide range of topics, including how to get started with cloud (Margaret Dawson, Hubspan), supplier/consumer relationship (Brian Loesgen, Microsoft), cloud-based geographical mapping (Ming-Hsiang Tsou, San Diego University), a patterns-based approach to cloud (Ken Klingensmith, IBM), efficient large-scale data processing (Alex Rasmussen, San Diego University), using desktop spare capacity as a cloud resource (Michael Krumpe, Intelligent Technology Integration), cost-effective large-scale data processing in the cloud (Patrick Salami, Temboo), and cloud-based voice and data communication (Chris Matthieu, Tropo).

The participants then split into groups to discuss topics proposed by volunteers. There were eight topics altogether. Some of these were simply explanations of particular products or services offered by the volunteers’ companies. Others related to areas of general interest such as data security and access control, life-changing cloud applications, and success stories relating to “big data”.

I joined the groups discussing cloud software development on Amazon Web Services (AWS) and Microsoft Azure. These sessions had excellent information content which would be valuable to anyone wishing to get started in – or who is already engaged in – software development on these platforms.

They also brought out two points of general interest. The first is that the dividing line between IaaS and PaaS can be very thin. AWS and Azure are, in theory, on opposite sides of this divide; in practice they provide the developer with broadly similar capabilities. The second point is that in practice your preferred programming language and software environment is likely to be the determining factor in your choice of cloud development platform.

Overall, the CloudCamp was a great opportunity for people to absorb the language and attitudes of the cloud community, to discuss ideas, and to pick up specific technical knowledge. It gave an extra dimension to the conference, and we hope that this can be repeated at future events by The Open Group.

This guest post comes courtesy of Dr. Chris Harding, Director for Interoperability and SOA at The Open Group.


Friday, February 11, 2011

Infosys survey shows enterprise architecture and business architecture on common ascent to strategy enablers

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Join a panel discussion on the current state of enterprise architecture (EA) that analyzes some new findings on the subject from Infosys Technologies' recently completed annual survey.

See how the architects themselves are defining the EA team concept, how enterprise architects are dealing with impact and engagement in their enterprises, and the latest definitions of EA deliverables and objectives. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

We'll also look at where the latest trends around hot topics like cloud and mobile are pushing the enterprise architects. Toward a new future?

Assembled to delve into the current state of EA and the survey results are Len Fehskens, Vice President of Skills and Capabilities at The Open Group; Nick Hill, Principal Enterprise Architect at Infosys Technologies; Dave Hornford, Architecture Practice Principal at Integritas, as well as Chair of The Open Group’s Architecture Forum; Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities for The Open Group; Andrew Guitarte, Enterprise Business Architect of Internet Services at Wells Fargo Bank, and Ahmed Fattah, Executive IT Architect in the Financial Services Sector for IBM, Australia. The panel is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Hill: There were some things that were different about this year’s survey. There are several major takeaways.

More and more, the business is taking hold of the value that enterprise architects bring to the table, enterprise architects have been able to survive the economic troubled times, and some companies have even increased their investment in EA.

If you took a look at this year’s survey compared to 2007-2008 surveys, largely they’ve come from core IT with some increase from the business side, business architects and some increase in project managers. The leader of the EA group is still reporting through the IT chain either to the CIO or the CTO.

We also introduced the notion of hot topics. So, we had some questions around cloud computing. And, we took a more forward-looking view in terms of not so much what has been transpiring with enterprise architectures since the last survey, but what are they looking to go forward to in terms of their endeavors. And, as we have been going through economic turmoil over the past 2-3 years, we asked some questions about that.

We did notice that in terms of the team makeup, a lot of the sort of the constituents of the EA group are pretty much still the same, hailing from largely the IT core enterprise group. We looked at the engagement and impacts that they have had on their organizations and, as well, whether they have been able to establish the value that we've noticed that enterprise architects have been trying to accomplish over the past 3-4 years.

This was our fifth annual survey. We did try to do some comparative results from previous surveys and we found that some of things were the same, but there are some things that are shifting in terms of EA.

Forde: In terms of the dynamics of EA, we're constantly trying to justify why enterprise architects should exist in any organization. That's actually no different than most other positions are being reviewed on an ongoing basis, because of what the value proposition is for the organization.

Certifying architects

What I'm seeing in Asia is that a number of academic organizations, universities, are looking for an opportunity to certify enterprise architects, and a number of organizations are initiating, still through the IT organization but at a very high CIO-, CTO-level, the value proposition of an architected approach to business problems.

What I'm seeing in Asia is an increasing recognition of the need for EA, but also a continuing question of, "If we're going to do this, what's the value proposition," which I think is just a reasonable conversation to have on a day-to-day basis anyway.

Fehskens: When you compare EA with all the other disciplines that make up a modern enterprise, it's the new kid on the block. EA, as a discipline, is maybe 20 years old, depending on what you count as the formative event, whereas most of the other disciplines that are part of the modern enterprise at least hundreds of years old.

So, this is both a real challenge and a real opportunity. The other functions have a pretty good understanding of what their business case is They've been around for a long time, and the case that they can make is pretty familiar. Mostly they just have to argue in terms of more efficient or more effective delivery of their results.

For EA, the value proposition pretty much has to be reconstructed from whole cloth, because it didn't really exist, and the value of the function is still not that well understood throughout most of the business.

So, this is an opportunity as well as a challenge, because it forces the maturing of the discipline, unlike some of these older disciplines who had decades to figure out what it was that we're really doing. We have maybe a few years to figure out what it is we're really doing and what we're really contributing, and that helps a lot to accelerate the maturing of the discipline.

EA, when it's well done, people do see the value. When it's not well done, it falls by the side of the road.



I don't think we're there completely yet, but I think EA, when it's well done, people do see the value. When it's not well done, it falls by the side of the road, which is to be expected. There's going to be a lot of that, because of the relative use of the discipline, but we'll get to the point where these other functions have and probably a lot faster than they did.

Hill: I think that’s very much the case. The caveat there is that it's not necessarily an ownership. It's a matter of participation and being able to weigh in on the business transformations that are happening and how EA can be instrumental in making those transformations successful.

Follow through

Now, given that, the idea is that it's been more at a strategic level, and once that strategy is defined and you put that into play within an enterprise the idea is how does the enterprise architect really follow-through with that, if they are more focused on just the strategy not necessarily the implementation of that. That’s a big part of the challenge for enterprise architects -- to understand how they percolate downwards the standards, the discipline of architecture that needs to be present within an organization to enable that strategy in transformation.

Fehskens: One of the things that I am seeing is an idea taking hold within the architecture community that architecture is really about making the connection between strategy and execution.

If you look at the business literature, that problem is one that’s been around for a long time. A lot of organizations evolved really good strategies and then failed in the execution, with people banging their heads against the wall, trying to figure out, "We had such a great strategy. Why couldn’t we really implement it?"

I don’t know that anybody has actually done a study yet, but I would strongly suspect that, if they did, one of the things that they would discover was there wasn’t something that played the role of an architecture in making the connection between strategy and execution.

I see this is another great opportunity for architects, if we can express this idea in language that the businesspeople understand, and strategy to execution is language that businesspeople understand, and we can show them how architecture facilitates that connection. There is a great opportunity for a win-win situation for both the business and the architecture community.

There is a great opportunity for a win-win situation for both the business and the architecture community.



Forde: I just wanted to follow the two points that are right here, and say that the strategy to execution problem space is not at all peculiar to IT architects or enterprise architects. It's a fundamental business problem. Companies that are good at translating that bridge are extremely effective and it's the role of architects in that, that’s the important thing, we have to have the place at the table.

But, to imagine that the enterprise architects are solely responsible for driving execution of a strategy in an organization is a fallacy, in my opinion. The need is to ensure that the team of people that are engaged in setting the strategy and executing on it are compelling enough to drive that through the organization. That is a management and an executive problem, a middle management problem, and then driving down to the delivery side. It's not peculiar to EA at all in my opinion.

Guitarte: From my experience of talking with people from the grassroots to the executive level, I have seen one very common observation, enterprise architects are caught off-guard, and the reason there is that there is this new paradigm. In fact, there is a shift in paradigm that business architecture is the new EA, and I am going out beyond my peers here in terms of predicting the future.

Creating a handbook

That is going to be the future. I am the founding chairman of the Business Architecture Society. Today, I am an advisory member of the Business Architecture Guild. We're writing, or even rewriting, the textbooks on EA. We're creating a handbook for business architects. What my peers have mentioned is that they are bridging the strategy and tactical demands and are producing the value that business has been asking for.

Fattah: The way I see the market is consistent with the results of the survey in that they see the emergence of the enterprise architect as business architect to work on a much wider space and make you focus more on the business. There are a number of catalysts for that. One of them is a business process, the rise of the business process management, as a very important discipline within the organization.

That, in a way, had some roots from Six Sigma, which was really a purely business aspect, but also from service oriented architecture (SOA), which has itself now developed into business process, decomposition and implementation.

That gives very good ammunition and support for the strategic decomposition of the whole enterprise as components that, with business process, is actually connecting elements between this. The business process architect is participating as a business architect using this business process as a major aspect for enabling business transformation.

I'm very encouraged with this development of business architecture. By the way, another catalyst now is a cloud. The cloud will actually purify or modify EA, because all the technical details maybe actually outsourced to the cloud provider, where the essence of what IT will support in the organization becomes the business process.

On one hand, I'm encouraged with the result of the survey and what I’ve seen in the organization, but on the other hand, I am disappointed that EA hasn’t developed these economic and business bases yet. I agree with Len that 20 years is a short time. On the other hand, it’s a long time for not applying this discipline in a consistent way. We’ll get much more penetration, especially with large organization, commercial organization, and not the academic side.

Hornford: I think what is driving [cloud adoption] is the ability to highlight the process or business service requirements, and not tie them to legacy investments that are not decomposed into a cloud. Where you have a separation to a cloud, you’re required to have the ability to improve your execution. The barriers in execution in our current world are very closely tied to our legacy investments in software asset with physical asset which are very closely tied to our organizational structure.

Forde: Any organization that hands over strategic planning or execution activity to a third-party is abdicating its own responsibility to shareholders, as they are a profit-making organizations. So I would not advocate that position at all. You give up control, and that’s not a good situation. You need to be in control of your own destiny. In terms of what Ahmed was talking about, you need to be very careful as you engage with the third-party that they are actually going to implement your strategic intent.

You need to have a really strong idea of what it is you want from the provider, articulating clearly, and set up a structure that allows you to manage and operate that with their strength in the game. If you just simply abdicate that responsibility and assume that that’s going to happen, it’s likely to fail.

Fattah: I agree, on one hand, the organization shouldn't abdicate the core function of the businesses in defining a strategy and then executing it right.

Having a bunch of people labeled as architects is different than having a bunch of people that have the knowledge, skills, and experience to deliver what is expected.



However, an example, which I'm seeing as a trend, but a very slow trend -- outsourcing architecture itself to other organizations. We have one example in Australia of a very large organization, which gives IBM the project execution, the delivery organization. Part of that was architecture. I was part of this to define with the organization their enterprise architecture, the demarcation between what they outsource and what they retain.

Definitely, they have to retain certain important parts, which is strategy and high-level, but outsourcing is a catalyst to be able to define what's the value of this architecture. So the number of architectures within our software organization was looked with a greater scrutiny. They are monitoring the value of this delivery, and value was demonstrated. So the team actually grew; not shrunk.

Forde: In terms of outsourcing knowledge skills and experience in an architecture, this is a wave of activity that's going to be coming. My point wasn't that it wasn't a valid way to go, but you have to be very careful about how you approach it.

My experience out of the Indian subcontinent has been that having a bunch of people labeled as architects is different than having a bunch of people that have the knowledge, skills, and experience to deliver what is expected. But in that region, and in Asia and China in particular, what I'm seeing is a recognition that there is a market there. In North America and in Europe, there is a gap of people with these skills and experience. And folks who are entrepreneurial in their outlook in Asia are certainly looking to fill that gap.

So, Ahmed's model is one that can work well, and will be a burgeoning model over the next few years. You've to build the skill base first.

Why the shift?

Guitarte: There's no disagreement about what's happening today, but I think the most important question is to ask why there is this shift. As Nick was saying, there is a shift of focus, and outsourcing is a symptom of that shift.

If you look back, Dave mentioned that in any organization there are two forces that tried to control the structure. One is the techno structure, which EA belongs to, and the main goal of a techno structure is to perpetrate themselves in power, to put it bluntly. Then, there is the other side, which is the shareholders, who want to maximize profit, and you've seen that cycle go back and forth.

Today, unfortunately, it's the shareholders who are winning. Outsourcing for them is a way to manage cash flow, to control costs, and unfortunately, we're getting hit.

Hill: The whole concept of leveraging the external resources for computing capabilities is something we drove at. We did find the purpose behind that, and it largely plays into our conversation behind the impact of business. It's more of a cost reduction play.

It's almost always the case that the initial driver for the business to get interested in something is to reduce cost.



That's what our survey respondents replied to and said the reason why the organization was interested in cloud was to reduce cost. It's a very interesting concept, when you're looking at why the business sees it as a cost play, as opposed to a revenue-generating, profit-making endeavor. It causes some need for balance there.

Fehskens: The most interesting thing for me about cloud is that it replays a number of scenarios that we've seen happen over and over and over and over again. It's almost always the case that the initial driver for the business to get interested in something is to reduce cost. But, eventually, you squeeze all the water out of that stone and you have to start looking at some other reason to keep moving in that direction, keep exploiting that opportunity.

That almost invariably is added value. What's happening with cloud is that it’s forcing people to look at a lot of the issues that they started to address with SOA. But, the problem with SOA was that a lot of vendors managed to turn it into a technology issue. "Buy this product and you’ll have SOA," which distracted people from thinking about the real issue here, which is figuring out what are the services that the business needs.

Once you understand what the services are that the business needs, then you can go and look for the lowest-cost provider out in the cloud to make that connection. But, once you’ve already made that disconnection between the services that the business needs and how they are provided, you can then start orchestrating the services on the business side from a strategically driven perspective to look at the opportunities to create added value.

You can assemble the implementation that delivers that added value from resources that are already out there that you don’t have to rely on your in-house organization to create it from scratch. So, there’s a huge opportunity here, but it’s accompanied by an enormous risk. If you get this right, you're going to win big. But if you get it wrong, you are going to lose big.

Cloud has focus

Fattah: When we use the term, cloud, like many other terms, we refer to so many different things, and the cloud definitely has a focus. I agree that the focus now on reducing cost. However, when you look at the cloud as providing pure business service such as software as a service (SaaS), but also business process orchestrated services with perhaps outsourcing business process itself, it has a huge potential to create this mindset for organization about what they are doing and in which part they have to minimize cost. That's where the service is a differentiator. They have to own it. They have to invest so much of it. And, they have to use the best around.

Definitely the cloud will play in different levels, but these levels where it will work in a business architecture is actually distilling the enterprise architecture into the essence of it, which is understanding what service do I need, how I sort the services, and how I integrate them together to achieve the value.

Hornford: We've talked in this group about the business struggle to execute. We also have to consider the ability of an enterprise architecture team to execute.

We're 20 years into EA, but you can look at business literature going back a much broader period, talking about the difficulty of executing as a business.



When we look at an organization that has historically come from and been very technically focused in enterprise IT, the struggle there, as Andrew said, is that it’s a self-perpetuating motion.

I keep running into architecture teams that talk about making sure that IT has a seat at the table. It’s a failure model, as opposed to going down the path that Len and Ahmed were talking about. That's identifying the services that the business needs, so that they can be effectively assembled, whether that assembly is inside the company, partly with a outsource provider, or is assembled as someone else doing the work.

That gets back to that core focus of the sub-discipline that is evolving at an even faster rate than enterprise architecture. That’s business architecture. We're 20 years into EA, but you can look at business literature going back a much broader period, talking about the difficulty of executing as a business.

This problem is not new. It’s a new player in it who has the capability to provide good advice, and the core of that I see for execution is an architecture team recognizing that they are advice providers, not doers, and they need to provide advice to a leadership team who can execute.

Varying maturity

Forde: It’s interesting listening to Dave’s comments. What we have to gauge here is that the state of EA varies in maturity from industry to industry and organization to organization.

For the function to be saying "I need a place at the table" is an indication of a maturity level inside an organization. If we're going to say that an EA team that is looking for a place at the table is in a position to strategically advise the executives on what to do in an outsourcing agreement, that's a recipe for disaster.

However, if you're already in the position of being a trusted adviser within the organization, then it's a very powerful position. It reflects the model that you just described, Dana.

Organizations and the enterprise architecture team at the business units need to be reflecting on where they are and how they can play in the model that Ahmed and Dave are talking about. There is no one-size-fits-all here from an EA perspective, I think it really varies from organization to organization.

Hill: One of the major focus areas that we found in the survey is that, when we talk about business architecture, the reality is that there's a host of new technologies that have emerged with Web 2.0 and are emerging in grid computing, cloud computing, and those types of things that surely are alluring to the business. The challenge for the enterprise architecture is to take a look at what those legacy systems that are already invested in in-house and how an organization is going to transition that legacy environment to the new computing paradigms, do that efficiently, and at the same time be able to hit the business goals and objectives.

It's a conundrum that the enterprise architects have to deal with, because there is a host of legacy investment that is there. In Infosys, we've seen a large uptake in the amount of modernization and rationalization of portfolios going on with our clientele.

That's an important indicator that there is this transition happening and the enterprise architects are right in the middle of that, trying to coach and counsel the business leadership and, at the same time, provide the discipline that needs to happen on each and every project, and not just the very large projects or transformation initiatives that organizations are going through.

The key point here is that the enterprise architects are in the middle of this game. They are very instrumental in bringing these two worlds together, and the idea that they need to have more of a business acumen, business savvy, to understand how those things are affecting the business community, is going to be critical.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

Some thoughts on the Microsoft and Nokia tag team on mobile or bust news

Given what they are and where they have been, there's little logical reason for Microsoft not dominating the mobile smartphone computing landscape. And it should have been a done-deal in many global major markets at least four years ago.

The only reason that Microsoft is now partnering with Nokia on mobile -- clearly not the client giant's first and primary strategy on winning the market -- is because of a lack of execution. I surely recall speaking to Microsofties as many as 10 years ago, and they were all-in on the importance and imperative for mobile platforms. Windows CE's heritage is long and deep. Nokia just as well knew the stakes, knew the technology directions, knew the competition.

Now. In the above two paragraphs replace the words "Microsoft" and "Nokia." Still works. Both had huge wind in their sails (sales?) to steer into the mobile category for keeps, neigh to define and deliver the mobile category to a hungry world and wireless provider landscape ... on their, the platform-providers', terms!

So now here we have two respective global giants who had a lead, one may even say a monopoly or monopoly-adjacency, in mobile and platforms and tools for mobile. And now it is together and somehow federated -- rather than separately or in traditional OEM partnership -- that they will rear up and gallop toward the front of the mobile device pack -- the iOS, Android, RIM and HP-Palm pack. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

How exactly is their respective inability, Microsoft and Nokia, to execute separately amid huge market position advantages enhanced now by trying to execute in cahoots ... loosely, based mostly on a common set of foes? I'll point you to the history of such business alliances, often based on fear, and its not any better than the history of big technology mergers and acquisitions. It stinks. It stinks for end-users, investors, partners and employees.

But why not reward the leadership of these laggards with some more perks and bonuses? Works in banking.

A developer paradise

And talk about an ace in the hole. Not long ago, hordes of developers and ISVs -- an entire global ecosystem -- were begging Microsoft to show them the mobile way, how to use their Visual Studio skills to skin the new cat of mobile apps. They were sheep waiting to be lead (and not to slaughter). The shepherd, it turned out, was out to lunch. Wily Coyote, super genius.

And execution is not the only big reason these companies have found themselves scrambling as the world around them shifts mightily away. Each Microsoft and Nokia clearly had the innovators dilemma issues in droves. But these were no secret. (See reason one above on execution again ... endless loop).

Microsoft had the fat PC business to protect, which as usual divided the company on how to proceed on any other course, Titantic-like. Nokia had the mobile voice business and mobile telecom provider channel to protect. So many masters, so many varieties of handsets and localizations to cough up. Motorola had a tough time with them one too. Yes, it was quite a distraction.

But again, how do these pressures to remain inert inside of older models change by the two giants teaming up? Unless they spin off the right corporate bits and re-assemble them together under a shared brand, and go after the market anew, the financial pressures not to change fast remain steadfast. (See reason one above on execution again ... endless loop).

What's more there's no time to pull off such a corporate shell game. The developers are leaving (or left), the app store model is solidifying elsewhere, the carriers are being pulled by the end-users expectations (and soon enterprises). And so this Microsoft-Nokia mashup is an eighth-inning change in the line-up and there's no time to go back to Spring training and create a new team.

Too little, too late

Nope, I just can't see how these synergies signal anything but a desperation play. Too little, too late, too complex, too hard to execute. Too much baggage.

At best, the apps created for a pending Nokia-Microsoft channel nee platform will be four down the list for native app support. More likely, HTML 5 and mobile web support (standards, not native) may prove enough to include the market Microsoft and Nokia muster together. But that won't be enough to reverse their lackluster mobile position, or get them the synergies they crave.

Each Microsoft and Nokia were dark horses in the mobile devices and associated cloud services race. Attempting to hitch the two horses together with baling wire and press releases doesn't get them any kind of leg up on the competition. It may even hobble them for good.

You may also be interested in:

Wednesday, February 9, 2011

The golden thread of interoperability runs deep at Open Group conference

This guest post comes courtesy of Dr. Chris Harding, Director for Interoperability and SOA at The Open Group.

By Dr. Chris Harding

S
AN DIEGO -- There are so many things going on at every Conference by The Open Group that it is impossible to keep track of all of them, and this week’s conference here is no exception. The main themes are cybersecurity, enterprise architecture, SOA and cloud computing. Additional topics range from real-time and embedded systems to quantum lifecycle management.

But there are a number of common threads running through all of those themes, relating to value delivered to IT customers through open systems. One of those threads is interoperability.

Interoperability panel session

The interoperability thread showed strongly in several sessions on the opening day of the conference, Monday Feb. 7, starting with a panel session on Interoperability Challenges for 2011 that I was fortunate to have been invited to moderate. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

The panelists were Arnold van Overeem of Capgemini, chair of the Architecture Forum’s Interoperability project, Ron Schuldt, the founder of UDEF-IT and chair of the Semantic Interoperability Work Group’s UDEF project, TJ Virdi of Boeing, co-chair of The Open Group Cloud Computing Work Group, and Bob Weisman of Build-the-Vision, chair of The Open Group Architecture Forum’s Information Architecture project. The audience was drawn from many companies, both members and non-members of The Open Group, and made a strong contribution to the debate.

What is interoperability? The panel described several essential characteristics:
  • Systems with different owners and governance models work together;
  • They exchange and understand data automatically;
  • They form an information-sharing environment in which business information is available in the right context, to the right person, and at the right time; and
  • This environment enables processes, as well as information, to be shared.
Interoperability is not just about the IT systems. It is also about the ecosystem of user organizations, and their cultural and legislative context.

Semantics is an important component of interoperability. It is estimated that 65 percent of data warehouse projects fail because of their inability to cope with a huge number of data elements, differently defined.

There is a constant battle for interoperability. Systems that lock customers in by refusing to interoperate with those of other vendors can deliver strong commercial profit. This strategy is locally optimal but globally disastrous; it gives benefits to both vendors and customers in the short term, but leads in the longer term to small markets and siloed systems. The front line is shifting constantly. There are occasional resounding victories – as with the introduction of the Internet – but the normal state is trench warfare with small and painful gains and losses.

Blame for lack of interoperability is often put on the vendors, but this is not really fair. Vendors must work within what is commercially possible. Customer organizations can help the growth of interoperability by applying pressure and insisting on support for standards. This is in their interests; integration required by lack of interoperability is currently estimated to account for over 25 percent of IT spend.

SOA has proved a positive force for interoperability. By embracing SOA, a customer organization can define its data model and service interfaces, and tender for competing solutions that conform to its interfaces and meet its requirements. Services can be shared processing units forming part of the ecosystem environment.

This is in some ways reinforcing SOA as an interoperability enabler.



The latest IT phenomenon is cloud computing. This is in some ways reinforcing SOA as an interoperability enabler. Shared services can be available on the cloud, and the ease of provisioning services in a cloud environment speeds up the competitive tendering process.

But there is one significant area in which cloud computing gives cause for concern: lack of interoperability between virtualization products. Virtualization is a core enabling technology for cloud computing, and virtualization products form the basis for most private cloud solutions. These products are generally vendor-specific and without interoperable interfaces, so that it is difficult for a customer organization to combine different virtualization products in a private cloud, and easy for it to become locked in to a single vendor.

There is a need for an overall interoperability framework within which standards can be positioned, to help customers express their interoperability requirements effectively. This framework should address cultural and legal aspects, and architectural maturity, as well as purely technical aspects. Semantics will be a crucial element.

Such a framework could assist the development of interoperable ecosystems, involving multiple organizations. But it will also help the development of architectures for interoperability within individual organizations – and this is perhaps of more immediate concern.

The Open Group can play an important role in the development of this framework, and in establishing it with customers and vendors.

SOA/TOGAF practical guide


SOA is an interoperability enabler, but establishing SOA within an enterprise is not easy to do. There are many stakeholders involved, with particular concerns to be addressed. This presents a significant task for enterprise architects.

TOGAF has long been established as a pragmatic framework that helps enterprise architects deliver better solutions. The Open Group is developing a practical guide to using TOGAF for SOA, as a joint project of its SOA Work Group and The Open Group Architecture Forum.

This work is now nearing completion. Ed Harrington of Architecting-the-Enterprise took on the considerable task of assembling the material created by the project, adding to it, and forming a solid draft. This was discussed in detail by a small group, with some participants joining by teleconference. As well as Ed, the group included Mats Gejnevall of Capgemini and Steve Bennett of Oracle, and it was led by project co-chairs Dave Hornford of Integritas and Awel Dico of the Bank of Montreal.

The discussion resolved all the issues, enabling the preparation of a draft for review by The Open Group, and we can expect to see this valuable guide published at the conclusion of the review process.

UDEF deployment workshop

The importance of semantics for interoperability was an important theme of the interoperability panel discussion. The Open Group is working on a specific standard that is potentially a key enabler for semantic interoperability: the Universal Data Element Framework (UDEF).
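The idea behind this kind of framework can be sketched very simply: when each data element carries a shared identifier, fields can be matched across systems even though their local names differ. The Python sketch below illustrates the principle; the identifiers shown are invented for the example and are not actual UDEF IDs:

```python
# Two systems describe the same data elements under different local names.
# Each field carries a (shared_id, value) pair; the IDs are illustrative only.
system_a = {"cust_nm": ("a.2_4.35.8", "Smith"),
            "ord_dt":  ("q.1_11.10.2", "2011-02-07")}
system_b = {"customerName": ("a.2_4.35.8", None),
            "orderDate":    ("q.1_11.10.2", None)}

def map_by_shared_id(source, target):
    """Carry values between systems by matching shared IDs, not field names."""
    by_id = {shared_id: value for (shared_id, value) in source.values()}
    return {field: (shared_id, by_id.get(shared_id))
            for field, (shared_id, _) in target.items()}

mapped = map_by_shared_id(system_a, system_b)
print(mapped["customerName"])  # ('a.2_4.35.8', 'Smith')
```

The point is that neither system needs to know the other's naming conventions; agreement on the element identifiers is enough.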

It had been decided at the previous conference, in Amsterdam, that the next stage of UDEF development should be a deployment workshop. This was discussed by a small group, under the leadership of UDEF project chair Ron Schuldt, again with some participation by teleconference.

The group included Arnold van Overeem of Capgemini, Jayson Durham of the US Navy, and Brand Niemann of the Semantic Community. Jayson is a key player in the Enterprise Lexicon Services (ELS) initiative, which aims to provide critical information-interoperability capabilities through common lexicon and vocabulary services. Brand is a major enthusiast for semantic interoperability, with connections to many US semantic initiatives, currently in particular the Air Force OneSource project, which is evolving a data analysis tool used internally by the USAF Global Cyberspace Integration Center (GCIC) Vocabulary Services Team and made available to the general data management community. The participation of Jayson and Brand provided an important connection between the UDEF and other semantic projects.

As a result of the discussions, Ron will draft an interoperability scenario that can be the basis of a practical workshop session at the next conference, which is in London.

Complex cloud environments

Cloud Computing is the latest hot technology, and its adoption is having some interesting interoperability implications, as came out clearly in the Interoperability panel session. In many cases, an enterprise will use, not a single cloud, but multiple services in multiple clouds. These services must interoperate to deliver value to the enterprise. The Complex Cloud Environments conference stream included two very interesting presentations on this.

The first, by Mark Skilton and Vladimir Baranek of Capgemini, showed how new notations for cloud can foster better understanding, and wider adoption, of cloud-enabled services, and can help explain the impact of social and business networks. As cloud environments become increasingly complex, the need to explain them clearly grows.

Consumers and vendors of cloud services must be able to communicate. Stakeholders in consumer organizations must be able to discuss their concerns about the cloud environment. The work presented by Mark and Vladimir grew from discussions in a CloudCamp that was held at a previous Conference by The Open Group. We hope that it can now be developed by The Open Group Cloud Computing Work Group to become a powerful and sophisticated language to address this communication need.

The second presentation, from Soobaek Jang of IBM, addressed the issue of managing and coordinating across a large number of instances in a cloud computing environment. He explained an architecture for “Multi-Node Management Services” that acts as a framework for auto-scaling in a SaaS lifecycle, putting structure around self-service activity, and providing a simple and powerful web service orientation that allows providers to manage and orchestrate deployments in logical groups.
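The flavor of managing instances in logical groups can be sketched in a few lines of Python. This is a minimal illustration of the general idea, with all names invented; it is not the architecture IBM presented:

```python
# A logical group of cloud instances with bounded auto-scaling.
class InstanceGroup:
    def __init__(self, name, min_size=1, max_size=5):
        self.name, self.min_size, self.max_size = name, min_size, max_size
        self.instances = [f"{name}-0"]

    def scale_to(self, target):
        # Clamp the request to the group's configured bounds, then
        # add or remove instances to reach the clamped target.
        target = max(self.min_size, min(self.max_size, target))
        while len(self.instances) < target:
            self.instances.append(f"{self.name}-{len(self.instances)}")
        del self.instances[target:]
        return len(self.instances)

web = InstanceGroup("web", min_size=2, max_size=4)
print(web.scale_to(10))  # request exceeds max_size, so clamped to 4
```

Putting this structure around self-service activity means a provider can reason about, and orchestrate, a whole group rather than individual instances.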

SOA conference stream

The principal presentation in this stream picked up on one of the key points from the Interoperability panel session in a very interesting way. It showed how a formal ontology can be a practical basis for common operation of SOA repositories. Semantic interoperability is at the cutting edge of interoperability, and is more often the subject of talk than of action. The presentation included a demonstration, and it was great to see the ideas put to real use.

The presentation was given jointly by Heather Kreger, SOA Work Group Co-chair, and Vince Brunssen, Co-chair of SOA Repository Artifact Model and Protocol (S-RAMP) at OASIS. Both presenters are from IBM. S-RAMP is an emerging standard from OASIS that enables interoperability between tools and repositories for SOA. It uses the formal SOA Ontology that was developed by The Open Group, with extensions to enable a common service model as well as an interoperability protocol.

This presentation illustrated how S-RAMP and the SOA Ontology work in concert with The Open Group SOA Governance Framework to enable governance across vendors. It contained a demonstration that included defining new service models with the S-RAMP extensions in one SOA repository and communicating with another repository to augment its service model.
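The shape of that demonstration can be suggested with a small sketch. The real S-RAMP is a wire protocol between repository products; here two in-memory stand-ins (all names invented) show one repository augmenting another's service model:

```python
# Toy stand-in for a SOA repository that can share its service models.
class SoaRepository:
    def __init__(self):
        self.models = {}

    def define(self, name, model):
        # Register a new service model in this repository.
        self.models[name] = model

    def replicate_to(self, other):
        # Augment the other repository with any service models
        # it does not yet have, leaving its existing entries intact.
        for name, model in self.models.items():
            other.models.setdefault(name, model)

repo_a, repo_b = SoaRepository(), SoaRepository()
repo_a.define("OrderService", {"operations": ["placeOrder", "cancelOrder"]})
repo_a.replicate_to(repo_b)
print(sorted(repo_b.models))  # ['OrderService']
```

The value of a standard protocol is precisely that `repo_a` and `repo_b` could come from different vendors and still exchange service models this way.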

To conclude the session, I gave a brief presentation on SOA in the Cloud – the Next Challenge for Enterprise Architects. This discussed how the SOA architectural style is widely accepted as the style for enterprise architecture, and how cloud computing is a technical possibility that can be used in enterprise architecture. Architectures using cloud computing should be service-oriented, but this poses some key questions for the architect. Architecture governance must change in the context of cloud-based ecosystems. It may take some effort to keep to the principles of the SOA style – but it will be important to do this. And the organization of the infrastructure – which may migrate from the enterprise to the cloud – will present an interesting challenge.

Enabling semantic interoperability

The day was rounded off by an evening meeting, held jointly with the local chapter of the IEEE, on semantic interoperability. The meeting featured a presentation by Ron Schuldt, UDEF Project Chair, on the history, current state, and future goals of the UDEF.

The importance of semantics as a component of interoperability was clear in the morning’s panel discussion. In this evening session, Ron explained how the UDEF can enable semantic interoperability, and described the plans of the UDEF Project Team to expand the framework to meet the evolving needs of enterprises today and in the future.

This meeting was arranged through the good offices of Jayson Durham, and it was great that local IEEE members could join conference participants for an excellent session.

This guest post comes courtesy of Dr. Chris Harding, Director for Interoperability and SOA at The Open Group.
