Tuesday, July 19, 2011

Cloud and SaaS force a rethinking of integration and middleware as services for services

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Workday.

Major trends around cloud, mobile, and software as a service (SaaS) are dramatically changing the requirements and benefits of application integration.

In many respects, the emphasis now on building hybrid business processes from a variety of far-flung sources forces a rethinking of integration and middleware. Integration capabilities themselves often need to be services in order to support a growing universe of internal and external constituent business process component services.

And increasingly, integration needs to be ingrained in applications services, with the costs and complexity hidden. This means that more people can exploit and leverage integration, without being integration technology experts. It means that the applications providers are also the integration providers. It also means the stand-alone integration technology supplier business -- and those that buy from it -- are facing a new reality.

Here to explore the new era of integration-as-a-service and what it means for the future is David Clarke, Director of Integration at SaaS ERP provider Workday. He is interviewed by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Workday is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Clarke: I remember when we originally became part of Workday several years ago, we were doing some sort of product planning and strategic thinking about how we were going to integrate the product lines and position them going forward. One of the things we had in our roadmap at the time was this idea of an [integration] appliance. So we said, "Look, we can envision the future, where all the integration is done in the cloud, but we frankly think it's like a long way off. We think that it's some years off."

... We thought the world wasn’t going to be ready soon enough to put the integration technology and stack in the cloud as well.

Happily that turned out to have been incorrect. Over the course of the ensuing 12 months, it just became clearer and clearer to us that there was an appetite and a willingness in our customer and prospect base to use this technology in the cloud.

We never really went ahead with that appliance concept; it didn't get productized. We never used it. We don't need to use it. And now, as I have conversations with customers and with prospects, it just is not an issue.

In terms of it being any kind of philosophical or in-principle difficulty or challenge, it has just gone away. That totally surprised me, because I expected it to happen, but I thought it would take a lot longer to get to where it already is.

Gardner: We see that a “consumerization” of IT is taking place, where the end-users want IT in the enterprise to work as well and in the same manner as it does for their personal lives. How does that shift the thinking of an enterprise architect?

Clarke: Superficially, enterprise architects are under a lot of pressure to present technologies in ways that are more familiar to customers from their personal lives. The most specific example of that is the embrace of mobile technologies. This isn't a huge surprise. It's been a pretty consistent pattern over a number of years that workforce mobility is a major influence on product requirements.

Mobile devices

We've seen that a very significant proportion of access to our system is via mobile devices. That informs our planning and our system architecture. We've invested heavily in mobile technologies -- iPad, Android, BlackBerry, and other clients. In my experience, that's something new for customer enterprise architects. This is something they have to articulate, defend, and embrace.

Historically, they would have been more concerned with the core issues of scalability, reliability, and availability. Now, they've got more time to think about these things, because we as SaaS vendors have taken a lot of things that they used to do off of their plates.

Historically, a lot of time was spent by enterprise architects worrying about the scalability and reliability of the enterprise application deployments they had, and now that's gone away. They get a much higher service level agreement (SLA) than they ever managed when they ran their own systems.

So, while they have different and new things to think about because of the cloud and mobility, they also have more head space or latitude to do that, because we have taken some of the pain that they used to have away.

Gardner: I suppose that as the implications of these issues pan out, there will be a shift in economics as well, away from paying separately for integration, perhaps on a capital and then an operating basis.

If integration by companies like Workday becomes part-and-parcel of the application services -- and you pay for it on an operating basis only -- how do traditional business models and economics around middleware and integration survive?

Clarke: I'd certainly hate to be out there trying to sell stand-alone middleware offerings right now, and clearly there have been visible consolidations in this space. I mentioned BEA earlier as the standard bearer of the enterprise Java generation of middleware; it has since been acquired by Oracle.

That middleware is now essentially part of the application stack, and I'm sure they still sell and license stand-alone middleware. Obviously, the Oracle solutions are all on-premise, so they're still doing on-premise work at that level. But I would imagine that the economics of the BEA offering are folded very much into the economics of the Oracle application offering.

In the web services generation of middleware and integration, which came after the enterprise Java tier and before the SOA tier, there was a pretty rapid commoditization. So this phenomenon was already starting to happen, even before cloud economics were fully in play.

Then there was an increased dependence on, and relevance of, open-source technologies -- Spring, JackBe, free stacks -- that enabled integration to happen. That commoditization was already starting to happen.

Open source pressure

So even before the advent of the cloud and the clear economic pressure that put on stand-alone integration, there was already a separate pressure originating from open source. Those two things together have, in my view, made it pretty difficult to sustain, or even to conceive of, a sustainable stand-alone integration business.

A lot of the investment dollars that used to go into the integration market are now going elsewhere in infrastructure. They're going into storage. They're going into availability. They're certainly going to cloud platforms. It would take a brave venture capitalist now to write a check to a company coming in with a bright idea for a new on-premise middleware stack. So that business is gone.

... Workday is an applications company. We're an on-demand apps company and we build and serve human capital management (HCM), financials, and enterprise resource planning (ERP) application suites.

Cape Clear, which was my former company, was acquired by Workday about three years ago. We were partners, but as Workday’s business expanded significantly, they saw that providing a compelling and a differentiated integration experience in the context of this new cloud architecture was going to be something that was very important to them. So they acquired Cape Clear and we became part of the overall Workday organization.

Gardner: So there seem to be two fundamental things going on here. One, is taking integration to the on-demand or SaaS domain, but second, there is also this embedding integration functionality into the application.

How does someone like yourself who is crafting the middleware integration capabilities need to shift their thinking in order to go “to the cloud,” but also become part-and-parcel with the application?

Clarke: One of the perpetual holy grails of the middleware industry, when it was a stand-alone undertaking, was to find a way to express and expose middleware and integration concepts in a way that they could be used by mere mortals, by business analysts, by people who weren't necessarily very deep technologists with deep technology expertise.

In my experience, the middleware industry never achieved that. So, they didn't really ever find a metaphor or a use model that enabled less skilled, but nonetheless technically savvy, people to use their products.

As you observe in the applications game, you absolutely have to get there, because fundamentally what you're doing here is you are enabling companies and individuals to solve business problems and application problems. The integration arises as a necessity of that or as a consequence of that. In and of itself, it isn't useful.

Designing applications

The most specific thing that we've seen is how we build, manipulate, and use extremely sophisticated integration technology. We spend a lot of our time thinking about how to design that into the application, so that it can be experienced and consumed by users of the application who don’t know anything about XML, Java, web protocols, or anything like that.

... Business analysts can very easily and visually define what they are getting and putting, in terms of the business concepts and business objects they understand. They can define very simple transformations -- for example, going from a payroll input to a check, or going from a report of absences by department to a payroll input.
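A rough sketch of that idea in code, purely for illustration (this is not Workday's tooling, and the field names are invented): the analyst works with a declarative mapping between business objects, and the platform executes it, so no XML or programming is ever exposed.

```python
# Hypothetical sketch: a declarative mapping from a business object an
# analyst understands (a departmental absence report) to a payroll input.
# The platform, not the analyst, turns this mapping into the wire format.

absence_to_payroll = {
    "employee_id": "worker_id",          # payroll field <- absence field
    "hours_unpaid": "unpaid_absence_hours",
    "pay_period": "reporting_period",
}

def transform(record, mapping):
    """Apply a simple field-renaming mapping to one business object."""
    return {target: record[source] for target, source in mapping.items()}

absence_record = {
    "worker_id": "100234",
    "unpaid_absence_hours": 16,
    "reporting_period": "2011-07",
    "department": "Finance",             # extra fields are simply ignored
}

print(transform(absence_record, absence_to_payroll))
# {'employee_id': '100234', 'hours_unpaid': 16, 'pay_period': '2011-07'}
```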

They're consuming and using integration technologies in a very natural way in the context of their day-to-day working in the web layer in these systems. They're not programmers. They're not developers. They're not thinking about it that way.

It's quite empowering for the teams we've had working on this technology to see it being used in that way by the business analysts here. It's the closest I've seen people get to capturing this unicorn of enabling integration technology to actually be used by business people.

Gardner: Do you think in 10 years, or maybe 5, we won’t even be thinking about integration? It will really be a service, a cloud service, and perhaps it will evolve to be a community approach.

Clarke: I'll make two main observations in this area.

First, there is an important difference between a general-purpose platform or integration platform and then a more specific one, which is centered around a particular application domain. Workday is about the latter.

We're building a very powerful set of cloud technologies, including an integration cloud or an integration platform in the cloud, but it’s very focused on connecting essentially to and from Workday, and making that very easy from a variety of places and to a variety of places.

What we're not trying to create is a general-purpose platform with an associated marketplace, in the way that somebody like Salesforce.com is doing with AppExchange or Google with App Engine for app development. In a sense, our scope is narrower, and that's just how we're choosing to prosecute the opportunity, because establishing a very horizontal platform is harder and riskier.

I referred earlier to the problem that middleware companies traditionally have of doing everything and nothing. When you have a purely horizontal platform that can offer any integration or any application, it’s difficult to see exactly which ones are going to be the ones that get it going.

The way we're doing this is therefore more specific. We have a similar set of technologies and so on, but we're really basing it very much around the use case that we see for Workday. It’s very grounded in benefits integrations, payroll integrations, financial integrations, payment integrations. And every one of our deployments has tens, dozens, hundreds of these integrations. We're constantly building very significant volume, very significant usage, and very significant experience.

Developing marketplace

I can see that developing into a marketplace in a limited way around some of those key areas and possibly broadening from there.

That's one of the interesting areas of distinction between the strategies of the platform vendors as to how expansive their vision is. Obviously expansive visions are interesting and creating horizontal platforms is interesting, but it’s more speculative, it’s riskier, and it takes a long time. We are more on the specific side of that.

You mentioned collaboration and how people collaborate in the community around this area of business processes. I referred earlier to the idea that we're focusing on these key use cases. What's arising from those key use cases is a relatively small set of documents and document formats that are common to these problem areas.

Lately, I've been reading, or rereading, some of the RosettaNet material. RosettaNet has been around for a long time; it was created in the late '90s. As you know, it is essentially a set of standard documents and interchange formats for the semiconductor and technology manufacturing industry, and it has been very successful -- not very prominent or popular, but very successful.

What we see is something similar to RosettaNet starting to happen in the application domain. When you're dealing with payroll providers, there is a certain core set of data that gets sent around. We have integrated to many dozens of them, and we have abstracted that into a core document format that reflects what information is needed, how it needs to be formatted, and how it needs to be processed.
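A loose sketch of what such a canonical document might look like, with invented field names and providers (this is not Workday's actual format): each provider-specific feed is normalized into one shared shape that partners can consume directly.

```python
# Illustrative only: a canonical payroll entry plus per-provider adapters.
# Many provider-specific feeds normalize into one shared document format.

def canonical_payroll_entry(employee_id, gross_pay, currency, pay_date):
    return {
        "employee_id": employee_id,
        "gross_pay": round(float(gross_pay), 2),
        "currency": currency,
        "pay_date": pay_date,            # ISO-8601 date string
    }

def from_provider_a(row):
    """Adapter for a hypothetical provider that sends semicolon-separated rows."""
    emp, amount, date = row.split(";")
    return canonical_payroll_entry(emp, amount, "USD", date)

def from_provider_b(payload):
    """Adapter for a hypothetical provider that sends key-value payloads."""
    return canonical_payroll_entry(
        payload["worker"], payload["amt"], payload["ccy"], payload["paid_on"])

entries = [
    from_provider_a("100234;5210.50;2011-07-15"),
    from_provider_b({"worker": "100987", "amt": 4300,
                     "ccy": "EUR", "paid_on": "2011-07-15"}),
]
print(entries)
```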

In fact, we now have a couple of payroll partners who are directly consuming that payroll format from us. So, in the same way that there are certain HR XML standards for benefits data, we can see other ones emerging in other areas of the application space.

These are very good vectors for cooperation and for collaboration around integrations, and they're a good locus around which communities can develop standardized documents, which is the basis for integration. That’s intriguing to me, because it all derives from that very specific set of use cases that I just never really saw as a general-purpose integration vendor.

Gardner: What Workday is supporting is an applications-level benefit. A business process, like a network, is perhaps more valuable as the number of participants in the process increases and as those participants become able to take part with a fairly low level of complexity and friction.

So, if we make this shift to a Metcalfe's Law-type of "the more participants, the more valuable it is to all of those participants," shouldn’t we expect a little bit of a different world around integration in the next few years?

Business process

Clarke: That's right. We don't really see or envisage a purely transactional marketplace, where you just have people buying maps or integrations and installing them. We see it happening in the context of a business process.

For example, hiring. As somebody is hired into Workday, there are typically many integration points in that business process -- background checking, provisioning of security cards, and creation of email accounts. There's a whole set of integration points. We're increasingly looking to enable third parties to plug into those integration points easily, in a small way, such as provisioning an email account, or in a big way, like managing a whole payroll process.
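To make the notion of pluggable integration points concrete, here is a minimal, generic sketch (not Workday's API; the point names and handlers are hypothetical) of a hire event with named integration points that partners can register against.

```python
# Generic sketch of a business process with pluggable integration points.
# Partner systems register handlers against named points in the process.

integration_points = {
    "background_check": [],
    "security_card_provisioning": [],
    "email_account_creation": [],
}

def register(point, handler):
    """A partner plugs a handler into one named integration point."""
    integration_points[point].append(handler)

def run_hire_process(new_hire):
    for point, handlers in integration_points.items():
        for handler in handlers:
            handler(new_hire)            # each partner sees the same hire event

# Hypothetical partner integrations.
register("background_check", lambda h: print("screening", h["name"]))
register("email_account_creation", lambda h: print("creating mailbox for", h["name"]))

run_hire_process({"name": "A. Example", "start_date": "2011-08-01"})
```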

It's that idea of these integrations being touch points and jumping-off points from an overall business process, which is quite a different vision from writing cool, stand-alone apps that you can then find and install from inside a platform marketplace.

It’s that idea of an extended business process where the partners and partner ISVs and customers can collaborate very easily, and not just at install time or provisioning time, but also when these processes are running and things go wrong, if things fail or errors arise.

You also need a very integrated exception-handling process, so that customers can rapidly diagnose and correct errors when they arise. Then they have the feeling of being in a consistent environment, rather than the feeling of having 20 or 30 totally unrelated applications executing that don't collaborate, don't know about each other, and aren't executing in the context of the same business process. We're keen to make that experience seamless.

Gardner: I can also see where there is a high incentive for the participants in a supply chain or a value chain of some sort to make integration work. Do you already see that the perception of cooperation for integration is at a different plane? Where do you expect that to go?

Clarke: Totally, already. Increasingly -- pick an area, but let's say for learning management or something -- if we integrate, or if multiple people integrate to us or from us, then customers already are starting to expect that those integrations exist.

Now they're starting to ask about how good they are, what's the nature of them, what SLAs can they expect here? The customers are presuming that an integration, certainly between Workday and some other cloud-based service, either exists already or is very easy and doable.

But they're looking through that, because they're taking the integration technology level questions for granted. They're saying, "Given that I can make such an integration work, how is it really going to work, what's the SLA, what happens if things go wrong, what happens when things fail?"

What's really interesting to me is that customers are increasingly sophisticated about exploring the edge cases, which they have seen or heard about before. They're coming to us upfront and saying, "What happens if I have issues when my payroll runs? Who do I go to? How do you manage that? How do you guys work with each other?"

Consistent information

We, therefore, are learning from our customers, and we're going to our ISV and services partners -- our payroll partners, learning management partners, and background-checking partners -- and saying, "Here is the contract that our customers expect. Here is the service that they expect." They're going to ask us, and we want to be able to say that this partner tests against every single update and every single revision of the Workday software, and that there will be a seamless support process where you call one number and get a consistent set of information.

Customers are really looking through the mere fact of a technical integration existing and asking, "What is my experience going to be, actually using this day-to-day across 50 geographies and a population of 20,000 employees? I want that to be easy."

It's a testament to the increasing sophistication of the integration technology that people can take that for granted. But, as I say, it's having these increasingly interesting downstream effects in terms of what people expect from the business experience of using these integration systems in the context of a composite business process that extends beyond just one company.

Gardner: When you have a massive, complex integration landscape, does it make sense to focus on the application provider as that point of responsibility and authority? Or does it have to be federated?

Clarke: What we're gradually feeling our way toward here is that, for us, the central concept is this federation of companies. We obviously think of Workday as being in the middle of that. It depends on what your perspective is, but you have this federation of companies collaborating to provide the service ultimately, and the question is where things choke.

And it's not realistic to say that you can always come to Workday, because if we are integrating to a payroll system on behalf of somebody else, and we correctly start off and run the payroll or send the payroll requests, and then there is an error at the other end, the error is happening ultimately in the other payroll engine. We can't debug that. We can't look at what happened. We don't necessarily even know what the data is.

We need a consistent experience for the customer and how that gets supported and diagnosed. Specifically, what it means for us today is that, as we run any integration in our cloud, there is a very consistent set of diagnostics, reporting, metrics, error handling, error tracking that is generated and that's consistent across the many types of integrations that we run.
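One way to picture that consistency, as a sketch rather than the actual Workday runtime: every integration runs through one wrapper, so the diagnostics, metrics, and error records always have the same shape regardless of what the integration does.

```python
import time
import traceback

def run_integration(name, integration_fn, payload):
    """Run any integration and emit a consistently shaped diagnostic record."""
    record = {"integration": name, "started": time.time(),
              "status": None, "error": None}
    try:
        result = integration_fn(payload)
        record["status"] = "success"
        return result, record
    except Exception as exc:
        record["status"] = "failed"
        record["error"] = {"type": type(exc).__name__,
                           "message": str(exc),
                           "trace": traceback.format_exc()}
        return None, record
    finally:
        record["elapsed_seconds"] = round(time.time() - record["started"], 3)

# A hypothetical payroll export that fails at the provider end.
def send_payroll(payload):
    raise RuntimeError("provider rejected pay group " + payload["pay_group"])

_, diagnostics = run_integration("payroll_export", send_payroll,
                                 {"pay_group": "AU-Monthly"})
print(diagnostics["status"], "-", diagnostics["error"]["message"])
```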

Again, as our partners become more savvy at working with us, and they know more about that, they can then more consistently offer resolution and support to the customers in the context of the overall Workday support process.

For us, it’s really a way of building this extended and consistent network of support capability and of trust. Where customers have consistent experiences, they have consistent expectations around how and when they get support.

The most frustrating thing is when you're calling one company and they're telling you to call the other company, and there isn't any consistency, or it's hard to get to the bottom of the problem. We're hopeful that enlightened integrations around business processes, between collaborating companies, as I've described, will help get rid of some of that.

... The technology is important, but it's not enough. People don't just want technology. They want well-intentioned and honest collaboration between their vendors to help them do this stuff efficiently.

... [Technically] there’s nothing fundamentally new, in some sense. I've worked in several generations of integration and middleware technology, and each one is a refinement of the past, and you're standing on the shoulders of giants, to badly paraphrase Newton.

Packaged and presented

A lot of the underlying technology you're using for integration, and a lot of the underlying concepts, are not that new. It's just the way that they're being packaged and presented. In some cases, it's the protocols that we're using, and certainly some of the use models. But the ways you're accessing them and consuming them are different. So, in that sense, [this is] evolutionary.

Gardner: While you have put quite a bit of emphasis on the tool side in order to make this something that mere mortals can adjust and operate, you've also done a lot of heavy lifting on the connections side. You recognized that in order to be successful with an integration platform, you had to find the means in which to integrate to a vast variety of different types of technologies, services, data, and so forth. Tell me what you've done, not only on the usability, but on the applicability across a growing universe of connection points.

Clarke: That’s another interesting area. As you say, there are thousands or millions of different types of endpoints out there. This being software, it can map any data format to any other data format, but that’s a trivial and uninformative statement, because it doesn’t help you get a specific job done.

Essentially what we've been trying to do is identify categories of target systems and target processes that we need to integrate with and try to optimize and focus our efforts on that.

For example, pretty much the majority of our customers need to integrate to and from benefits systems for 401(k), healthcare, dental, and vision plans, and so forth. It's an extremely common use case. But there is still a wide diversity of benefits providers and a wide variety of formats that they use.

We've studied the many hundreds of benefits providers we've encountered by working with our customers, we've abstracted out the most common format scenarios, data structures, and so forth, and we've built that into our integration layer.

Configure your data set

You can very easily and rapidly and without programming configure your specific data set, so that it can be mapped into and out of your specific set of benefits providers, without needing to write any code or build a custom integration.

We've done that domain analysis in a variety of areas, including but not limited to benefits. We've done it for payroll and for certain kinds of financial categories as well. That's what's enabling us to do this in a scalable and repeatable way, because we don’t want to just give people a raw set of tools and say, "Here, use these to map anything to anything else." It's just not a good experience for the users.
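As a loose sketch of the underlying idea (the provider name, columns, and field names are invented), the domain analysis becomes configuration: the customer selects a provider template and maps fields, and the platform produces the provider's expected file without any custom code.

```python
import csv
import io

# Sketch: provider-specific knowledge lives in configuration, so a customer
# picks a template and maps fields instead of writing mapping code.

provider_templates = {
    "acme_401k": {                       # hypothetical benefits provider
        "columns": ["SSN", "PLAN", "DEFERRAL_PCT", "EFFECTIVE"],
        "plan_code": "401K-STD",
    },
}

def build_enrollment_file(provider, workers, field_map):
    template = provider_templates[provider]
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(template["columns"])
    for w in workers:
        writer.writerow([w[field_map["ssn"]],
                         template["plan_code"],
                         w[field_map["deferral"]],
                         w[field_map["effective_date"]]])
    return out.getvalue()

workers = [{"tax_id": "123-45-6789", "pct": 6, "start": "2011-08-01"}]
print(build_enrollment_file("acme_401k", workers,
                            {"ssn": "tax_id", "deferral": "pct",
                             "effective_date": "start"}))
```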
For more information on Workday's integration as a service, go to http://www.workday.com/solutions/technology/integration_cloud.php.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Workday.


Monday, July 18, 2011

WSO2 launches Stratos offerings as PaaS for open source cloud middleware

WSO2 today announced the debut of the WSO2 StratosLive platform as a service (PaaS) and the launch of WSO2 Stratos 1.5, the newest release of WSO2’s open-source cloud middleware platform software. Together, they provide comprehensive cloud middleware solutions for enabling service-oriented architecture (SOA) and composite application development and deployment in the cloud.

The Palo Alto, Calif., company offers a complete PaaS both as on-premise software and as a hosted service, running the same production-ready code wherever it best suits customers’ privacy, service-level agreement (SLA), and deployment requirements. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

StratosLive provides a complete enterprise deployment and integration platform, including application server, enterprise service bus (ESB), database, identity server, governance registry, business process manager, portal server and more.

Stratos provides the same capabilities to organizations that want the benefits of a PaaS running on their own premises. It builds on and extends WSO2's Carbon enterprise middleware platform by taking the Carbon code and adding cloud functionality for self-service provisioning, multi-tenancy, metering, and elastic scaling, among others.

All Carbon products, including the latest features from the recent Carbon 3.2 platform release, are available both as part of the Stratos cloud middleware platform and as cloud-hosted versions with instant provisioning on the StratosLive public PaaS. WSO2's approach enables developers to migrate their applications and services between on-premise servers, a private PaaS, a public PaaS, and hybrid cloud environments, providing deployment flexibility.

“The cloud is a compelling platform for enabling enterprises to combine the agility they’ve gained by employing SOAs and composite applications with an extended reach and greater cost efficiencies,” said Dr. Sanjiva Weerawarana, WSO2 founder and CEO. “At WSO2, we’re delivering on this promise by providing the only truly open and complete PaaS available today with our WSO2 StratosLive middleware PaaS and WSO2 Stratos 1.5 cloud middleware platform.”

Four new products

The launch of StratosLive and Stratos 1.5 adds four new cloud middleware products:
  • Data as a Service provides both SQL and NoSQL databases, based on MySQL and Apache Cassandra respectively. This allows users to self-provision a database in the cloud and to choose the right model for their applications.
  • Complex Event Processing as a Service is the full multi-tenant cloud version of CEP Server, which launched in June 2011 and supports multiple CEP engines, including Drools Fusion and Esper, to enable complex event processing and event stream analysis.
  • Message Broker as a Service is the full multi-tenant cloud version of Message Broker, which launched in June 2011 and supports message queuing and publish-subscribe to enable message-driven and event-driven solutions in the enterprise. It uses Apache Qpid as the core messaging engine to implement the Advanced Message Queuing Protocol (AMQP) standard. (A generic AMQP sketch appears after this list.)
  • Cloud Services Gateway (CSG), first launched as a separate single-tenant product, is now a fully multi-tenant product within Stratos and StratosLive.
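This is not WSO2 code and is not specific to Qpid; as a generic illustration of the AMQP queuing model the Message Broker bullet describes, here is a minimal publish-and-read round trip, assuming the pika 1.x client and an AMQP 0-9-1 broker running on localhost.

```python
import pika  # AMQP client; assumes pika 1.x and a broker on localhost

# Publish an event to a named queue, then read it back.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="hr.events")          # idempotent declaration

channel.basic_publish(exchange="",                # default direct exchange
                      routing_key="hr.events",
                      body=b'{"event": "hire", "worker_id": "100234"}')

method, properties, body = channel.basic_get(queue="hr.events", auto_ack=True)
print(body)
connection.close()
```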
With StratosLive and Stratos, central cloud features are built directly into the core platform—including multi-tenancy, automatic metering and monitoring, auto-scaling, centralized governance and identity management, and single sign-on. The Cloud Manager in StratosLive and Stratos offers point-and-click simplicity for configuring and provisioning middleware services, so developers can get started immediately and focus on the business logic, rather than configuring and deploying software systems.

Additionally, the Stratos cloud middleware platform features an integration layer that allows it to install onto any existing cloud infrastructure, such as Eucalyptus, Ubuntu Enterprise Cloud, Amazon Elastic Compute Cloud (EC2), and VMware ESX. Enterprises are never locked into a specific infrastructure provider or platform.

The availability of StratosLive and Stratos 1.5 also brings several new core platform enhancements.

WSO2 StratosLive and WSO2 Stratos 1.5 are available today. Released under the Apache License 2.0, they do not carry any licensing fees. Production support for Stratos starts at $24,000 per year. The StratosLive middleware PaaS is available at three paid subscription levels: SMB, Professional, and Enterprise, as well as a free demo subscription. For details on subscription pricing, visit the WSO2 website.


Friday, July 15, 2011

SaaS PPM helps Deloitte-Australia streamline top-level decision making

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

The latest BriefingsDirect podcast focuses on Deloitte-Australia and how their business has benefited from leveraging the software-as-a-service (SaaS) model for project and portfolio management (PPM) activities.

We spoke to Deloitte-Australia at a recent HP conference in Barcelona to explore some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

To learn more about Deloitte’s innovative use of SaaS hosting for non-core applications, join here Ferne King, director within the Investment and Growth Forum at Deloitte-Australia in Melbourne. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
King: The SaaS model made sense to us, because we had a strategic direction in our firm that any non-core application should be delivered as SaaS.

It's the only solution we found in the marketplace that would help us support, and have visibility into, the investments we were going to make internally around the growth and maintenance of what we do within our own firm.

Deloitte-Australia has approximately 5,000 practitioners. In 2010, our revenue was A$850 million. We provide professional services to public and private clients, and we are now globally the largest professional services firm. We utilize PPM internally within the firm, and that helps us understand the portfolio and prioritization. The Deloitte-UK practice and Deloitte-America practice, in their consulting areas, use PPM to go to market and help manage and deliver investments with their client base.

Three benefits


The three benefits of PPM for us have been, primarily, understanding that portfolio and linking it to our strategy. For example, our executive will have a series of business objectives they want to achieve in the Australian practice.

By utilizing PPM, we can understand what is going on within the firm that's meeting those objectives and, more importantly for them, where the gaps are, so they can then take action on the gaps. That's the number one priority. The number two priority is being able to communicate to our people within the practice the particulars of change.

For example, over the next quarter, what will our practitioners in the firm see as a result of the myriad initiatives going on, whether it's a SaaS HR system or a new product and service they can take to market? Whatever change is coming, we can better communicate it to them within the organization.

Our third priority, where the PPM product helps with discipline, is delivery. In our project management methodology, it helps us improve our disciplines. We had a journey of 18 months of doing things manually, and then we brought in PPM to technology-enable what we were doing manually.

From a SaaS perspective, the benefit we've achieved is that we can focus our people on the right things. Instead of having our people focus on what hardware, what platform, what change request, or what design needs to happen, we can focus on what our to-be process and design should be. Then we basically hand that over the fence to the SaaS team, which then helps execute it.

We don’t have to stand in a queue within our own IT group and look for a change window. We can make changes every Wednesday, every Sunday, 12 months of the year, and that works for us.

A top priority

Just because [our applications use] is "non-core" doesn’t mean that it’s not a top priority. Our firm has approximately 2,500 applications within our Australian practice. PPM, at our executive level, is seen as one of our top 10 applications to help our executive, our partners, the senior groups of our firm register ideas to help our business grow and be maintained.

So it’s high value, but it’s not part of our core practice suite. It doesn’t bring in revenue and it doesn’t keep the lights on, but it helps us manage our business.

[This fits into] our roadmap of strategic enterprise portfolio management. In that journey, we're four years in, and we are two years into technology enablement. We undertook the journey four years ago to go down the strategic portfolio management path, and we spent about 18 months to two years manually developing and understanding our methodology and the value of where we wanted to get to.

In our second year, we technology-enabled that to help us execute more effectively and improve speed to value and time to value, and now we're entering our third year of that maturity model.

[We have attained] fantastic results, particularly at the executive level, and they are the ones who pay for us to create the time to work on this. Deloitte itself has gone through a transformation over the years. If anybody in the market follows the professional services industry, Deloitte globally has 160,000 practitioners and over $26 billion of revenue in FY10. We're coming together and have been on a journey for some time to act as one.

So, if you're a client in the marketplace, you don’t have to think about what door you need to enter the Deloitte world. You enter one door and you get service from whatever service group you need.

If I take the example of three years ago, our tax group would only be interested in what’s happening in their tax group. Our consultant group would really only be interested in what’s happening in the consultant group.

Now that we are acting as one, the tax service line lead and the consulting service line lead would like visibility of what’s happening firm wide. PPM is now enabling us to do that.

What I would summarize there is that PPM has enabled us to help the executive achieve their vision of firm-wide visibility of the enterprise investments we are making to improve our growth and support our maintenance.

[How did we pick HP?] First of all, probably, 27 years of experience with project delivery, coming from an engineering and construction background, gaining very detailed knowledge over the years about the one-on-one delivery components, and dealing with a lot of vendors over the years in the client marketplace.

So, we were well-versed in what we needed and well-versed in what was available out there in the marketplace. When we went to market looking for a partner and a vendor solution, we were very clear on what we wanted. HP was able to meet that.

I actually took my own role out of the scoring process. We helped put scripts together -- scenarios for our vendors to come and demonstrate to us how we were going to achieve our objectives. Then we brought people around the table from the business with a scoring method, and HP won on that scoring method.

[I would recommend that those approaching this] understand the method or approach you want to use PPM for. You cannot bring in PPM and expect it to answer 80 percent of your issues. It can support and help direct the resolution of issues, but you need to understand how you expect to do that. For example, if you want to capture ideas from business units, groups, or the technology department on what they'd like to do to improve an application, improve product development, or improve any area of the business, understand the lifecycle of how you want those ideas to be managed. Don't expect PPM to have preset examples for you.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Thursday, July 14, 2011

How I became a REST 'convert'

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

By Ronald Schmelzer

Many of you know me as one half of the ZapThink team – an advisor, analyst, sometimes-trainer, and pundit who has been focused on XML, web services, service-oriented architecture (SOA), and now cloud computing over the past decade or so. Some of you may also know that immediately prior to starting ZapThink I was one of the original members of the UDDI Advisory Group back in 2000, when I was with ChannelWave, and I also sat on a number of standards bodies and initiatives, including RosettaNet, ebXML, and CPExchange. Furthermore, as part of the ZapThink team, I tracked the various WS-* standards from their inception to their current “mature” standing.

I’ve closely followed the ups and downs of the Web Services Interoperability Organization (WS-I) and more than a few efforts to standardize such things as business process. Why do I mention all this? To let you know that I’m no slouch when it comes to understanding the full scope and depth of the web services family of standards. And yet, when push came to shove and I was tasked with implementing SOA as a developer, what did I choose? REST.

Representational State Transfer, commonly known as REST, is a style of distributed software architecture that offers an alternative to the commonly accepted XML-based web services as a means for system-to-system interaction. ZapThink has written numerous times about REST and its relationship to SOA and web services. Of course, that choice has nothing to do with Service-Oriented Architecture itself, as we’ve discussed in numerous ZapFlashes in the past. The power of SOA is in loose coupling, composition, and how it enables approaches like cloud computing. It is for these reasons that I chose to adopt SOA for a project I’m currently working on. But when I needed to implement the services I had already determined were necessary, I faced a choice: use web services or REST-based styles as the means to interact with the services. For the reasons I outline below, REST was a clear winner for my particular use case.

Web services in theory and in practice

The main concepts behind web services were established in 1999 and 2000, during the height of the dot-com boom. SOAP, originally known as the Simple Object Access Protocol and later just “SOAP,” is the standardized, XML-based method for interacting with a third-party service. It's simple in concept, but in practice there are many ways to use SOAP. RPC style (we think not) or document style? How do you identify endpoints? And what about naming operations and methods? Clearly, SOAP on its own leaves too much to interpretation.

So, this is the role that the Web Services Description Language (WSDL) is supposed to fill. But writing and reading (and understanding) WSDL is a cumbersome affair. Data type matching can be a pain. Versioning is a bear. Minor server-side changes often result in different WSDL and a resulting different service interface, and on the client-side, XSD descriptions of the service are often similarly tied to a particular version of the SOAP endpoint and can break all too easily. And you still have all the problems associated with SOAP. In my attempts to simply get a service up and running, I found myself fighting more with SOAP and WSDL than doing actual work to get services built and systems communicating.

The third “leg” of the web services concept, Universal Description, Discovery and Integration (UDDI), conceptually makes a lot of sense, but in practice, hardly anyone uses it. As a developer, I couldn’t even think of a scenario where UDDI would help me in my particular project. Sure, I could artificially insert UDDI into my use case, but in the scenario where I needed loose coupling, I could get that by simply abstracting my end points and data schema. To the extent I needed run-time and design-time discoverability or visibility into services at various different states of versioning, I could make use of a registry / repository without having to involve UDDI at all. I think UDDI’s time has come and gone, and the market has proven its lack of necessity. Bye, bye UDDI.

As for the rest of the WS-* stack, these standards are far too undeveloped, under-implemented, under-standardized, inefficient, and obscure to make any use of whatever value they might bring to the SOA equation, with a few select exceptions. I have found that the security-related specifications -- specifically OAuth, Service Provisioning Markup Language (SPML), Security Assertion Markup Language (SAML), and eXtensible Access Control Markup Language (XACML) -- are particularly useful, especially in a cloud environment. These specifications are not dependent on web services, and indeed, many of the largest web-based applications use OAuth and the other specs to make their REST-based environments more secure.

Why REST is ruling

I ended up using REST for a number of reasons, but the primary one is simplicity. As most advocates of REST will tell you, REST is simpler to use and understand than web services. Development with REST is easier and quicker than building WSDL files and getting SOAP to work, and this is the reason why many of the most-used web APIs are REST-based. You can easily test HTTP-based REST requests with a simple browser call. REST can also be more efficient as a protocol, since it doesn’t require a SOAP envelope for every call and can leverage JavaScript Object Notation (JSON) as a data representation format instead of the more verbose and harder-to-process XML.
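As a concrete, if simplified, illustration (the endpoint below is a placeholder, not a real service): a REST-style read is a single HTTP GET returning JSON, with no envelope, WSDL, or generated client stub involved.

```python
import json
import urllib.request

# A REST-style read is just an HTTP GET; the response is plain JSON.
# The URL is a placeholder for illustration, not a real service.
url = "https://api.example.com/employees/100234"

with urllib.request.urlopen(url) as response:      # plain HTTP GET
    employee = json.loads(response.read().decode("utf-8"))

print(employee.get("name"))

# The SOAP equivalent would wrap the same request in an XML envelope,
# describe it in WSDL, and usually require a generated client stub.
```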

But even more than the simplicity, I appreciated the elegance of the REST approach. The basic operation and scalability of the Web has proven the underlying premise of the fundamental REST approach. HTTP operations are standardized, widely accepted, well understood, and operate consistently. There’s no need for a REST version of the WS-I. There’s no need to communicate company-specific SOAP actions or methods – the basic GET, POST, PUT, and DELETE operations are standardized across all Service calls.

Even more appealing is the fact that the vendors have not polluted REST with their own interests. The primary driver for web services adoption has been the vendors. Say what you might about the standard’s applicability outside a vendor environment, one would be very hard pressed to utilize web services in any robust way without first choosing a vendor platform. And once you’ve chosen that platform, you’ve pretty much committed to a specific web services implementation approach, forcing third-parties and others to comply with the quirks of your particular platform.

Not so with REST. Not only does the simplicity and purity of the approach eschew vendor meddling, it actually negates much of the value that vendor offerings provide. Indeed, it’s much easier (and not to mention lower cost) to utilize open source offerings in REST-based SOA approaches than more expensive and cumbersome vendor offerings. Furthermore, you can leverage existing technologies that have already proven themselves in high-scale, high-performance environments.

Focus on architecture, not on HTTP

So, how did I meld the fundamental tenets of SOA with a REST-based implementation approach? In our Web-Oriented SOA ZapFlash, we recommended using the following approach to RESTafarian styles of SOA (a minimal code sketch follows the list):

  • Make sure your services are properly abstracted, loosely coupled, composable, and contracted
  • Every web-oriented service should have an unambiguous and unique URI to locate the service on the network
  • Use the URI as a means to locate as well as taxonomically define the service in relation to other services.
  • Use well-established actions (such as POST, GET, PUT, and DELETE for HTTP) for interacting with services
  • Lessen the dependence on proprietary middleware to coordinate service interaction and shift to common web infrastructure to handle SOA infrastructure needs
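Here is the minimal sketch referred to above, using only the Python standard library: one resource type, unambiguous URIs of the form /employees/&lt;id&gt;, and the standard HTTP verbs as the only interaction model. It illustrates the URI and verb points in the simplest possible way and is not production code.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# One resource type, unique URIs (/employees/<id>), standard verbs only.
employees = {"100234": {"name": "A. Example", "department": "Finance"}}

class EmployeeHandler(BaseHTTPRequestHandler):
    def _send(self, status, payload=None):
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        if payload is not None:
            self.wfile.write(json.dumps(payload).encode("utf-8"))

    def do_GET(self):                    # read a resource
        emp_id = self.path.rsplit("/", 1)[-1]
        if emp_id in employees:
            self._send(200, employees[emp_id])
        else:
            self._send(404, {"error": "not found"})

    def do_DELETE(self):                 # remove a resource
        emp_id = self.path.rsplit("/", 1)[-1]
        removed = employees.pop(emp_id, None)
        self._send(204 if removed else 404)

if __name__ == "__main__":
    # Try: curl http://localhost:8000/employees/100234
    HTTPServer(("localhost", 8000), EmployeeHandler).serve_forever()
```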

Much of the criticism of REST comes not from the interaction approach, but rather from the use of HTTP. Roy Fielding, the progenitor of REST, states in his dissertation that REST was initially described in the context of HTTP, but is not limited to that protocol. He states that REST is an architectural style, not an implementation, and that the web and the use of the HTTP protocol happen to be designed in that style. I chose to implement REST using the eXtensible Messaging and Presence Protocol (XMPP) as a way of doing distributed, asynchronous, message-oriented REST-based service interaction. XMPP, also known as the Jabber protocol, has already proven itself as a widely used, highly scalable protocol for secure, distributed, near-real-time messaging. XMPP-based software is deployed widely across the Internet and forms the basis of many high-scale messaging systems, including those used by Facebook and Google.
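To illustrate the point that REST is a style rather than HTTP itself, and without depending on any particular XMPP library, here is a transport-agnostic sketch: the uniform verbs and resource URIs are kept, but the request travels as a message payload that could just as easily be carried inside an XMPP stanza or a queued message.

```python
import json

# Sketch: REST-style semantics (verb + resource URI + representation)
# expressed as a message envelope instead of an HTTP request.

resources = {"/employees/100234": {"name": "A. Example"}}

def handle(envelope_json):
    envelope = json.loads(envelope_json)
    verb, uri = envelope["verb"], envelope["resource"]
    if verb == "GET":
        body = resources.get(uri)
        return json.dumps({"status": 200 if body else 404, "body": body})
    if verb == "PUT":
        resources[uri] = envelope["body"]
        return json.dumps({"status": 200, "body": envelope["body"]})
    return json.dumps({"status": 405, "body": None})   # verb not supported

# Simulate one asynchronous request/response exchange.
request = json.dumps({"verb": "GET", "resource": "/employees/100234"})
print(handle(request))
```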

Am I bending the rules or the intent of REST by using XMPP instead of HTTP? Perhaps. If HTTP suits you, then you have a wide array of options to choose from in optimizing your implementation. Stefan Tilkov does a good job of describing how to best apply HTTP for REST use. And you don’t have to choose XMPP if HTTP doesn’t meet your needs; there are a number of other open-source alternative transports for REST, including RabbitMQ (based on the AMQP standard), ZeroMQ, and Redis.

The ZapThink take

The title of this ZapFlash is a bit of a misnomer. In order to be a convert to something you first need to be indoctrinated into another religion, and I don’t believe that REST or web services is something upon which to take a religious stance. That being said, for the past decade or so, dogmatic vendors, developers, and enterprise architects have reinforced the notion that to do SOA properly, you must use web services.

ZapThink never believed that this was the case, and my own experience now shows that SOA can be done well in practice without using web services in any significant manner. Indeed, my experience shows that it is actually easier, less costly, and potentially more scalable not to use web services unless there’s an otherwise compelling reason.

The conversation about SOA is a conversation about architecture – everything that we’ve talked about over the past decade applies just as equally when the Services are implemented using REST or Web Services on top of any protocol, infrastructure, or data schema. While good enterprise architects do their work at the architecture level of abstraction, the implementation details are left to those who are most concerned with putting the principles of SOA into practice.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

SPECIAL PARTNER OFFER


SOA and EA Training, Certification, and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.


Monday, July 11, 2011

Enterprise architects increasingly leverage advanced TOGAF 9 for innovation, market response, and governance benefits

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Join The Open Group in Austin, Texas July 18-22 to learn more about enterprise architecture, cloud computing, and TOGAF 9. To register, go to http://www.opengroup.org/austin2011/register.htm.

Join a podcast discussion in conjunction with the latest Open Group Conference in Austin, Texas, to examine the maturing use of The Open Group Architecture Framework (TOGAF), and how enterprise architects and business leaders are advancing and exploiting the latest Version 9.

The panel explores how the full embrace of TOGAF, its principles, and methodologies are benefiting companies in their pursuit of improved innovation, responsiveness to markets, and operational governance.

Is enterprise architecture (EA) joining other business transformation agents as part of a larger and extended strategic value proposition? How? And what exactly are the best practitioners of TOGAF getting for their efforts in terms of business achievements?

Here to answer such questions, and to delve into the advanced use and expanded benefits of EA frameworks, are Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities for The Open Group, who is based in Shanghai, and Jason Uppal, Chief Architect at QR Systems, based in Toronto. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Uppal: This is a time for the enterprise architects to really step up to the plate and be accountable for real performance influence on the organization’s bottom line.

If we can exploit the assets we already have better, improve our planning program, and commit to very measurable and unambiguous performance indicators, that is a huge step forward for enterprise architects -- moving away from technology and frameworks to real problems that resonate with executives and align business and IT.

An example where EA has a huge impact in many organizations is ... we're able to capture the innovation that exists in the organization -- and make that innovation real, as opposed to suggestions that are thrown into a box and that nobody ever sees.

Say you define an end-to-end process using the architecture development method (ADM) in TOGAF. That gives me a way to capture that innovation at the lowest level and then evolve it over time.

Those people who are part of the innovation at the beginning see their innovation or idea progressing through the organization, as the innovation gets aligned to value statements, and value statements get aligned to their capabilities, and to the strategies and the projects.

Therefore, if I make a suggestion of some sort, that innovation or idea is seen throughout the organization through methods like the ADM, and the linkage is explicit and very visible to people. They therefore feel comfortable that their ideas are going somewhere and not just getting stuck.
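A toy data sketch, not part of TOGAF itself and with invented identifiers, of the explicit linkage being described: an idea is traceable to a value statement, a capability, and ultimately the projects that realize it, so its originator can see where it went.

```python
# Toy sketch of the traceability chain: idea -> value statement ->
# capability -> project. All identifiers are invented.

ideas = [{"id": "I-17", "text": "Self-service password reset",
          "value_statement": "V-3"}]
value_statements = {"V-3": {"text": "Reduce service-desk cost",
                            "capability": "C-9"}}
capabilities = {"C-9": {"name": "IT self-service", "projects": ["P-41"]}}
projects = {"P-41": {"name": "Identity portal rollout", "status": "planned"}}

def trace(idea):
    """Follow one idea through to the projects that realize it."""
    vs = value_statements[idea["value_statement"]]
    cap = capabilities[vs["capability"]]
    return [projects[p]["name"] for p in cap["projects"]]

for idea in ideas:
    print(idea["text"], "->", trace(idea))
```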

So one of the things about a framework like TOGAF is that, on the outside, it's a framework. But at the same time, when you apply it along with the other disciplines, it makes a big difference in the organization, because it allows IT organizations to ... actually exploit the current assets they already have.

And [TOGAF helps] make sure the new assets that they do bring into the organization are aligned to the business needs.

Forde: In the end, what you want to be seeing out of your architectural program is moving the key performance indicators (KPIs) for the business, the business levers. If that is related to cost reduction or is related to top-line numbers or whatever, that explicit linkage through to the business levers in an architecture program is critical.

Going back to the framework reference, what we have with TOGAF 9 is a number of assets, but primarily it’s a tool that’s available to be customized, and it's expected to be customized.

You can start at the top and work your way down through the framework, from this kind of über value proposition, right down through delivery to the departmental level or whatever. Or, you can come into the bottom, in the infrastructure layer, in IT for example, and work your way up. Or, you can come in at the middle. The question is what is impeding your company’s growth or your department’s growth, if those are the issues that are facing you.

If you come to the toolset with a problem, you need to focus the framework on the area that's going to help you get rapid value to solving your particular problem set. So once you get into that particular space, then you can look at migrating out from that entry point, if that's the approach, to expanding your use of the framework, the methods, the capabilities, that are implicit and explicit in the framework to address other areas.

One of the reasons that this framework is so useful in so many different dimensions is that it is a framework. It’s designed to be customized, and is applicable to many different problems.

Uppal: When we think about advanced TOGAF use ..., it allows us to focus on the current assets that are deployed in the organization. How do you get the most out of them? An advanced user can figure out how to standardize those assets and scale them so that they become reusable across the organization.

As we move up the food chain from a very technology-centric view to a more optimized and transformed scale, advanced users recognize that, with a framework like TOGAF, they have all these tools in their back pocket.

Now, depending on the stakeholder that they're working with, be that a CEO, a CFO, or a junior manager in the line of business, they can actually focus them on defining a specific capability that they are working toward and create transitional roadmaps. Once those transitional roadmaps are established, then they can drive that through.

An advanced user in the organization is somebody who has all these tools and frameworks available to them but, at the same time, is very focused on a specific value-delivery point in their scope.

It moves the conversation away from the framework debate and very quickly into what we do with it.



One beauty of TOGAF is that, because we get to define what the enterprise is and we're not told we have to interview the CEO on day one, I can define an enterprise from a manager's point of view or a CFO's point of view and work within that framework. That, to me, is an advanced user.

... I use methods like TOGAF to define the capabilities in a business strategy that [leaders] are trying to optimize, where they are, and what they want to transition to.

Very creative

This is where a framework allows me to be very creative, defining the capabilities and the transition points, and giving a roadmap to get to those transitions. That is the cleverness of architecture work, and where the real skill of an architect comes in: not in defining the framework, but in applying the framework to a specific business strategy.

... Because what we do in the business space, and we have done it many times with the framework, is to look at the value chain of the organization and then map it to the capabilities required.

Once we know those capabilities, I can squarely put the question to the executives and say, "Tell me which capability you want to be the best at. Tell me which capability you want to lead the market in. And tell me which capability you're willing to be mediocre at and sit below the industry benchmark on."

Once I get an understanding of which capability I want to be the best at, that's where I want to focus my energy.



Once I get an understanding of which capability I want to be the best at, that's where I want to focus my energy. For those I'm prepared to live with being mediocre at, I can put another strategy into place, ask how to outsource them, and focus the outsourcing deal on cost and service.
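As a rough illustration of that decision, here is a small, hypothetical Python sketch that records an ambition level per capability and derives a single sourcing rationale from it. The categories, capability names, and strategies are invented and are not prescribed by TOGAF.

# Hypothetical sketch: one declared ambition per capability, one sourcing
# rationale derived from it. All names and categories are illustrative.
from enum import Enum

class Ambition(Enum):
    MARKET_LEADING = "lead the market"
    COMPETITIVE = "stay at benchmark"
    BELOW_BENCHMARK = "accept below benchmark"

def sourcing_strategy(ambition: Ambition) -> str:
    # One rationale per capability, so the outsourcing contract is not
    # asked to serve cost goals one day and growth goals the next.
    if ambition is Ambition.MARKET_LEADING:
        return "keep in-house and invest"
    if ambition is Ambition.COMPETITIVE:
        return "keep in-house, standardize"
    return "outsource, contract on cost and service levels"

capabilities = {
    "Product formulation": Ambition.MARKET_LEADING,
    "Logistics": Ambition.COMPETITIVE,
    "Data-center operations": Ambition.BELOW_BENCHMARK,
}
for name, ambition in capabilities.items():
    print(f"{name}: {ambition.value} -> {sourcing_strategy(ambition)}")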

This is opposed to having a very confused contract with the outsourcer, where one day I'm outsourcing for cost reasons and the next day for growth reasons. It becomes very difficult for an organization to manage such contracts and bend them to provide the support it needs.

That conversation, at the beginning, is getting executives to commit to which capability they want to be best at. That is a good conversation for an enterprise architect.

My personal experience has been that if I get a call back from the executive, and they say they want to be best at every one of them, then I say, "Well, you really don’t have a clue what you are talking about. You can’t be super fast and super good at every single thing that you do."

One of the things that we've been looking at [at next week's conference] from the industry's point of view is that this conversation around the frameworks is a done deal now, because everybody has accepted that we have good enough frameworks. We're moving to the next phase: what we do with these frameworks.

Continuous planning

In Austin we'll be looking at how we're using the TOGAF framework to improve ongoing annual business and IT planning. We have a specific example that we're going to bring out, where we looked at an organization that was doing once-a-year planning. That was not a very effective approach for the organization. They wanted to change it to continuous planning, which means planning that happens throughout the year.

We identified four or five very specific, measurable goals for the program, such as accuracy of the plan, business goals achieved by the plan, time and cost to manage and govern the plan, and stakeholder satisfaction. Those are the areas where we're defining how a framework like TOGAF will be applied to solve a specific problem like enterprise planning and governance.
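By way of illustration only, here is a minimal Python sketch of how those measurable goals might be tracked each planning cycle rather than once a year. The KPI names follow the ones just mentioned, but the targets, actuals, and units are invented.

# Hypothetical sketch: the continuous-planning goals as re-checkable KPIs.
# Targets, actuals, and units are invented for illustration.
from dataclasses import dataclass

@dataclass
class PlanningKpi:
    name: str
    target: float      # desired value for this cycle
    actual: float      # measured value for this cycle
    higher_is_better: bool = True

    def met(self) -> bool:
        return self.actual >= self.target if self.higher_is_better else self.actual <= self.target

kpis = [
    PlanningKpi("Plan accuracy (% of forecast realized)", 85, 88),
    PlanningKpi("Business goals achieved by the plan (%)", 75, 70),
    PlanningKpi("Cost to manage and govern the plan ($k/quarter)", 120, 110, higher_is_better=False),
    PlanningKpi("Stakeholder satisfaction (1-5)", 4.0, 4.2),
]
for kpi in kpis:
    status = "on track" if kpi.met() else "needs attention"
    print(f"{kpi.name}: {status}")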

That's something we'll be bringing to our conference in Austin, at an event held on the Sunday. In the future, we'll be doing a lot more of these specific applications of a framework like TOGAF to sets of problems that are very tangible and that quickly resonate with executives, not just in IT, but across the entire organization.

Join The Open Group in Austin, Texas July 18-22 to learn more about enterprise architecture, cloud computing, and TOGAF 9. To register, go to http://www.opengroup.org/austin2011/register.htm.

In our future conferences, we're going to be addressing that and saying what people are specifically doing with these frameworks, not to debate the framework itself, but the application of it.

Forde: Jason is going to be talking as a senior architect on the applied side of TOGAF at the conference on Sunday [July 17]. For the Monday plenary, this is basically the rundown: we have David Baker, a Principal from PricewaterhouseCoopers, talking about business-driven architecture for strategic transformations.

This is a time now for the enterprise architects to really step up to the plate and be accountable for real performance influence on the organization’s bottom line.



Following that, Tim Barnes, the Chief Architect at Devon Energy out of Canada, will cover what they're doing from an EA perspective in their organization.

Then we're going to wrap up the morning with Mike Wolf, the Principal Architect for EA Strategy and Architecture at Microsoft, talking about moving from IT architecture to enterprise architecture.

This is a very powerful lineup of people addressing this business focus in EA and the application of it for strategic transformations, which I think are issues that many, many organizations are struggling with.

Capability-based planning

Uppal: The whole capability-based planning conversation was introduced in TOGAF 9, and we have further to go in developing that concept, as we learn how best to do some of these things.

When I look at capability-based planning, I expect my executives to look at it from the point of view of opportunities and threats. What is it you can get out there in the industry, if you have this capability in your back pocket? Don't worry first about how we're going to get it; let's decide whether it's worth getting.

Then we focus the organization on the long haul and ask: if nobody in the industry has this capability and we did have it, what would it do for us? That gives us another, longer-term view of the organization and of how we're going to focus our attention on capabilities.

One of the beauties of doing EA is that when we start EA from the strategic intent, that gives us a good 10-15 year view of what our business is going to be like. When we start architecture at the business strategy level, that gives us a six-month to five-year view.

Enterprise architects are very effective at holding two views of the world -- a 5-, 10-, or 15-year view, and a 6-month to 3-year view. If we don't focus on the strategic intent, we'll never know what is possible, and we'd always be working on what is possible within our organization, as opposed to what is possible in the industry as a whole.

Everybody is trying to understand what it is they need to be good at and what it is their partners are very good at that they can leverage.



Forde: In the kinds of environment that most organizations are operating in -- government, for-profit, not-for-profit organizations -- everybody is trying to understand what it is they need to be good at and what it is their partners are very good at that they can leverage. Their choices around this are of course critical.

One of the things you need to consider is: if you're going to give X out and have a third party manage and operate it, whatever process it might be, what do you have to be good at in order to make that effective? One of the things you need to be good at is managing third parties.

One of the advanced uses of EA is applying the architecture to those management processes. As things mature, you can see an effective organization managing a number of partners through an architected approach. So when we talk about what advanced users do, what I'm offering is that an advanced use of EA is applying it to third-party management.

Framework necessity

You need a framework. Think about what most major Fortune 500 companies in the United States do. They have multiple, multiple IT partners for application development and potentially for operations. They split the network out. They split the desktop out. This creates an amazing degree of complexity around multiple contracts. If you have an integrator, that’s great, but how do you manage the integrator?

There's a whole slew of complex problems. What we've learned over the years is that we tend to think of "outsourcing," or whatever term gets used, in the abstract, as one activity, when in fact it might involve anywhere from 5-25 partners. Coordinating that complexity is a major issue for organizations, and taking an architected approach to that problem is an advanced use of EA.
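As a hypothetical sketch of what an architected view of that multi-partner complexity could look like, the snippet below maps each contract to the capability it supports and the rationale for sourcing it out, so the partners are governed against one model rather than contract by contract. The partner names, capabilities, and rationales are invented.

# Hypothetical sketch: a simple registry of outsourcing contracts grouped by
# the capability each one supports. All entries are illustrative.
from collections import defaultdict

contracts = [
    {"partner": "Vendor A", "capability": "Desktop services", "rationale": "cost"},
    {"partner": "Vendor B", "capability": "Network operations", "rationale": "cost"},
    {"partner": "Vendor C", "capability": "Application development", "rationale": "growth"},
    {"partner": "Integrator X", "capability": "Service integration", "rationale": "coordination"},
]

by_capability = defaultdict(list)
for c in contracts:
    by_capability[c["capability"]].append(f'{c["partner"]} ({c["rationale"]})')

# One view per capability: who delivers it, and why it was sourced out.
for capability, partners in by_capability.items():
    print(f"{capability}: {', '.join(partners)}")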

Uppal: Chris is right. For example, there are two capabilities that an organization we worked with decided on ... that they wanted to be very, very good at.

We worked with a large concrete manufacturing company. If you're a concrete manufacturer, your biggest cost is the cement. If you can exploit your capability to optimize the cement and substitute it with other products and chemicals while getting the same performance, you can actually get a lot more return and higher margins for the same concrete.

The next thing is the cleverness of the architect -- how he uses his tools to actually define the best possible solutions.



In this organization, the concrete manufacturing process itself was the core competency. That had to be kept in-house. The infrastructure is essential to making the concrete, but it wasn't the organization's core competency, so those things had to be outsourced.

In this organization, we had to build a process for managing the outsourcers and, at the same time, a capability and a process for becoming the best concrete manufacturer. Those two essential capabilities were identified.

An EA framework like TOGAF actually allows you to build both of those capabilities, because it doesn't care. It just says, okay, I have a capability to build, and here is a set of instructions for the way you do it. The next thing is the cleverness of the architect -- how he uses his tools to actually define the best possible solutions.

Very explicit model

Our governance model is very explicit about who does what and when and how you monitor it. We extended this conversation using TOGAF 9 many times. At the end, when the capability is deployed, the initial value statement that was created in the business architecture is given back to the executive who asked for that capability.

We say, "This is what the benefits of these capabilities are and you signed off at the beginning. Now, you're going to find out that you got the capability. We are going to pass this thing into strategic planning next year, because for next year's planning starting point, this is going to be your baseline." So not only is the governance just to make sure it’s via monitoring, but did we actually get the business scores that we anticipated out of it.

... The whole cloud conversation becomes a very effective conversation within the IT organization.

When we think about cloud, we have actually done cloud before. This isn't a new thing, except that before we looked at it from a hosting point of view and a SaaS point of view. Now, cloud is going much further, where an entire capability is provided to you. That capability isn't just infrastructure being shared with somebody else; the entire industry's knowledge is in that capability.

This is becoming very popular, and rightfully so, not just because it's a sexy thing to have. In healthcare, especially in countries where healthcare is socialized and not monopolized, they're sharing this knowledge in the cloud with all the hospitals. It's becoming a very productive thing, and enterprise architects are driving it, because we're thinking of capabilities, not components.

IT interaction

Forde: Under normal circumstances the IT organizations are very good at interacting with other technology areas of the business. From what I've seen with the organizations I have dealt with, typically they see slices of business processes, rather than the end-to-end process entirely.

Even within the IT organizations typically, because of the size of many organizations, you have some sort of division of responsibilities. As far as Jason’s emphasis on capabilities and business processes, of course the capabilities and processes transcend functional areas in an organization.

To the extent that a business unit or a business area has a process owner end to end, they may well be better positioned to manage the BPM outsourcing-type of things. If there's a heavy technology orientation around the process outsourcing, then you will see the IT organization being involved to one extent or another.

The real question is, where is the most effective knowledge, skill, and experience around managing these outsourcing capabilities? It may be in the IT organization or it may be in the business unit, but you have to assess where that is.

Under normal circumstances the IT organizations are very good at interacting with other technology areas of the business.



That's one of the functions of the architecture approach: you need to assess what it is that's going to make you successful in this. If what you need happens to be in the IT organization, then go with that capability. If it's more effective in the business unit, then go with that. And perhaps the answer is that you need to combine the two, or create a new functional organization, for the specific purpose of meeting that activity and outsourcing need.

For most, if not all, companies, information and data are critical to their operation and planning activities -- day to day, month to month, annually, and over longer time spans. So the information needs of a company are absolutely critical in any architected approach to solutions or value-add activities.

I don’t think I would accept the assumption that the IT department is best-placed to understand what those information needs are. The IT organization may be well-placed to provide input into what technologies could be applied to those problems, but if the information needs are normally being applied to business problems, as opposed to technology problems, I would suggest that it is probably the business units that are best-placed to decide what their information needs are and how best to apply them.

The technologist’s role, at least in the model I'm suggesting, is to be supportive in that and deliver the right technology, at the right time, for the right purpose.

Join The Open Group in Austin, Texas July 18-22 to learn more about enterprise architecture, cloud computing, and TOGAF 9. To register, go to http://www.opengroup.org/austin2011/register.htm.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.
