Tuesday, September 25, 2007

Integration infrastructure approaches adjust to new world of SaaS and shared services

Read a full transcript of the discussion. Listen to the podcast. Sponsor: Cape Clear Software.

Change is afoot for the role and requirements of integration for modern software-as-a-service (SaaS) providers and enterprises adopting shared services models. Reuse is becoming an important issue, as are patterns of automation.

The notion of reuse of integration -- with added emphasis on integration as a service -- has prompted a different approach to integration infrastructure. The new demand is driven by ecologies of services, some from the Web "cloud," as well as the need to efficiently scale the delivery of services and applications composed of many disparate component services.

Integrations require reusable patterns, high performance, and many different means of access from clients. As a result, Cape Clear Software this week unveiled a major new version of its enterprise service bus (ESB), Cape Clear 7.5, with an emphasis on:
  • A new graphical editor, the SOA Assembly Editor, an Eclipse-based tool to graphically clip together elements of integrations.
  • Multi-tenanting additions to the ESB that allow segmentation of integrations, data, and reporting, as well as segmented use, reuse, and management of integrations based on the identities of inbound customers, clients, or businesses (see the sketch after this list).
  • A Business Process Execution Language (BPEL) management system with tools to monitor transactions and repair them when they fail, allowing operators to rebuild prior business information and ensure transactional integrity when running and maintaining large enterprise-class BPEL deployments.
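To make the multi-tenanting idea more concrete, here is a minimal sketch of how an ESB-style dispatcher might segment one shared integration by the identity of the inbound caller. It is illustrative only; the class names, tenant names, and interfaces are hypothetical and are not Cape Clear's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: one shared integration, segmented per tenant.
// None of these names come from Cape Clear 7.5; they are placeholders.
public class MultiTenantDispatcher {

    /** Per-tenant settings: where data lives and where usage gets reported. */
    record TenantConfig(String dataPartition, String reportingBucket) {}

    private final Map<String, TenantConfig> tenants = new HashMap<>();

    public void register(String tenantId, TenantConfig config) {
        tenants.put(tenantId, config);
    }

    /** One shared integration, invoked on behalf of many tenants. */
    public String invoke(String tenantId, String payload) {
        TenantConfig cfg = tenants.get(tenantId);
        if (cfg == null) {
            throw new IllegalArgumentException("Unknown tenant: " + tenantId);
        }
        // Route the message to the tenant's own data partition and record
        // usage against that tenant's reporting bucket, so the same
        // integration logic is reused while data and reporting stay segmented.
        String result = "processed " + payload + " in " + cfg.dataPartition();
        System.out.println("usage -> " + cfg.reportingBucket());
        return result;
    }

    public static void main(String[] args) {
        MultiTenantDispatcher esb = new MultiTenantDispatcher();
        esb.register("acme",   new TenantConfig("acme-db",   "acme-usage"));
        esb.register("globex", new TenantConfig("globex-db", "globex-usage"));
        System.out.println(esb.invoke("acme", "order-123"));
    }
}
```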
To help better understand the new landscape for integration models, I recently moderated a sponsored podcast discussion with Phil Wainewright, an independent consultant, director of Procullux Ventures, and fellow ZDNet SaaS blogger, as well as Annrai O’Toole, CEO of Cape Clear Software.

Here are some excerpts:
... We're getting more sophisticated about SaaS, because it's being taken on board in a whole range of areas within the enterprise, and people want to do integration.

There are two forms of integration coming to the fore. The first is where data needs to be exchanged with legacy applications within the enterprise. The second form of integration that we see -- not at the moment, but it’s increasingly going to be an issue -- is where people want to integrate between different services coming in from the cloud. It’s a topic that’s familiar when we talk about mashups, fairly simple integrations of services that are done at the browser level. In the enterprise space, people tend to talk about composite applications, and it seems to be more difficult when you are dealing with a range of data sources that have to be combined.

People have realized that if you're doing integration to each separate service that's out there, then you're creating the same point-to-point spaghetti that people were trying to get away from by moving to this new IT paradigm. People are starting to think that there's a better way of doing this. If there's a better way of delivering the software, then there ought to be a better way of integrating it together as well.

Therefore, they realize that if we share the integration, rather than building it from scratch each time, we can bring into the integration field some of the benefits that we see with the shared-services architecture or SaaS. ... The new generation of SaaS providers are really talking about a shared infrastructure, where the application is configured and tailored to the needs of individual customers. In a way, each customer is segmented off from the way the infrastructure works underneath.

When you build an integration, you always end up having to customize it in some way for different customers. Customers will have different data formats. They’ll want to access it slightly differently. Some people will want to talk to it over SOAP. Some won't, and they’ll want to use something like REST. Or they might be further behind and only able to send it FTP drops, or something like that.

Multi-tenanting is one solution to the problem. The other is what we call multi-channel, which is the ability to have an integration, and make it available with different security policies, different transports, and different transformations going in and out.

A combination of multi-tenanting and multi-channeling allows you to build integrations once, make them accessible to different users, and make them accessible in different ways for each of those different customers. It gives you the scalability and reuse you need to make this model viable.
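As a rough illustration of that multi-channel idea -- a sketch under assumed names, not the product's actual configuration model -- the same integration logic can be bound to several inbound channels, each carrying its own transport, transformation, and security policy.

```java
import java.util.List;
import java.util.function.Function;

// Sketch: one reusable integration, exposed over several channels.
// Channel names, transforms, and policies here are hypothetical.
public class MultiChannelExample {

    /** The single, shared piece of integration logic. */
    static String integrate(String canonicalMessage) {
        return "routed: " + canonicalMessage;
    }

    /** A channel binds a transport to a transformation and a security policy. */
    record Channel(String transport,
                   Function<String, String> toCanonical,
                   String securityPolicy) {

        String handle(String rawInbound) {
            // Each channel normalizes its own wire format into the canonical
            // form the integration expects, then applies its own policy.
            System.out.println("[" + transport + "] policy=" + securityPolicy);
            return integrate(toCanonical.apply(rawInbound));
        }
    }

    public static void main(String[] args) {
        List<Channel> channels = List.of(
            new Channel("SOAP",     xml  -> "from-soap:" + xml,  "WS-Security"),
            new Channel("REST",     json -> "from-rest:" + json, "HTTPS + API key"),
            new Channel("FTP drop", file -> "from-file:" + file, "host allow-list")
        );
        channels.forEach(c -> System.out.println(c.handle("purchase-order")));
    }
}
```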

One point worth bearing in mind here is that this problem is going to get solved, because the economic reality suggests that we must solve it. First, the payoff for getting it right is huge. Second, the whole model of SaaS won’t be successful unless we skin the integration problem. We don’t want the world to be limited to just having Salesforce.com with its siloed application.

We want SaaS to be the generic solution for everybody. That’s the way the industry is going, and that can only happen by solving this problem. So, we’re having a good stab at it, and I'll just briefly address some of the things that I think enable us to do it now, as opposed to in the past. First, there is a standardization that’s taken place. A set of standards has been created around SOA, giving us the interoperability platform that makes it possible in a way that was never possible before. Second is an acceptance of this shared-services, hosted model.

Years ago, people would have laughed at you and said, "I’m going to trust all my customer data to a provider in the cloud?" But they’re doing it happily because of the economics of it. The whole trend toward trusting providers with outsourced offerings means that people will be more likely to trust integrations out there as well, and a lot of the technology to do this has been around for quite some time.

In enterprises you’re seeing this big move to virtualization and shared services. They’re saying, "Why are we having development teams build integration in all these branch offices at all these locations around the world? It’s extremely wasteful. It's a lot of skill that we've got to push out, and there are a lot of things that go wrong with these. Can't we consolidate all of those into a centralized data center? We’ll host those integrations for those individual business units or those departments, but we'll do it here. We’ve got all the expertise in one place."

Those guys are delighted, because at the individual local level they don’t carry all the costs and all the complexity of dealing with all the issues. It’s hosted out in their internal cloud. We haven't seen enough data points on that yet, but this hosted integration model can work. We’ve got it working for pure-play SaaS companies like Workday, and we’ve got it working for a number of large enterprises. There is enough evidence for us to believe that this is really going to be the way forward for everybody in the industry.
Read a full transcript of the discussion. Listen to the podcast. Sponsor: Cape Clear Software.

Monday, September 24, 2007

Integrien deepens analytics, improves interoperability and usability in Q4's Alive 6.0 release

Read a full transcript of the discussion. Listen to the podcast. Sponsor: Integrien Corp.

The movement of IT and systems management to the end-to-end business service value level has been a long time in coming. Yet the need has never been higher. Enterprises and on-demand application providers alike need to predict how systems will behave under a variety of conditions.

Rather than losing control to ever-increasing complexity -- and gaining less and less insight into the root causes of problematic applications and services -- operators must gain the ability to predict and prevent threats to the performance of their applications and services. Firefighting against application performance degradation in a dynamic service-oriented architecture (SOA) just won't cut it.

By adding real-time analytics to their systems management practices, IT operators can determine the normal state of how systems should be performing. Then, by measuring the characteristics of systems under many conditions over time, administrators can gain predictive insights into their entire operations, based on a business-services level of performance and demand. They can stay ahead of complexity, and therefore contain the costs of ongoing high-performance application delivery.
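A minimal sketch of the underlying idea (a generic rolling baseline, not Integrien's algorithm): learn what "normal" looks like for a metric from its history, then flag readings that fall well outside that learned band.

```java
// Sketch of baselining a metric and flagging abnormal readings.
// This is a generic running mean / standard deviation check, shown only to
// illustrate the idea of learning "normal" from history; it is not Alive's
// actual analytics.
public class MetricBaseline {

    private long   count;
    private double mean;
    private double m2;   // running sum of squared deviations (Welford's method)

    /** Fold one observation into the learned baseline. */
    public void observe(double value) {
        count++;
        double delta = value - mean;
        mean += delta / count;
        m2   += delta * (value - mean);
    }

    /** True when a reading sits more than k standard deviations from normal. */
    public boolean isAbnormal(double value, double k) {
        if (count < 2) return false;               // not enough history yet
        double stdDev = Math.sqrt(m2 / (count - 1));
        return Math.abs(value - mean) > k * stdDev;
    }

    public static void main(String[] args) {
        MetricBaseline responseTime = new MetricBaseline();
        for (double ms : new double[] {110, 95, 102, 98, 105, 101, 97}) {
            responseTime.observe(ms);
        }
        System.out.println(responseTime.isAbnormal(104, 3));  // false: within normal band
        System.out.println(responseTime.isAbnormal(450, 3));  // true: worth investigating
    }
}
```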

I recently had a podcast discussion with Mazda Marvasti, the CTO of Integrien Corp., on managing complexity by leveraging probabilistic systems management and remediation. I learned that Integrien's Alive suite uses probabilistic analysis to predict IT systems problems before they lead to costly application outages. Furthermore, I received some details on the next Alive 6.0 release in Q4 of this year.

Here are some excerpts:
Can you give us some sense of the direction that the major new offerings within the Alive product set will take?

Basically, we have three pillars that the product is based on. First is usability. That's a particular pet peeve of mine. I didn't find any of the applications out there very usable. We have spent a lot of time working with customers and working with different operations groups. ... The second piece is interoperability. The majority of the organizations that we go to already have a whole bunch of systems, whether it be data collection systems, event management systems, or configuration management databases, etc.

Our product absolutely needs to leverage those investments -- and they are leveragable. But even those investments in their silos don’t produce as much benefit to the customer as a product like ours going in there and utilizing all of that data that they have in there, and bringing out the information that’s locked within it.

The third piece is analytics. What we have in the product coming out is scalability to 100,000 servers. We've kind of gone wild on the scalability side, because we are designing for the future. Nobody that I know of right now has that kind of a scale, except maybe Google, but theirs is basically the same thing replicated thousands of times over, which is different from the enterprises we deal with, like banks or health-care organizations.

A single four-processor Xeon box, with Alive installed on it, can run real-time analytics for up to 100,000 devices. That’s the level of scale we're talking about. In terms of analytics, we've got three new pieces coming out, and basically every event we send out is a predictive event. It’s going to tell you this event occurred, and then that this other set of events has a certain probability of occurring within a certain timeframe.

Not only that, but then we can match it to what we call our "fingerprinting." Our fingerprinting is a pattern-matching technology that allows us to look at patterns of events and associate them with a particular problem. Recognized patterns indicate particular problems, and those become the predictive alerts for other problems.
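To show the shape of the idea -- a hypothetical sketch, not the vendor's engine -- a "fingerprint" can be modeled as a known pattern of events; when the observed events cover a fingerprint's leading pattern, the remaining events become a predictive alert with a probability and a time window. The fingerprints, probabilities, and event names below are invented for illustration.

```java
import java.util.List;
import java.util.Set;

// Sketch: match observed events against known problem "fingerprints" and
// emit a predictive alert. Not Integrien's actual algorithm or data model.
public class FingerprintMatcher {

    record Fingerprint(String problem,
                       List<String> leadingEvents,   // what tends to happen first
                       List<String> followingEvents, // what tends to happen next
                       double probability,
                       int windowMinutes) {}

    /** If the observed events cover a fingerprint's leading pattern, predict the rest. */
    static void predict(Set<String> observed, List<Fingerprint> known) {
        for (Fingerprint fp : known) {
            if (observed.containsAll(fp.leadingEvents())) {
                System.out.printf(
                    "Predictive alert: %s -- %s likely (p=%.2f) within %d minutes%n",
                    fp.problem(), fp.followingEvents(), fp.probability(), fp.windowMinutes());
            }
        }
    }

    public static void main(String[] args) {
        List<Fingerprint> known = List.of(
            new Fingerprint("database connection pool exhaustion",
                List.of("db.connections.high", "app.queue.depth.rising"),
                List.of("app.response.time.spike", "transaction.timeouts"),
                0.85, 20));

        predict(Set.of("db.connections.high", "app.queue.depth.rising", "cpu.normal"), known);
    }
}
```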

Now, with SOA and virtualization moving into application-development and data-center automation, there is a tremendous amount of complexity in the operations arena. You can’t have the people who used to have the "tribal knowledge" in their head determining where the problems are coming from or what the issues are.

The problems and the complexity have gone beyond the capability of people just sitting there in front of screens of data, trying to make sense out of it. So, while we gained efficiency from application development, we still need consistency of performance and availability, and all of this has added to the complexity of managing the data center.

That’s how the data center evolved from being totally deterministic -- meaning that you knew every variable, could measure it, and had very specific rules telling you, if certain things happened, what they were and what they meant -- all the way to the non-deterministic era we are in right now.

Now, you can't possibly know all the variables, and the rules that you come up with today may be invalid tomorrow, all just because of change that has gone on in your environment. So, you cannot use the same techniques that you used 10 or 15 years ago to manage your operations today. Yet that’s what the current tools are doing. They are just more of the same, and that’s not meeting the requirements of the operations center anymore.

I’ve been working on these types of problems for the past 18 years. Since graduate school, I’ve been analyzing data and extracting information from disparate sources. I went to work for Ford and General Motors -- really large environments. Back then, it was client-server, and how those environments were being managed. I could see the impending complexity, because I saw the level of pressure on application developers to develop more reusable code and to develop faster with higher quality.

The run book is missing that information. The run book only has the information on how to clean it up after an accident happens.

That’s the missing piece in the operations arena. Part of the challenge for our company is getting the operations folks to start thinking in a different fashion. You can do it a little at a time. It doesn’t have to be a complete shift in one fell swoop, but it does require that change in mentality: now that I am actually forewarned about something, how do I prevent it, as opposed to cleaning up after it happens?
Read a full transcript of the discussion. Listen to the podcast. Sponsor: Integrien Corp.

Before we know who owns the SOA business case, how about simple business processes?

There's a good article on "owning" the business case for SOA on SearchWebServices.com. Some of my most respected analysts are quoted.

But is the question posed a relevant one? While making the business case for SOA is and will be a fascinating topic for some time, we may be jumping the gun.

From where I sit, just about everyone that has a strategic role in IT and business decisions at an enterprise has an "ownership" stake in SOA. It's that pervasive. The COO may be the best person under many current organizational charts to see all the moving SOA parts.

Yet buy-in and inclusiveness -- both wide and deep -- for SOA are essential, so it can't really fall to any one person. Assigning "business value" ownership is too abstract for real-world companies to act on as they begin embracing SOA. SOA is ubiquitous in its effects. Positioning SOA as an abstraction is holding back its embrace and adoption.

So let's look at more practical questions on SOA and business value, before we go shooting for the moon. Sadly, in even the most progressive enterprises, the ownership of a single business process is ambiguous. Organizations have been created for decades based on the notion of decentralization -- which is just another way of breaking up complexity into small chunks and assigning responsibility for the chunks, often at the expense of minding the whole. Very few individuals or teams are defined or incentivized to manage an entire business process. Yet this is an essential stepping stone to SOA, and to eventually making the business case for SOA.

We see attempts to proffer SOA from the top down, with even less emphasis on adoption from the bottom up. What I'm saying is: also, and perhaps predominantly, build it from the middle out. Create the new middle for SOA at the business process level, and then evangelize it in any which way.

In effect, SOA and its foundational core, business processes, are fighting back against the long-term tide of decentralization and IT specialization. SOA says you can now make the chunks of discrete IT resources relate far better, so why not begin to look at an entire process and work to make it more efficient and more flexible? Why not extract the best of specialization and improve, refine, and reuse the parts in the context of the general business-requirements whole? See the forest and the trees. Make better business decisions -- operationally and strategically -- as a result.

As Dr. Paul Brown points out in a book I recently helped review via a sponsored podcast, Succeeding with SOA: Realizing Business Value Through Total Architecture (Addison-Wesley, April 2007), the business process is the right level to assign "ownership." Now.

When analysts or architects -- as well as their teams -- begin to see themselves as managing and evangelizing on a business process level, then SOA can begin to make strides as a concept and methodology more broadly. To try to inject SOA into a company broadly, then discretely, is putting the cart before the horse. Better to rearrange all the horses and carts based on the right trips for the right loads, making it easier to change horses and carts as needed.

I like the idea of cross-functional teams (horses, carts, drivers, and caravans) created that serve a business process lifecycle. These would be pods (perhaps virtual in nature) of tightly-coordinated people with the right mix of skills and experience -- specific and general, technical and business-oriented, able to communicate as a team on many levels.

Like the Ray Bradbury book Fahrenheit 451, where individuals learn and carry whole books in their memories as a way to preserve the books and their knowledge, business process pods would retain and refine the essence of a business process, care for it, and extol its virtues throughout an enterprise. They would bridge the chasms across the constituent services, but at the higher business-value level.

We've heard talk of a "T" person from SOA evangelists at IBM, whereby the horizontal bar in the "T" represents business acumen, and the vertical bar represents technical depth. But I like the idea of the cross-functional pod better -- a team of, by, and for the business process.

The ownership of a business process (never mind SOA) is too much for one person. A multi-talented team can provide the wetware and organizational dynamism to get SOA started on a practical, middle level -- that of a business process as a productivity entity. This step is what's needed before we start assigning ownership for the business case for SOA.