Tuesday, May 3, 2011

Rapidly evolving IT trends make open source, agile application integration platforms more important than ever

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Register for CamelOne. Sponsor: FuseSource.

Enterprise integration requirements are rapidly shifting to accommodate such trends as cloud computing, the explosion of mobile devices, and increased demand for extended enterprise business processes.

Application-to-application integration inside an enterprise's four walls is well understood, but very quickly the demands placed on integration are spanning multiple enterprises, multiple types of applications, and a variety of service providers. As a result, software as a service (SaaS) and cloud computing are joining with legacy systems to form new and varied hybrid models that bring whole new sets of integration needs and challenges.

Once these newer breeds of integrations are set up, can the old, brittle management and upkeep of them suffice -- or will agility and rapid upgrades and innovations require new tools to make integration a lifecycle function with ongoing management and more automated governance?

In the latest BriefingsDirect enterprise IT discussion, the panel examines how open-source integration projects like Apache Camel and lightweight integration implementations and graphical tools are making developers and architects more agile. At the same time, these open-source approaches are proving less vulnerable to the complexity, fragility, and cost that often plague aging commercial middleware integration products. [Learn more about the CamelOne conference May 24 in Washington, DC.]

Dana Gardner, Principal Analyst at Interarbor Solutions, recently sat down with Rob Davies, Chief Technology Officer at FuseSource, and Debbie Moynihan, Vice President of Marketing at FuseSource, to examine the need for innovative, new, open and agile integration capabilities.

Here are some excerpts:
Gardner: The need for integration is increasing. The things that need to be integrated are increasing, rapidly. Open source is well established. When you put these factors together, this perhaps spells a historic shift. Has the ability to integrate openly become an essential ingredient of businesses?

Davies: Sometimes, it's difficult to see things happening like that if you're right in the middle of it. We probably are at that shift right now.

We’ve talked about the cloud environment. There are also social networking, SaaS, and mobile devices, and you need to link all of those together. It's coming to the point where organizations won’t have a choice other than to use open source as a way to keep up with the pace of change.

We're probably at a point now where we’re going to see that the traditional model of providing software is going to dwindle over time, probably pretty rapidly, as organizations realize that they need the flexibility and the ability to change what they’re doing very quickly.

Future-proofing applications

You have to start thinking about how you're going to future-proof your applications right from the beginning to adapt to changes in their environments. You have to architect in how you’re going to integrate and future-proof your applications, because it does get more costly if you do it as an afterthought.

Gardner: Many of the SaaS providers are doing multitenancy and providing applications as services on demand at a very attractive and aggressive price point. They're leveraging open source on the back end, I have to imagine. Do you have any insight into what the service providers themselves are building with?

Davies: Most applications now -- in particular in the cloud -- are using open source at the back end. We can't give you any specific details of vendors that are doing that, but I know they're using open-source projects, and not just the SaaS vendors; some of the other existing product vendors use open source as well to enable their products.

We certainly see open source as mainstream now; it has become the first choice for building any kind of application or service. People are no longer asking whether they should use open source, but why they shouldn't.

Gardner: Debbie, why do people need to rethink integration?

Moynihan: The business models are changing and people are being asked to do more with less. Teams and applications are more distributed than ever.

There are a lot of new technologies coming out that people are struggling to learn and figure out how to incorporate into their infrastructure: cloud, mobile, and the explosion of data that enterprises are trying to make sense of. Not to mention the social media technologies that people are being asked about and wondering how to incorporate into their enterprise infrastructure.

People are being asked for skills they've never needed before, and more and more people are being asked to perform IT tasks. It isn't just highly skilled developers; business analysts and people who have never done integration before are being asked to take on integration work.

They're not sure how to keep up with all of these changes. Costs are a problem, because essentially everyone has the same or a smaller budget going forward, and many teams have fewer people to do what they've been doing.

At FuseSource, we've seen a lot of people looking more and more to open source to solve some of these problems ... . There's a lot of flexibility. When the environment changes and new technologies come out, you need to integrate new things into your environment.

Community members, when they see a problem or a new technology, just make it happen. They can add, expand, and modify what's involved in the various open-source integration projects without the overhead and bureaucracy of some traditional software development environments.

Gardner: In the past, when we had a shift in computing, we'd bring in a new set of applications, we'd update our platforms, and then think about integrating them. It was a sequential process and it could take three to five years to go through something like that.

We don’t really have that luxury anymore. Now things are happening in a simultaneous fashion. So integration really can't be an afterthought, but needs to be part-and-parcel with how you go about designing and implementing your applications.

Doesn’t open source, in a sense, allow for a compression of the time that we’ve traditionally taken with commercial products?

Moynihan: Absolutely. Open source is a componentized, lightweight approach. As people develop their applications, they develop them in such a way that they can be broken apart in new and different ways down the road, and it's very transparent. It makes it easier over time to further integrate what you've built and to make changes as you need to.

Gardner: One of the other aspects of this that I'm seeing in the market is that more people need to take part in integration. It can't just go through a bottleneck of "beard-and-sneaker guys" in the back room who can do coding. Integration needs to be part of process innovation. That means we need to elevate it out to a wider group of individuals, maybe as many as possible that are on the front lines of process innovation and analysis.

What's being done about the integration that we've been describing to make it more, well ... applicable?

Moynihan: On April 11, we announced the general availability of new graphical tooling for Apache Camel. [Users can download a trial version of the plug-in, which includes some of the functionality of the fully paid version found in the subscription-based Fuse Mediation Router.]

The addition of graphical tooling makes it easier for more people to do integration development. They don't have to write code. They can use a drag-and-drop environment to select the integration patterns that they want to implement, and the software will implement them. They can test them and deploy them into production as well.

The addition of tooling is going to help broaden how many people can do integration, and we're real excited. We've been doing a beta program since the end of January with over 500 participants. Rob mentioned the breadth of all the components and how hot Apache Camel has been. We're not surprised that more and more people want to use it. So, the idea of having tooling on top of it is really attractive to users.
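
[For readers who have not worked with Camel, the routes behind this kind of tooling are ordinary Camel code. Below is a minimal sketch, written against the Camel 2.x Java DSL of the period, of the Content-Based Router pattern -- roughly what a drag-and-drop route in the IDE corresponds to. The directory names and XPath expression are illustrative, not taken from the product, and the same route could equally be expressed in Camel's XML DSL and deployed to a container such as ServiceMix.]

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.builder.xml.XPathBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRouterExample {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                // Content-Based Router pattern: pick up XML orders from a folder
                // and route each one based on the value of its <type> element.
                from("file:data/inbox?noop=true")                  // illustrative input folder
                    .choice()
                        .when(XPathBuilder.xpath("/order/type = 'priority'"))
                            .to("file:data/outbox/priority")
                        .otherwise()
                            .to("file:data/outbox/standard");
            }
        });
        context.start();
        Thread.sleep(10000);   // keep the JVM alive long enough for the demo
        context.stop();
    }
}
```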

Gardner: So, what's the name and where do you go to find out more about them?

Moynihan: The Fuse IDE for Camel is the name. It plugs into an Eclipse environment and you can get it at fusesource.com.

Gardner: It strikes me that when we begin to talk about integration, I'd usually mention service-oriented architecture (SOA), but that's sort of yesterday's buzzword. We're now into cloud, hybrid, and mobile. But, from an architectural perspective, you can't really scale and leverage these open components without the proper underpinning, typically an enterprise service bus (ESB) architecture.

Rob, help me understand why doing this correctly from an architecture -- not just an open-source -- perspective is really important as well.

Davies: You hit on the core things about the SOA and ESB architectures. We see people using Apache Camel, in particular, and some of our other open-source projects because they want flexibility. They want to leverage a service bus, expose things as services over that bus, and use whatever transports make sense to enable it, be that messaging, HTTP, or other means.

Application integration

At the same time, you also want flexibility in how you do application integration. For some services you very much need that enterprise service bus in place, but in other cases you want to integrate more locally, where the integration points are.

The approach that we take is to enable you to do both. You can embed Apache Camel inside an application server, or inside your application itself. If you want to use it in a more traditional sense, you can deploy it into ServiceMix: you define your integrations, deploy them into ServiceMix, and let the container manage them.

Having that flexibility as well means that you can have the right architecture for your particular solution. If you look at how people would do the integration before, they’d have to get an ESB, and that would force the whole architecture of how they do things. When you’ve got more flexibility, it means that you can make the right architecture choices that you need, and you're not constrained to one particular style of integration.
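
[As a rough illustration of that flexibility, the sketch below embeds a CamelContext directly inside a plain Java application, again using the Camel 2.x APIs of the period; the endpoint URIs and the quote bean are made up for the example. Swapping the direct: URI for a JMS or HTTP endpoint, with the matching component on the classpath, changes the transport without touching the rest of the route, and the same RouteBuilder could instead be packaged up and deployed into ServiceMix.]

```java
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class EmbeddedCamelExample {

    // A plain application bean; nothing Camel-specific about it.
    public static class QuoteService {
        public String lookup(String symbol) {
            return symbol + " -> 42.00";   // placeholder lookup
        }
    }

    public static void main(String[] args) throws Exception {
        CamelContext camel = new DefaultCamelContext();
        camel.addRoutes(new RouteBuilder() {
            public void configure() {
                // In-process entry point. Swapping this URI for, say,
                // "jms:queue:quotes" or "jetty:http://0.0.0.0:8080/quotes"
                // (with the matching component on the classpath) changes the
                // transport without touching the rest of the route.
                from("direct:quote")
                    .bean(QuoteService.class, "lookup")
                    .to("log:quotes");
            }
        });
        camel.start();   // the routes now run inside the host application's JVM

        // Call the embedded route from ordinary application code.
        ProducerTemplate template = camel.createProducerTemplate();
        String answer = template.requestBody("direct:quote", "FUSE", String.class);
        System.out.println(answer);

        camel.stop();
    }
}
```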

Gardner: I'm facing a lot of questions more recently about how to architecturally cross the domains that we've mentioned -- SaaS, cloud, on-premises, traditional architecture, and private cloud architecture.

Does the service-bus approach and the open-source approach also give us some sort of a path or vision for how to go about this?

Davies: Having open source enables you to have the insight into how the integration application works.

If you look back just a couple of years, when people were starting to use the cloud, they weren't even thinking about hybrid clouds. Now, we're seeing more and more of our customers looking to hybrid clouds and having a private cloud for applications.

When they need the capacity, obviously they can get that capacity in a public cloud. But, to have all those pieces working together seamlessly, they need the agility that you get from an integration solution that can be deployed on a public cloud, locally, or a combination of both. That's something that you can only get from software that has evolved at the same pace as the demands of the environment.

You can only really get that speed of innovation to keep up with the way the environment is changing by choosing open source, because the open-source community itself is driving the projects to keep up with the demands.

So, you have to try to move outside of a traditional release cycle that you would get from a traditional product company. You don’t really have any other alternatives, if you want to keep up, than to look at open-source projects, the Apache ones in particular. [Learn more about the CamelOne conference May 24 in Washington, DC.]

Apache projects certainly hit the right notes: you've got a very business-friendly license in the Apache license, very active communities, and diversity within those communities. You know these projects are going to live beyond the tenure of particular individuals on them.

Support and consultancy

You also have the benefit of companies like FuseSource, which created the projects in the first place and are there to provide support and consultancy if you need it. You get the best of a dynamic community and a dynamic project, and you also get the security of a professional company to back it up.

Gardner: How rapidly are the iterations within the Apache project, within Camel in particular, happening? How rapidly is innovation taking place?

Very fast pace

Davies: It’s happening at a very fast pace. Releases out of Apache typically come every three months, but within that three-month period other components may have gone into the Apache Camel framework. Because it's open source, people can release their own components into an open-source environment, or develop them separately without necessarily releasing them to Apache, just to get the functionality out.

That pace of change is very fast and it’s near real-time. When the need comes up, within a few days or a week, you would probably find someone who has already written that integration component that you need and it’s available. ... If you’ve got an open-source framework, you can actually have an insight into how the project works.

After we launched Apache Camel at the Apache Software Foundation, we provided a number of default integration components for Camel. But, as soon as they got out there and the community started to use them and saw the benefits of using them, we saw no end of contributions. People contributed adapters to weird and wonderful systems, and contributed them right back into the Apache project. [Learn more about the CamelOne conference May 24 in Washington, DC.]

We know from our customers that they’ve got specific needs. They’ve got legacy applications. Because we've gone to the effort of making sure that it's very easy to add a new component into Apache Camel, it's very straightforward for someone to add in extra functionality.

For example, if you want to write a component for a legacy mainframe application, you could very easily do that in a matter of hours. The old approach would take weeks, months, maybe even years, especially if you don't have access to the source code. So, you've got that added flexibility.

The fact that it's an open-source project at Apache means you can get feedback instantly, if you’ve got issues and problems. Of course, if you want professional help, there’s FuseSource as well. We have our own community at fusesource.com. So, all these things combined means that you have more flexibility and a much more agile way of doing integration.
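
[To give a sense of what adding a component involves, here is a bare-bones, producer-only sketch against the Camel 2.x component API. The legacy: URI scheme, the class names, and the placeholder call to the back-end system are all hypothetical. Registering it with camelContext.addComponent("legacy", new LegacyComponent()) would then make URIs such as legacy:SOMESCREEN usable from any route.]

```java
import java.util.Map;

import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultComponent;
import org.apache.camel.impl.DefaultEndpoint;
import org.apache.camel.impl.DefaultProducer;

// Skeleton of a hypothetical "legacy:" component for a back-end system.
public class LegacyComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) {
        return new LegacyEndpoint(uri, this, remaining);
    }
}

class LegacyEndpoint extends DefaultEndpoint {
    private final String screen;   // e.g. legacy:SOMESCREEN -> "SOMESCREEN"

    LegacyEndpoint(String uri, LegacyComponent component, String screen) {
        super(uri, component);
        this.screen = screen;
    }

    public Producer createProducer() {
        return new DefaultProducer(this) {
            public void process(Exchange exchange) {
                String request = exchange.getIn().getBody(String.class);
                // Placeholder: call the legacy system here (a vendor SDK, terminal
                // emulation, a socket protocol, ...) and set the reply as the body.
                exchange.getIn().setBody("reply from " + screen + " for " + request);
            }
        };
    }

    public Consumer createConsumer(Processor processor) {
        throw new UnsupportedOperationException("This sketch only supports the producer side");
    }

    public boolean isSingleton() {
        return true;
    }
}
```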

Gardner: What's happening now in the community? I understand you have a conference that’s coming up May 24, a first of its kind. Why is this a good time to be pulling together the Camel Community?

Moynihan: We’re really excited. We have an event coming up in May called CamelOne. Although the event is named for Camel, it's actually about open-source integration and messaging overall. We chose the name because Camel is a really great way for people to get started, and it brings together the entire community.

Camel is a great foundation and CamelOne is an event to bring together users of Camel and other open-source integration and messaging technologies to learn more about Camel, open-source messaging like ActiveMQ, and ESBs like Apache ServiceMix.

Camel provides a basic foundation and a terminology of well-defined patterns. The integration patterns themselves are very well-defined, but what's happening is all the different ways in which you connect and what you are connecting to have been changing and evolving over time.

Other people are going to be doing more in-depth management of many integration patterns and they may need to know all the nuances of an ESB platform. The focus of CamelOne is to bring people together to understand, learn about, and meet each other and to grow this community of open-source integration users.

Gardner: So, this is CamelOne, May 24, in the Washington D.C. area. Why Washington D.C.? Is there a lot of this going on in the public sector?

Central location

Moynihan: Actually, we do have a lot of users in the Washington D.C. area. We also thought it was a central location, where people could come not only from anywhere in the US but also from other regions of the world; there are a lot of direct flights to that location. And we do have a lot of users in the area. For example, the Federal Aviation Administration (FAA) is going to be speaking, and they have selected open-source integration for the next generation of their services infrastructure.

Since they connect with a lot of other agencies, there is a lot of interest in learning more specifically about that program and about the technologies that it's built upon, because a lot of other agencies need to connect.

Gardner: And how about more information on CamelOne? It's simple, I suppose; a search on CamelOne will get you there.

Moynihan: Yes, camelone.com is the website as well.

Gardner: Now, you guys have been involved with a series of books and you have something new coming out in that series. Tell me about that.

Camel in Action

Moynihan: There are a couple of books that recently have come out. One is Camel in Action, which is fantastic for people who want to get going with Camel and learn how to use and deploy it. Rob is coauthor of the ActiveMQ in Action book, which has come out in print recently from Manning Publications.

Davies: ActiveMQ in Action is really a step-by-step book that goes through all the different use cases for ActiveMQ, right from getting started and what messaging is about. It walks you through different deployment options, all the way up through using clusters of ActiveMQ brokers and using ActiveMQ across a wide area network, so you can connect geographically dispersed locations.

It shows you how to tune the performance of ActiveMQ and get the best out of it. So it's a very comprehensive book about how to use ActiveMQ. It's complementary to Camel in Action as well, which goes through all the different patterns you can use.

Camel in Action doesn't just talk about using Camel. It covers the integration patterns themselves and then describes how you can implement them using Apache Camel, and you can use Apache Camel with ActiveMQ. ActiveMQ can also embed Apache Camel, so you can have Camel routes running inside the broker. The two of them are very complementary.
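
[A small sketch of that combination, assuming the ActiveMQ 5.x and Camel 2.x APIs of the period and illustrative queue names: the vm:// transport below starts a non-persistent broker inside the same JVM, so the broker and the Camel route share one process, while a standalone broker can likewise embed Camel routes through its XML configuration.]

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CamelWithActiveMQExample {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext camel = new DefaultCamelContext();

        // The vm:// transport spins up a non-persistent ActiveMQ broker inside
        // this JVM, so broker and routes share a single process.
        camel.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("vm://localhost?broker.persistent=false"));

        camel.addRoutes(new RouteBuilder() {
            public void configure() {
                // Illustrative queue names: log every incoming order and copy it
                // to an audit queue.
                from("activemq:queue:orders.incoming")
                    .to("log:orders")
                    .to("activemq:queue:orders.audit");
            }
        });
        camel.start();

        // Drop a test message onto the queue and give the route a moment to run.
        camel.createProducerTemplate()
             .sendBody("activemq:queue:orders.incoming", "<order id='1'/>");
        Thread.sleep(2000);

        camel.stop();
    }
}
```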

On our website, fusesource.com, we also run live webinars on a regular basis, and we have a lot of archived webinars that walk you through technical tutorials on how to get started with these various open-source projects.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Register for CamelOne. Sponsor: FuseSource.


Monday, May 2, 2011

Case Study: How Fairchild Semiconductor leverages the Workday Integration Cloud

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Workday.

The latest BriefingsDirect podcast provides a case study on how new forms of cloud-based integration are helping a major high-tech company build new relationships among and between extended enterprise business processes.

We'll examine how Fairchild Semiconductor has been an early adopter of integration platform as a service (iPaaS). The venerable Silicon Valley company has been using graphical tools to build integrations among and between far-flung applications and services, with those integration platforms housed in the newly unveiled Workday Integration Cloud. [Disclosure: Workday is a sponsor of BriefingsDirect podcasts.]

We’ll learn here from the chief technology officer at Workday what the integration cloud approach can do and how it points to a future in which broad integration capabilities are increasingly built into software-as-a-service (SaaS) applications.

This cloud-based integration model will prove far less vulnerable to the complexity, fragility, and cost that plague traditional on-premises middleware integration methods. It should also spur the evolution of services ecosystems among multiple business service providers and application providers.

Joining the conversation to dig into what makes integration as a service tick and what it means for the future are Paul Lones, Senior Vice President for Information Technology at Fairchild Semiconductor, and Stan Swete, the Chief Technology Officer at Workday. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: What's the problem now with integration? Why is this different than a few years ago? Why is it that we need to adopt a different take on integration?

Lones: We've just recently gone live with Workday, and several of their partners, and have completely transformed our human capital management landscape.

Fairchild Semiconductor has roughly 10,000 employees worldwide. We're a semiconductor manufacturing company. We have manufacturing facilities in the United States and throughout Asia. Our customer base is global, our employee base is global. Over 70 percent of our business is in Asia and 70 percent of our employees are in Asia. Having the capability to provide a core HR platform like this to that broad a set of colleagues around the world is really exciting for us, and to be able to support our internal customers and the HR group.

... Integrations are a new challenge, a broader challenge than they have been traditionally. ... In the HR arena, there has been no such thing as a standard integration. Every benefit provider, every payroll service provider that you want to work with requires a custom integration. That’s always been true, and having the set of tools that we now have at our disposal makes that a lot easier.

Companies like Fairchild are really trying to take advantage of some of the new capabilities that SaaS providers are offering. ... It's a critical enabler.

We look for two things. One, we want to find a supplier that thinks of this in a more holistic ecosystem-like way, and that has a series of application-level partners, that we can add to our overall architecture and overall application capability.

In addition to that, we look for good integration tools, because even beyond those partnerships, we still have to do a lot of integration work.

... For the Workday partners, those integrations are handled between Workday and their partners, which reduced our integration burden. We don't have to maintain those as both of those applications continue to improve. In addition to that, we've built 28 application integrations ourselves, largely to benefits providers and payroll service providers around the world.

We were fortunate enough that we were able to get some early access to the toolset that Workday is now making available to their broader customer and partner base.

I had a small team of IT staff that was completely unfamiliar with Workday when they first started, and we put them to work on these integrations. We were able to complete these 28 integrations in less than 120 days, which I think was pretty good performance. ... We do know that from an overall project implementation perspective that an on-premises application typically will take 2-3 times as long to execute, and I'd expect that the integration piece would have a similar scaling.

Gardner: Stan, what needed to change when Workday looked at this issue of your online ecosystem and how it ties things together?

Swete: We still look at it as having the same requirements for enterprise integration. Especially for hub systems like human capital management, there are a ton of other systems that you have to integrate with. So the requirements are daunting and are still there. It's been that way for a while in enterprise software.

What we see as being a cloud vendor, a SaaS vendor, is just new opportunities to leverage the SaaS model to do integration a little bit differently, have the application vendor take on more of the ownership of the integration issue, and use the fact that we've got all of our customers running on a single version of the product to tie some integration logic to that and bring more control and stability to that integration for our customers and our partners.

Gardner: Why is it then that the traditional systems, platforms, and middleware that are in place are not up to this task?

Swete: There's a split today between the technologies and platforms that are used to execute integration and convey data, and the application endpoints that are involved with and tied up in the logic of that integration.

It's not that no one is up to it; it's just that the gap splits responsibilities where maybe they don't have to be split. What we're trying to do is marry the two: use what we know about our applications to create integration logic, and then embed technology that hasn't been embedded with applications before to help with the delivery of that.

That hasn't replaced every single kind of middleware technology that you need. You still need a middleware technology behind your firewall. You still need specialized middleware technology in the cloud to do things that it does best. But, for the application-centric part of integration, application vendors can do more.

... The Workday Integration Cloud is an extension of Workday's cloud that we use to host and process our on-demand applications and it has several really important components. One is a platform component. The tools that Paul mentioned that they used to build integrations, up until today, have been there for Workday developers. The announcement makes these tools fully available to Workday customers and to Workday partners.

In addition to the tools, there is a rich enterprise service bus (ESB) execution environment that runs the results of these development tools. We offer not only the tools to build integration systems but the execution environment for the integration systems. And then we have a set of scheduling and monitoring tools that our customers can use to directly schedule and monitor the execution of their integrations.

So those three things taken together form the platform, that's part of the integration cloud. The resulting integration systems we also consider a part of the cloud. Workday for some time has been building what we call Packaged Integrations and Connectors. We have a library of those that we can make available to our customers.

Fairchild has used some of these. These integrations are built with our tooling by us and for our customers. Packaged integrations really just look like another Workday product, but they handle both ends of the integration challenge.

We also have connectors that handle our end of it, with the logic built out from there. The main example is a payroll interface product that gives our customers a starting point for hooking up Workday human capital management to the variety of international payrolls many of our larger customers have.


Packaged integrations from Workday are another component of the Integration Cloud, and the final one is the body of integrations that our customers and partners create.

These are the intellectual property of our customers and our partners. Workday does facilitate sharing of those definitions if the customers and partners are interested, but there is that growing body of integrations as well. Those things taken together are the Workday Integration Cloud.

Gardner: And just to be clear, this is designed for your customers. This isn't just a general purpose integration service that you are opening up writ large. This is about your ecosystem and your customers, is that right?

Swete: The beauty of it is that it's based on middleware from a company formerly called Cape Clear that Workday acquired three years ago. I think that's very important to mention. So it's not as if we, an apps vendor, just did our own take on an ESB. This is very solid ESB technology, well thought of by the engineering talent that we now own.

Built-in integration

We're taking this technology and integrating it into our applications, building integration into our applications, as we refer to it, and then making the combined product available to both our customers and our partners. The partners are an equally important point. Workday's systems integration partners can get access to these tools and this platform.

Gardner: And how about the pricing?

Swete: The Workday Integration Cloud platform is being made available at no additional cost to Workday customers and Workday partners. We make our money selling our application services.

Gardner: I'm intrigued by this notion of making integration part of the application. I think the history of this, Paul, has been that over the years, new applications and platforms, and even models of computing would come along. You would get great productivity from the application, you would buy and install and master the platform, and then you would be faced with an integration problem.

This has happened over and over again. We've seen it with mainframes to client-server, then into multi-tier and distributed computing, and then ultimately with the web and now cloud computing.

Given that integration has been a bolt-on, something that's been delivered after a shift in the application model, why change now? Why are integration and the application coming together now?

Lones: Part of it is that our approach to overall enterprise architecture is changing. Companies like ours and many companies working on this are moving from a monolithic internal application orientation to one that's more of a hybrid model, where we want to really take advantage of the new capabilities and the quicker pace of development and deployment of improvements that SaaS providers offer.

Therefore, integrations naturally become a critical part of that, because the number of applications that we use in our business increases somewhat with this sort of approach.

Swete: The challenge here is that the requirements and the larger problem of integration haven't changed, and there have been a lot of tools developed to address the issue. Some results have been achieved, but I don't think anyone is satisfied with how maintainable enterprise integration is. And we happen to think the answer is to build more robust integration, where the integration definitions themselves are more informed by what exists and what's changing in the application.

Hub system

That's the opportunity that we were seeing. We came on to it by just being the provider of an application that is going to be the hub system and be hooked up to a lot of different systems.

We knew that integration was going to be front and center for us as a brand new SaaS vendor six years ago. One of the differences we wanted to make was to do more about the problem. So, we started with an investment of technology.

Where that has led us is tying what can get done with integration technology to what the applications know about, everything from their security model to, in our case, the fact that we know about people and how they're organized, which we leverage a lot. So, we're able to have integration definitions that can get routed around for the appropriate approvals before certain steps happen.

That’s unique, but it's breaking down the separation between integration that would be built by one side of the company and tying it back to who it's really serving, the other side of the company.

For payroll integration, the payroll admin can be hooked into the fact that a major feed of HR data is going out to a payroll system and they can get a check on that before it happens. That’s something we’ve built in and we’ll continue to look for those opportunities. I still think it's actually early days for what our integration tools can leverage inside the application.

Gardner: So, the system of record for HR and the governance and policies about employees and their roles in the organization can now be applied pretty seamlessly to who gets to do integrations and/or how integrations as part of a business process would work. Am I reading that right?

Swete: Yeah, how they get executed, how they get approved is all built in to the same sort of system that you use to schedule a report or any other thing you’d do in your application. For us, it's just an extension of the application, rather than a hard line and then some integration technology that no one on the app side understands.

There still are differences. You still have to have experts on integration middleware and we have that, but the real benefit we think comes from blurring the distinction and marrying these things together.

... We’ve taken the approach of splitting the development tools into a framework that is geared toward developing what we call simple integrations. These are one-way data in or one-way data out of Workday to third-party systems, and we have a tool called the Enterprise Interface Builder (EIB) that a non-programmer could use. You still need to know that you are sending something to a secure FTP location, but you don't have to be a developer.

Sets of choices

We give you a graphical user interface, a selected set of choices for how you can source data, a selected set of choices for how you can transform it, and a selected set of choices for how you can deliver it. You can save that, and then you have a definition that you can schedule on a recurring basis. That's built for non-developers.

The other tool that we have has a completely different personality. It's what we call Workday Studio. This is the developer tool that we have used to build our integrations, and it is now available for our customers. But, on this one, you want to be a developer. You're not doing programming, but you are working in an Eclipse-based framework with detailed control over integration components and orchestration of how data flows. So, this is a technical development tool.

The thing it creates is the same thing that the EIB creates, an integration system that can then be executed in Workday, but the creation of it is much more technical.

Gardner: So it's interesting, Stan. You have a user like Fairchild using these tools, building these integrations, moving more toward a multiparty, ecosystem, process-oriented benefit, but the responsibility for those integrations rests with you.

It seems as if you're really giving an awful lot here. How can you do that with a strong sense of confidence? Isn't there a risk that if these integrations start breaking that you are in the catbird seat?

Levels of the game

Swete: Yeah, well, there are levels of the game for how you can leverage the support you get out of the core application that we keep moving forward. One level of the game, which is very important for us, is that the integrations we build and sell are ones that can just share the application definitions. So, we support those across all the updates and verify that the logic of those is going to work.

For the integration tools, we can put smarts into the tools that share how the applications are constructed. It gives our customers a leg up, in that they can start with these components. Then they can create integrations that are a little more impervious to being broken by changes in the applications, because they're sharing metadata back into the applications.

Lots of integrations are built on our application programming interface (API), so we've got to be rigorous about versioning the API and having a contract to support back versions, which gives us a certain amount of insurance. With some of the more open-ended tools, it's not that there couldn't be logic and coding errors put in; those are the ones we would have to encounter together with our customers, and we're not going to debug every single one of those.

So, for different levels of the game, more packaged, complete support, on up to the more open-ended integrations, you do what you can to try to make it so the integrations are a little bit more robust than what would have been built with a separate tool set.

... We also encourage people to share these integrations. We didn't need to do more to automatically support that, because our partners are going to be generating these things, as are our customers, and in the SaaS community there is just this great notion of sharing the things you do. So, we see supporting that, and we can ultimately see it even leading to selling some of the things you do. All of those are potential features for this space.

Gardner: Paul, it sounds less like a buyer-seller relationship than a partnership. Do you view it that way?

Lones: We do. Our experience to date, working with providers like Workday and some of the other SaaS providers that we are fortunate enough to do business with, is that companies like ours have more of a voice in the feature improvements of the application.

There tends to be, and certainly it's the case with Workday, a much more active community of clients, users, that are sharing information about everything from somewhat technical to very business process-oriented experiences that all of us have had. That's a very different experience.

In some ways, it's sort of ironic to me that we view it quite a bit more as a partnership. A lot of people perhaps think that it's a SaaS application and, if things don't work out, then when your contract is up, you just go find another SaaS provider.

It is true that there might be a little bit more flexibility, but what we're finding so far in our experience, and it is early, is that the receptivity and the sense of making improvements together will, I think, actually make the relationship stick longer than it might with some of the traditional software applications.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Workday.


Learning the right lessons from the Amazon cloud outage


This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Have you noticed that ZapThink’s crystal ball has been working overtime? We sounded the warnings about cyberwarfare mere days before the Stuxnet worm hit. Then we predicted the fall of enterprise architecture frameworks right before the Zachman organization imploded. Next, we heralded a secondary market for IP addresses as the IPv4 space ran out of them. Sure enough, that secondary market is now here. And last week, we warned against putting all your eggs in any one cloud provider’s basket. Sure enough, Amazon’s public cloud went belly up immediately afterward. All I can say is that if we make a prediction that will impact your business, you’d better take heed!

In all seriousness, there’s no supernatural clairvoyance at work here. What you’re seeing is the power of the ZapThink 2020 vision for Enterprise IT, which delineates the interrelationships among the numerous trends in the IT marketplace. Just as the best psychics are in reality masters at picking up subtle clues in human behavior, we’re tuning into the complex subtleties that the multiple forces of change in our midst present to us.

One of the primary insights of the ZapThink 2020 vision is that individual trends, let alone single events, should never be taken in isolation. This insight is particularly useful when a crisis like the Amazon crash presents itself.

At this point in time, we’re experiencing a backlash from this crash. People are reconsidering the wisdom of moving to the cloud, and in particular, public clouds. Perhaps the large infrastructure vendors who were warning their customers about the security and reliability issues with public clouds in order to sell more gear to build private clouds were right after all?

Not so fast. If we place the Amazon crash into its proper context, we are in a better position to learn the right lessons from this crisis, rather than reacting out of fear to an event taken out of that context. Here, then, are some essential lessons we should take away from the crash:
  • There is no such thing as 100 percent reliability. In fact, there’s nothing 100 percent about any of IT—no code is 100 percent bug free, no system is 100 percent crashproof, and no security is 100 percent impenetrable. Just because Amazon came up snake eyes on this throw of the dice doesn’t mean that public clouds are any less reliable than they were before the crisis. Whether investing in the stock market or building a high availability IT infrastructure, the best way to lower risk is to diversify. You got eggs? The more baskets the better.
  • This particular crisis is unlikely to happen ever again. We can safely assume that Amazon has some wicked smart cloud experts, and that they had already built a cloud architecture that could withstand most challenges. Suffice it to say, therefore, that the latest crisis had an unusual and complex set of causes. It also goes without saying that those experts are working feverishly to root out those causes, so that this particular set of circumstances won’t happen again.


  • The unknown unknowns are by definition inherently unpredictable. Even though the particular sequence of events that led to the current crisis is unlikely to happen again, the chance that other entirely unpredictable issues will arise in the future is relatively likely. But such issues might very well apply to private, hybrid, or community clouds just as much as they might impact the public cloud again. In other words, bailing on public clouds to take refuge in the supposedly safer private cloud arena is an exercise in futility.

  • The most important lesson for Amazon to learn is more about visibility than reliability. The weakest part of Amazon’s cloud offerings is the lack of visibility they provide their customers. This “never mind the man behind the curtain” attitude is part of how Amazon supports the cloud abstraction I discussed in the previous ZapFlash. But now it’s working against them and their customers. For Amazon to build on its success, it must open the kimono a bit and provide its customers a level of management visibility into its internal infrastructure that it’s been uncomfortable delivering to this point.

The ZapThink take

Abstractions hide complexity from consumers of technology, but if you do too good a job hiding the underlying complexity, then the abstraction can backfire. But that doesn’t mean that abstractions are bad; rather, you need different abstractions for different audiences.

The latest crisis impacted a wide swath of small cloud-based vendors, from Foursquare to DigitalChalk to EDU 2.0. These firms’ customers simply wanted their tools to work, and were disappointed and inconvenienced when they stopped working. But the end-user customer may not have even been aware that Amazon’s cloud was behind their tool of choice. Clearly, those customers wouldn’t find better visibility into the cloud particularly useful.

No, it’s the technology departments at the small vendors that require better visibility. They are the people who require management tools that enable them to gain a greater level of control over the cloud environments they leverage in their own products. Once Amazon supports such management tools, then Amazon’s customers will be better able to provide the seamless abstraction to the cloud end user, who simply wants stuff to work properly. And there’s nothing supernatural about that!


This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.





Thursday, April 28, 2011

Master IT support providers Chris and Greg Tinker's take on how integrated technical support is essential in a complex, cloudy world

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the blog.

Recent outages at Amazon Web Services and the Sony PlayStation Network have jarred the common perception of IT business as usual, but IT failures and performance snafus are nothing new; they are just much more prominent now.

Someone, somewhere got the first call on those outages -- the front line IT technical support staff. And the expanding role of cloud and the online services ecosystems that more of us depend on only point up why such IT technical support is more important than ever.

It just so happens that the importance of good and fast support is forcing technical support industry changes, with an emphasis on integration and empowerment for improving how help desks respond and perform in a spiraling crisis.

To learn more about how support is adapting to the high-impact, high-exposure cloud era, BriefingsDirect recently interviewed two lauded IT Master Technologists from HP. Part of the new support philosophy comes from providing a more centralized, efficient, and powerful means of getting all the systems involved working, and all the knowledge necessary together quickly to get applications back in action and keep them there. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

These two support stars, Chris Tinker and Greg Tinker, both HP Master Technologists, who happen to be identical twins, were chosen via a recent sweepstakes hosted by HP to identify favorite customer support personnel. Learn here why they gained such recognition, and uncover their recommendations for how IT support should be done better in a rapidly changing era of increasingly hybrid and cloud-modeled computing. The two were interviewed by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: You deal with people when they are, in some cases, in their darkest hour. They're under pressure. Something has gone wrong. They're calling you. So, you're not just there in a technical sense, which of course is important, but there must be a human dynamic to this as well. How does that work?

Chris Tinker: We become their confidant. We foster a relationship between the two parties. For us, it's very exhilarating. It's the ultimate test. You want to build not only the technical and business relationship, but also the interpersonal relationship, because you have to weigh in on so many levels, not just the technical. That's a critical component, but not the only component.

Greg Tinker: And today the customer expects a master technologist, like my brother and me, to know more than just the one thing they're asking about, because that question is going to turn quickly. For example, a customer is having an Oracle performance issue and thinks it may be disk related, but when you dig into it, you find out that it's actually an ODBC call, a networking issue. So, you have to be proficient at a multitude of technologies and have a lot of depth and breadth.

Gardner: So what does it take to be a good IT support person nowadays?

Chris Tinker: It’s simply not enough to be a technical guru -- not in today's industry. You have to have a good understanding of technology, yes, but you also have to understand the tools and realize that technology is simply a tool for business outcomes. If you're listening to the business, understanding what their concerns and their challenges are, then you can apply that understanding to their technical situation to essentially work for a solution.

Greg Tinker: Chris and I study, almost on a daily basis, to stay ahead of the technology curve. Chris and I both do a lot in SCSI I/O control logic, with respect to the kernel structure of HP-UX as well as Linux, which is our playground, if you will.

And it takes what I would call a firm foundation to provide that strong wealth of knowledge and be the customer's confidant. You can't be an expert in just one thing anymore. You can't be a network expert only. You have to understand the entire gamut of the business, so that you can understand the customer's technical problem.

Gardner: Let me congratulate you on your award. This was, I think, a worldwide pool, or at least a very large group of people that you were chosen from. Did this come as a surprise?

Greg Tinker: It was an honor, I can say that, and we are very grateful for it. Our customer installed base, as well as our peers and the management team, put our names forward. It was a great honor. ... For each vote that was cast, HP donated $10 to the humanitarian organization Care, to max out at $100,000. They met that goal in just a few days. It was quite astonishing.

Chris Tinker: And it was a surprise. ... Very rewarding.

Gardner: Okay, you've been at this for 12 and 13 years. What's changed over that period of time?

Chris Tinker: Catchphrases change. Today it's cloud computing, but cloud computing has been around for a long time. We just didn’t refer to it as cloud computing. Shared infrastructure of course is what we called it.

Virtualization today is becoming a big-ticket item, where in years past big iron was the catchphrase. Big iron meant very large computers. We still have big iron in storage, that's true. We still have that big footprint, that big powerhouse that consumes a lot of power, but that's a necessity of the storage platform.

The big thing today is converged infrastructure. These are terms you wouldn't have heard years ago. We're converging multiple types of protocols and physical media onto one medium: networking, Fibre Channel, which of course is your storage network, and TCP/IP, all going across the same physical piece of media. These are things that are changing, and with that comes an extreme amount of complexity, especially when it comes to the actual engine that drives all this.

Greg Tinker: As Chris stated, the key phrase of yesteryear was big iron: I want a big behemoth machine that can outdo a mainframe. If you look back to 1999 and 2000, what you were looking for in the open-systems world was something to compete with Big Blue.

Today it's virtualization and blades. Everybody used to say -- probably about mid-2005 -- "I want a pizza box. I want a new blade." We no longer call those blades. Those are called pizza boxes now. Today, the concept is all about blades. If you can't make the thing 3 inches tall and 1 inch wide, there is something wrong.

Gardner: You've been describing how things have changed technically. How have things changed in terms of the customer requirements and/or the customer culture?

Chris Tinker: The expectation is more for less. They want more computing power. They want more IT for less cost, which I think has been true since day one, but today that "more for less" just means more computing power. The footprint of the servers has changed.

And two, the support model has changed. Keep in mind, we're in support, and we're seeing a trend where customers with all these physical servers, and the support contracts on all those servers, are consolidating down to one physical server with virtual instances.

The support model of yesteryear doesn’t always fit the support model that they should have today.

Greg Tinker: What Chris is talking about there is consolidation efforts. Customers used to have 500 servers. Today -- and I want to exaggerate my point here -- they have one or two behemoth physical machines virtualizing 500 guests.

That model works for consolidating the cost of the infrastructure, so your capital cost is less, but the problem now becomes the support model. Customers tend to reduce the support as well, because there's less infrastructure. But keep in mind, customers often forget that they've put all their eggs into one basket, and that basket needs a lot of protection.

So now you have your entire enterprise running on one or two pieces of physical hardware that are grossly complex: not only the virtual servers, but the virtual Ethernet modules and the Fibre Channel modules are now basically one fabric running every protocol type, whether that's InfiniBand, Gigabit Ethernet, Fibre Channel, or something else. That complexity requires a great deal of support.

When a customer calls up and says, "We've made a change in our environment and my server has crashed, the physical server went down, or it has lost access to its storage or network," you're not just affecting that one physical server; you're affecting hundreds. So, the support model today has to be quick.

Gardner: It sounds to me that there is a higher risk profile. Is that a fair characterization?

Hardware redundancy

Greg Tinker: That would be a fair characterization. There is a higher risk on the hardware end in the sense that you still have hardware redundancy, of course, but you're fully dependent upon cluster technology and complexity.

Chris Tinker: A good solution design and a business risk assessment are still critical components of your solution.

Gardner: I'm going to guess that over the past several years, in the tradeoff between cost and risk, people have probably favored the cost side a bit. So, that means the people in your position are the backstop?

Greg Tinker: That’s what the trend is becoming. The trend is, "We're going to reduce our cost in the CAPEX and reduce our cost in the infrastructure. We're going to consolidate and virtualize that concept, and we are going to look at our support strategy in a different light." That’s what most customers think.

Gardner: What is that new light?

Greg Tinker: The new light today is that customers are focused more on the higher-end support models, meaning a four-hour call to repair, where it used to be 24-hour or 48-hour support models and we were not in a huge rush. If we had a disk drive failure, we had plenty of time to fix it, because we had full redundancy.

Today, with all this consolidation effort, it becomes a real critical need when you have a failing component, whether it be hardware or software, to get that component addressed urgently. You don’t really have the time.

Chris Tinker: That's a great point. Under that standard support model, you had so many physical servers, and your business was interlaced across those systems, that you could handle an outage, whether a software or a hardware condition. The impact wasn't as severe as in today's virtualized environments, where an outage carries a much heavier business impact.

To Greg's point, this support model used to work with some of these virtualized environments -- I'm not saying all virtualized environments, but some of them. With four-hour call-to-repair, you can imagine what's required in four hours. The technologists who answer the phone first have to address the business concerns, figure out what the business impact is, and understand what the problem is.

Once we've ascertained the cause and the problem has been defined, we have to figure out what's going wrong with the technology in order to bring it back online. All of that has to be done within four hours on some of our most critical contracts.

Gardner: You're sorting through implementations with loads of vendors involved. When it comes to this sort of a mission-critical situation, they're probably thankful that there's someone there trying to corral this. So, I imagine the cooperation is pretty high in these circumstances?

Stakes are high

Chris Tinker: Yeah, the stakes are high at this level. You're talking about not only the corporation -- the customer -- but also the vendors, whether HP or third parties, and we partner with all of those vendors. Everybody has a stake in the game. Essentially, their reputation is on the line.

So we partner, regardless. As we don’t want to be thrown under the bus, we don’t throw anybody else under the bus. We partner. We come together as one throat to choke or one hand to shake, however you want to look at it. But, essentially, we all have the same thing in common, the customer’s well being.

Greg Tinker: I'll second Chris' sentiment, in the sense that when we're engaged at our level, it's no longer a finger-pointing game. It's a partnership, regardless of who the customer is. If it's HP gear, so be it. If it's somebody else's gear and we see where the problem is, we don't point the finger. We ask the customer to get their vendor on the bridge with us, and we work as a team to get the business restored, because that's priority one.

Chris Tinker: That's HP technical support. That's what we thrive on. It's one of our charters. Our management has made it clear that they want a team effort, a global effort.

Gardner: How did you both get involved with this? Did one get into it first and the other follow? What's the story behind how you ended up here?

Lengthy road

Greg Tinker: It was quite a lengthy road. Chris and I agreed many years ago in school that one of us would go in one direction and the other in another, and we'd see who enjoyed the industry more. Chris joined HP and fell in love with it. He and I both have very strong Linux backgrounds. Then I jumped ship and joined my brother Chris, and we've been with HP ever since and have loved it dearly.

Chris Tinker: We look at IT support as a ladder and we just climbed that ladder. We started in mission-critical support and found it to be exhilarating. With mission-critical support you're talking about enterprise-class corporations. We're not talking about consumer products. We're talking about an entire corporation's business running on an IT solution and how we're engaged in that process.

Unfortunately, in our line of work, we do see customers where the technology did not go as planned, predicted, or expected, and it's up to us to figure out what the expectations were and ascertain whether the technology can deliver them. That's how we moved up through support.

We started off as mission-critical support specialists. We became architects, designing solutions for corporations, found out that we were very good at escalations, and that's where we are today.

Gardner: There have also been some shifts over the past dozen years or so in the degree to which remote support is possible and your ability to get inside and get that information. Maybe we could take a moment to learn more about what tools have been brought to bear to help you with this?

HP virtual room

Chris Tinker: The HP Virtual Room (HPVR). If you go to rooms.hp.com, it’s a good example. As you just mentioned, yesteryear it was, "Hey, send me the logs. Send me the examples. Send me some data, and I'll parse through it and figure it out." You had to wait for data to come in and then start parsing those logs, parsing that data, and building your hypothesis of what might be the problem.

Now, imagine if I were able to take that in real time. So, Greg, talk about real time.

Greg Tinker: Real time is key in today's technology world. Nobody wants to wait. Take your phone, for example. Doesn't it drive you crazy when you press the email button and your phone takes more than three seconds to load it? Everybody gets annoyed when it's slow. Well, the same is true in technology services support.

When customers call in, they expect immediate response. By the time it gets to our level, where Chris and I sit and our team resides inside the support model, the customer is in dire straits. We use the Virtual Room technology. It's similar to WebEx.

There are a lot of similar tools out there; different vendors have different ones. We use the HP Virtual Room toolset, and we can jump onto any machine in the world, anywhere in the world, at a moment's notice. We can do crash analysis on a Linux kernel crash in real time on a customer's machine. The same goes for HP-UX, Solaris, AIX -- name your favorite.

We can look at these stack traces and actually find the most likely component that compromises the infrastructure. We can find it, isolate it, and remedy it.

Chris Tinker: It's not just us troubleshooting; it's bringing our peers to bear. It's teamwork, a two-heads-are-better-than-one mentality. Greg even lived that first. At the end of the day, you've got 2, 4, or 20 people on the phone. You can imagine all of those people sharing the same desktop at the same time to look at a problem. You get all these different levels of expertise.

You're able to take all these talents and focus them on one scenario. With a four-hour call-to-repair commitment, how is that even possible? It's possible because we bring those people in and partner with them -- not only HP employees and HP technical support but, going back to vendors and those relationships, the vendors themselves. We bring them into the same Virtual Room, show them where we're seeing the problem, and ask what we need to do to solve it.

Gardner: While we are on the subject of tools, what's coming next? If I were to design these types of tools, you would be the guys I would go to, to get my list of requirements. What are you asking for?

Greg Tinker: The biggest thing we see today is storage. The growth rate of storage is enormous. And the biggest problems customers run into are performance and capacity.

Capacity is the easy one, right? I am 100 percent full in my file system. I just need more. That's the easy one to fix.

The hard one to fix is, "My application is not running the way I want it to. Fix it." Those are the difficult ones. We have to have a lot of tools to help us understand what the load conditions are, because it's no longer the yesteryear scenario of one big behemoth machine -- a Superdome in an HP rack with four terabytes of memory and 400 CPUs -- loading up one storage array. That's no longer the case.

We have grid computing structures of 600+ nodes running a multitude of different things -- SAP, Oracle, Informix, Exchange, etc. All of these different load-bearing concepts are coming into one monolithic storage array. It can become quite daunting to understand what's causing that load condition, and we have a lot of tools today that are helping us ascertain the root of those problems faster.

Chris Tinker: We have become the bleeding edge of technology. Essentially, we use software that hasn't been released and tools that are not yet production-ready -- and some tools we can't even speak about.

Business realities

But these are tools that will be in the enterprise eventually. They will be out in the world eventually. You asked earlier what we see coming down the road. Imagination is essentially the only limit in technology. In today's world there are other factors, of course -- business realities temper the development of technology -- but it's going to be very exciting to see what technology is being developed and what's coming next.

Gardner: I wonder if you have some last advice for those listening to the podcast on how they, on the consumption side, might help folks like you on the services and support delivery side do your job better? What advice do you have for them in order to have a better outcome?

Chris Tinker: Yeah, it's being able to articulate the actual problem at hand and the challenge you have with your technology, because keep in mind that technology -- IT -- is nothing more than a tool that the business uses to achieve its outcomes and requirements.

Then, have metrics around the environment. They have to have a baseline -- an understanding of what the technology has been doing.

Trending is key

Greg Tinker: Trending is key in a lot of these new virtualized, consolidated environments. You need to have a baseline, as Chris stated, and we need the performance characteristics. Take logging. ESX is about as common as sliced bread in a grocery store -- ESX environments are very common and very highly thought of. I enjoy them; they're very nice.

Customers are starting to move toward ESXi, which is fine, but out of the box ESXi doesn't keep its logs -- it does log, but you only get about a two-hour history. The point is that customers take that logging for granted. You have to have logging enabled, and you must keep at least a six-month trend.

You don't have to keep all your logs on your servers forever, but a six-month trend is very helpful when a mysterious problem shows up. Then we can compare yesterday to today and see what differences have appeared in the environment.
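
As a rough illustration of that retention advice -- a minimal sketch, not an HP or VMware tool -- the Python script below prunes a central syslog archive down to roughly a six-month window. It assumes the ESXi hosts already forward their logs to a central syslog server, and the LOG_ROOT path and one-file-per-host-per-day layout are invented for the example.

    #!/usr/bin/env python3
    """Prune a central syslog archive to roughly a six-month window.

    A minimal sketch, assuming ESXi hosts already forward their logs to a
    central syslog server and that LOG_ROOT below is where that server
    archives them (both are assumptions for illustration only).
    """
    import os
    import time

    LOG_ROOT = "/var/log/esxi-archive"   # hypothetical archive location
    RETENTION_DAYS = 183                 # roughly six months, per the advice above


    def prune_old_logs(root: str, retention_days: int) -> int:
        """Delete archived log files older than the retention window."""
        cutoff = time.time() - retention_days * 86400
        removed = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)
                    removed += 1
        return removed


    if __name__ == "__main__":
        count = prune_old_logs(LOG_ROOT, RETENTION_DAYS)
        print(f"Pruned {count} files older than {RETENTION_DAYS} days")

Run nightly from cron, something along these lines keeps the trend data Greg describes available without letting the archive grow without bound.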

Gardner: It comes down to data, having the data at your disposal.

Chris Tinker: Not just data, but having a baseline. We get a lot of calls where customers have no idea of what the environment was doing before. They say, "We're having a problem now. Our users are complaining." We ask, "How did it used to run? How long did this job used to take? Did it use to take 2 hours, and now it takes 20 hours?" A lot of times, they simply do not know.

I wish customers understood that logging is critical. You don't have to keep it forever, but keep it for a strategic period of time. Six months is a good number.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the blog.


Friday, April 22, 2011

Cloud brokering: Building a cloud of clouds


This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

The essence of cloud computing—what makes a cloud, well, cloudy—is the fact it's an abstraction. The cloud hides underlying complexity while presenting a simplified interface to the consumer. Of course, abstractions are nothing new in the world of IT: compiled languages, graphical user interfaces, and SOA-based Business Services are all examples of abstractions. After all, everything we're doing in IT boils down to zeroes and ones in the end. Layers of abstraction are how we deal with this never-ending stream of bits.

The business service abstraction in the SOA context provides flexible, loosely coupled access to application functionality and data. The cloud abstraction, on the other hand, delivers a shared pool of configurable computing resources of various types (processors, storage, etc.) that can be dynamically and automatically provisioned and released. The two approaches solve different problems, but nevertheless both simplify the underlying technical complexity while providing greater agility to the consumer of the respective abstractions.

Another critical benefit of both abstractions is increased fault tolerance. If something goes wrong beneath the abstraction, then it should be possible (at least in theory) to fail over to a backup or route around the problem without adversely impacting the consumer. In the case of business services, the intermediary (typically an ESB or XML appliance) handles this routing, while the underlying cloud provider infrastructure handles failover within the cloud.

That is, unless the problem is with the cloud provider itself. It doesn’t matter how resilient your provider’s infrastructure is if they go out of business, or a denial of service attack takes them off the Internet. Think of cloud providers as baskets. Do you really want to put all of your eggs in just one?

Enter cloud brokering

Cloud brokering is the capability that addresses this eggs-in-one-basket problem. A cloud broker provides cloud service intermediation, aggregation, and arbitrage across a set of cloud providers. The need for such cloud brokers, of course, is not lost on the community of cloud startups. Today, if there's even a hint of a niche, you'll find several entrepreneurs jumping on it, and the nascent cloud broker market is no different. However, there is a twist to the current state of the cloud broker market: as far as I can tell, all the players in this space today include cloud brokering as an extension of their existing business model, rather than a pure play model in its own right.
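
To make the intermediation and arbitrage idea concrete, here is a minimal, hypothetical Python sketch -- not any vendor's actual API -- of a broker that tries the cheapest provider first and fails over to the next if one is unreachable. The CloudProvider interface, the ProviderUnavailable exception, and the price-per-hour pairs are all invented for the example.

    from abc import ABC, abstractmethod
    from typing import Dict, List, Tuple


    class ProviderUnavailable(Exception):
        """Raised when a provider cannot be reached or refuses the request."""


    class CloudProvider(ABC):
        """Hypothetical provider interface; real cloud APIs differ widely."""

        def __init__(self, name: str):
            self.name = name

        @abstractmethod
        def provision(self, template: Dict[str, str]) -> str:
            """Provision an instance from a template and return its ID."""


    class CloudBroker:
        """Intermediation plus simple arbitrage: try the cheapest provider
        first, and fail over to the next one if it is unavailable."""

        def __init__(self, providers: List[Tuple[float, CloudProvider]]):
            # (hourly_price, provider) pairs, sorted so the cheapest is tried first
            self._providers = sorted(providers, key=lambda pair: pair[0])

        def provision(self, template: Dict[str, str]) -> str:
            errors = []
            for price, provider in self._providers:
                try:
                    instance_id = provider.provision(template)
                    print(f"Provisioned on {provider.name} at ${price:.2f}/hr")
                    return instance_id
                except ProviderUnavailable as exc:
                    errors.append(f"{provider.name}: {exc}")
            raise RuntimeError("All providers failed: " + "; ".join(errors))

Aggregation -- spreading the eggs across several baskets at once, or replicating data between providers -- would layer naturally on top of the same interface.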

In fact, most of the vendors offering cloud brokering are in the cloud management space. RightScale and Kaavo, for example, provide template-based cloud deployment. Build the template, and the tool will deploy your fully configured cloud instance in any of a number of cloud environments by following the template. CloudSwitch takes the template idea down a few notches to layer two of the OSI stack, which means your cloud instances will be identical down to the IP addresses and even the MAC addresses, independent of the cloud environment. A fourth player worth mentioning is enStratus, who touts cloud independence as part of cloud governance.

There is another angle on the cloud brokering marketplace, however: as an extension of the cloud storage/sync market. This niche is already quite crowded, with players like Dropbox, Jungle Disk, Box.net, Wuala, and several more. A closely related market niche is the cloud backup market, featuring vendors like Mozy, Backblaze, Carbonite, CrashPlan, and Livedrive, to name a few. It’s not clear, however, if any of these vendors support cloud brokering. Instead, they all rely upon a single underlying cloud environment for each of their offerings. The inherent fault tolerance of each vendor’s chosen cloud infrastructure may be sufficient for many users, especially in the consumer and small business segments, but enterprises may require a higher degree of resilience.

One vendor, however, has apparently carved out a niche for themselves: Oxygen cloud. Oxygen cloud focuses on cloud-based sync and shared storage, but they have also taken the extra step to build cloud brokering into their offering. As a result, customers who want the benefits of sync and storage in the cloud without having to rely on a particular cloud provider have few if any options other than Oxygen cloud.

The ZapThink take

The ability to select among several public clouds is only one benefit of cloud brokering. It also supports the ability for an organization to move application instances or data between private and public clouds. In other words, cloud brokering is at the heart of dynamic hybrid clouds.

When we talk about the various cloud deployment models—public, private, community, and hybrid—it's the hybrid model that elicits the most head scratching. People wonder under what circumstances it would ever be worth the trouble to mix private and public clouds together. And they have a point: hybrid clouds sound like a huge hassle. Without cloud brokering, managing a hybrid cloud may be more trouble than it's worth.

Cloud brokering, however, abstracts out the deployment model altogether, creating what we might even call a “cloud of clouds.” From the perspective of the consumer, all clouds might as well be hybrid clouds, because the decision whether to leverage on-premise or off-premise resources is simply part of the dynamic provisioning benefit of the cloud of clouds. The notion of a cloud of clouds that brokering enables, however, is a temporary phenomenon. Today we require visibility into the selection of individual cloud providers. Tomorrow, the brokering-based cloud of clouds will simply be the cloud.

ZapThink has no business relationship with any of the vendors mentioned in this ZapFlash. We’re simply calling ‘em like we see ‘em.


This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.
