Monday, July 9, 2007

SOA and SaaS convergence produces new integration-as-a-service benefits

Read a full transcript of the discussion. Listen to the podcast. Sponsor: Cape Clear Software.

As the use of Enterprise 2.0 mashups sweeps the IT industry, the concept of converging enterprise services has expanded to hosted munging of business applications and back-office functions.

Why not extend SOA itself by embracing more integration services that help vendors, ISVs, and service providers bring more elements of business processes together, too?

The budding notion of "integration-as-a-service" allows enterprise business leaders to "shop around" for their services, regardless of hosting, and opens up the prospect for a thriving new ecology of services and integration models for mission critical activities. The advancement to SOA for many companies may well be accelerated by more choices on means of connection, both inside and outside the organization's own IT boundaries.

I recently conducted a sponsored BriefingsDirect podcast discussion with Annrai O'Toole, CEO of Cape Clear Software, on the eye-opening prospects for integration as a service. Early adopters are already outsourcing aspects of integration. The implications are staggering: Business operators and entrepreneurs create and amend complex processes and workflows through a simple point and click interface on someone else's infrastructure. Pay by the use or general subscription. Retain control over data and ID management.

Here are some excerpts:

A couple of factors are driving this. First, it’s the whole technology maturity thing. Six or seven years ago, the standards around Web services were in their infancy, and people didn’t have a lot of experience with them. Because they were young, unproven, untested, and lacking in key bits of functionality, people didn’t really want to go there. Technology is one element of it, but there are a few more important elements driving it as well.

One is a secular trend toward simplicity and flexibility. At some levels, this has been driven by trends like virtualization. Storage and processing power are being very quickly virtualized. Applications are being virtualized, with software-as-a-service on demand. There is a long-term shift by customers, who are saying, “We don’t want to own complex infrastructure anymore. We’ve been there, and done that. We want something else.”

We had an RFP come in -- and this isn’t all that unusual -- from someone looking to do a big SOA initiative. It was -- and I’m not joking -- a 111-page RFP.

Customers look at the choices available to them, and say, “Do we want to do all this big SOA integration on our own by buying these complex things, or are we prepared to look at alternatives? And, do those alternatives have any reality?” They do, and many companies are shying away from these big, complex initiatives.

You can sit in a room with a bunch of executives, both from the business and IT segments, and say, “Hosted integration is a good idea,” and they’ll know that. We’ve got some proof points around it. Most notably, one of our marquee customers in the software-as-a-service space is Workday. The PeopleSoft founders got together to rebuild an ERP application, but this time on a hosted basis.

We’ve seen two fundamental preferences here, and there are two options for what you want to host. The first option we would broadly categorize as very loosely coupled data transformation. A lot of the things that people need to solve in terms of integration problems are really data transformation. How do I take payroll information from one provider, transform it, and send it down to another provider? Most people can deal with that. Most people can wrap their head around how that can be done in a hosted manner. What’s involved there is that it’s loosely coupled and it’s data. It’s ultimately some kind of XML or it gets converted into XML somewhere along the line.
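The loosely coupled data-transformation option described above can be sketched in a few lines. This is a minimal illustration, not Cape Clear's product: the XML element names and schemas are hypothetical stand-ins for two payroll providers' formats.

```python
# Sketch of the hosted data-transformation pattern: payroll data arrives
# in one provider's XML schema and is reshaped into another's. All
# element and attribute names here are hypothetical.
import xml.etree.ElementTree as ET

SOURCE = """
<payroll>
  <employee id="e42">
    <name>Jane Doe</name>
    <grossPay currency="USD">5000.00</grossPay>
  </employee>
</payroll>
"""

def transform(source_xml: str) -> str:
    """Map the source provider's schema onto the target provider's."""
    src = ET.fromstring(source_xml)
    out = ET.Element("PayRun")
    for emp in src.findall("employee"):
        rec = ET.SubElement(out, "Worker", {"ref": emp.get("id")})
        ET.SubElement(rec, "FullName").text = emp.findtext("name")
        pay = emp.find("grossPay")
        ET.SubElement(rec, "Gross", {"ccy": pay.get("currency")}).text = pay.text
    return ET.tostring(out, encoding="unicode")

print(transform(SOURCE))
```

Because both sides are "ultimately some kind of XML," a hosted service can run transformations like this without touching either customer's infrastructure.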

The next thing is a step up from that. Now that I can get information between these things, do I want to have some orchestrations or some kind of inter-company business processes? It’s not just getting data from A to B, but it’s, “I want to get data from A to B, and then I want to call C, and when C has completed its job, then I want to call D, and when that’s complete, the whole thing is done.” That’s the next level of complexity, and it involves a more sophisticated approach. But, both of them are possible and both are in operation today. As far as what customers are going to go for, I think they’ll be happy to do data transformation initially, and when that’s really working for them, they might be prepared to take the next step and host business processes in the cloud.
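The A-to-B-then-C-then-D sequencing described here is the essence of orchestration. The sketch below uses local functions as stand-ins for the services; a real hosted orchestration would invoke remote endpoints (via BPEL or an ESB, for instance) rather than in-process calls.

```python
# Sketch of orchestration-style integration: each step must complete
# before the next is called, and each step's output feeds the next.
# The service callables here are hypothetical stand-ins.
from typing import Callable, Dict, List

def orchestrate(payload: Dict, steps: List[Callable[[Dict], Dict]]) -> Dict:
    """Run each service step in order, gating each on the previous one."""
    for step in steps:
        payload = step(payload)  # blocks until this step completes
        if payload.get("status") == "failed":
            raise RuntimeError(f"step {step.__name__} failed")
    return payload

# Hypothetical services in the A -> B -> C -> D chain.
def send_a_to_b(p): return {**p, "delivered_to_b": True}
def call_c(p):      return {**p, "c_done": True}
def call_d(p):      return {**p, "d_done": True, "status": "complete"}

result = orchestrate({"order": 7}, [send_a_to_b, call_c, call_d])
print(result["status"])  # the whole process is done once D completes
```

The sequencing logic, not the individual calls, is what a customer would be handing off to a hosted provider in this model.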

The whole idea is that they’re taking the integration burden off the customers, so that the customers can integrate their applications with Workday, without having to do any work on the customer end of the connection. This is a huge portion of the unfolding software-as-a-service story.

As we wind the clock forward, we’re going to see more customers wanting to use on-demand style applications, and wanting integration to be solved in an on-demand way. They don’t want to build all these integrations again. You can also take this one step further. We’ve seen a lot of our enterprise customers, as they think about rolling out big SOA initiatives, are saying, “Maybe, we should really model ourselves as a mini software-as-a-service to our own internal organizations.”

If we are going to virtualize storage and processing power, we want to virtualize integration. It’s not something that should be rebuilt again and again and again by different companies or different departments within different companies. Let’s really start to move to a hosted model for us, and, as you say, these can be federated in a very coherent way. What’s new now is that the underlying technologies and standards can actually support that model. So, while this model might have been a pipe dream five or six years ago, today it’s reality, and the technologies and capabilities are there to do it.

As the Web 2.0 generation gets into the enterprise, they’re going to have a very different view of how things should be done. They want it done the way that they have experienced this medium as teenagers. They’ll say, “What do you mean you can’t do it the way I want to do it?” I certainly hope that that’s the way it turns out, because we are just about due for another major innovation in the app development life-cycle.

Read a full transcript of the discussion. Listen to the podcast. Sponsor: Cape Clear Software.

Sunday, July 8, 2007

SOA Insights analysts probe Software AG's webMethods buy, wikis for SOA governance, and SOA hype curves

Read a full transcript of the discussion.

How good of a match-up was the recent Software AG acquisition of webMethods? Was it strictly a geographical sales force synergy? Or will webMethods become the de facto R&D arm of Software AG while the parent firm's legacy cash flow sustains the movement toward SOA? Are the mutual product sets well aligned to provide a fuller SOA suite offering? All of the above?

We posed these and other questions to our panel of independent IT industry analysts in a recent BriefingsDirect SOA Insights Edition roundtable podcast discussion. We also delve into the ongoing heavy-breathing between SOA and Web 2.0. Should governance be done by wikis, for example? Is this mashup of SOA and Web 2.0 a weekend dalliance? Or a love affair for life?

Lastly, we sink our collective analyst teeth into the notion that SOA -- gasp! -- is being hyped too much. Some of us actually think the opposite.

So set your SOA compass to "I" for insights and join us for another 50-minute discussion. Feel free to listen, subscribe via iTunes (search on BriefingsDirect), or peruse the full transcript.

Not a SOA junkie? Well then just take a peek at some of the highlights here.

Our analyst panel for this edition of the podcasts consists of noted IT industry analysts and practitioners Steve Garone, Joe McKendrick, Jim Kobielus, Tony Baer, and Todd Biske. I was your host and moderator.

Here are some excerpts:

On SAG-webMethods ...

I think that webMethods has been looking for an exit strategy for some time, because basically they're trying to build up their SOA platform story. The fact is that large corporate customers are going to be nervous with a $200 million company. They’re probably a lot more comfortable with a company that’s closer to one billion, if they're looking for a platform play.

One of the challenges that will be before Software AG, and I think an indicator as to whether they are successfully getting the message out to their customers, is how they handle this transition with BPM. Obviously, having an internal product is going to be a lot more attractive than having to partner for it.

It’s pretty clear that from a geographic standpoint it’s very complementary. Actually, it’s more complementary from a product standpoint than many have been willing to give credit for. Software AG ... is very strong on legacy modernization of the whole mainframe-based setup: products for development, databases, and so forth.

WebMethods is very strong on integration, BPM, and the whole SOA stack, including registries. There is some redundancy with Software AG’s products, such as the whole Crossvision Suite, but I think that from a technological standpoint webMethods is stronger on BPM, the repository, and all of those SOA components than the company that’s acquiring it. There definitely are a lot of synergies there.

So, you’re saying that webMethods is ahead of its time, and Software AG might be behind the times, and so together they are going to be on time?

This smacks of a good sales and channel match-up, and they might run webMethods as a subsidiary for some time. Then there's also this balance-sheet issue, where Software AG has recurring revenue. It’s got an old cash cow to continue to milk, and that gives webMethods an opportunity to be funded and financed -- without the vagaries of a quarterly report to Wall Street -- to pursue the larger brass ring here, which is SOA.

On SOA and Web 2.0 mashups ...

Let’s just leave the Web 2.0 definition off the table and look at the issue of any of these new activities, whether it’s social networking or rich Internet application interfaces or whether it’s taking advantage of more semantics and BPEL as a process relating to Web activities instead of just as a publishing medium. Let’s just say, "All of the above" for defining Web 2.0 and how this relates to SOA.

Gee, maybe wikis would be a good concept for how people manage their SOA services. It's sort of an open source, open collaboration approach to policy and use of services and their agreements.

Wikis and the whole Web 2.0 repertoire of collaborative tools can be very valuable in this upfront design, modeling, simulation, and shoot-the-breeze aspects that are critically necessary for design time. But runtime SOA governance really depends on clear-cut policies, designs, data definitions, and so forth that have been handed down by the policy gurus, and now are governing ongoing operations without ambiguity.

In that case, you don’t necessarily want any Joe Blow to be able to overwrite the policies and the business rules that are guiding the ongoing monitoring, management control, or security of your SOA.

I don’t know that you really want a wiki-style collaboration for governance. ... Even if you look at collaborative environments, whether it’s the large open-source projects, or something like Wikipedia, there's some hierarchy that eventually was put in place, where certain people were allowed to do commits or were designated as senior editors.

So, you always wind up with some form of governance structure around that. The area where I think wikis are going to be important in the SOA space is in the service management lifecycle or service development lifecycle. You've got companies that have to move to a service-provider model, whether it’s internally to internal consumers or externally.

It’s like an open source project. You have a broad range of contributors, but only a handful of committers who can actually commit changes to the underlying code base. So, you might have a wiki that has potentially 3,000 different contributors, but ultimately there might be a moderator or two whose job it is to periodically weed out the nonsense, and crack the wiki whip to make sure that what’s actually been posted reflects the wisdom of the crowd of 3,000 people and not necessarily the vandalism of the few who decide to just disrupt the process.
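The contributor/committer split described above is, at bottom, a role-gated write policy. This toy sketch, with illustrative names and roles, shows the shape of it: anyone can propose, but only committers can make a change stick.

```python
# Toy sketch of wiki governance with an open-source-style hierarchy:
# thousands of contributors may propose edits, but only designated
# committers apply them. Names and roles here are illustrative only.
COMMITTERS = {"alice", "moderator_bob"}

def propose(wiki: dict, page: str, text: str, user: str) -> bool:
    """Apply the edit if the user is a committer; otherwise queue it."""
    if user in COMMITTERS:
        wiki[page] = text
        return True
    # Non-committers' edits wait for a moderator to weed out nonsense.
    wiki.setdefault("_pending", []).append((page, text, user))
    return False

wiki = {}
propose(wiki, "soa-policy", "v1: anyone may read", "random_contributor")
propose(wiki, "soa-policy", "v2: reviewed wording", "alice")
```

After these two calls, the contributor's draft sits in the pending queue while the committer's version is live, which is exactly the moderation structure the panel describes.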

Web 2.0 is really HOA, Human Oriented Architecture. It is pretty much giving human beings the tools to share what’s in their minds, to share their creativity with the big wide world. SOA, Service Oriented Architecture, is about sharing and reusing all manner of resources in a standardized way. HOA, the Web 2.0, taps the most critical resource and the most inexhaustible energy supply: human ingenuity and creativity.

On SOA hype ...

There are still a lot of companies that just don’t know how to do cultural change. It’s not an easy thing to do. We hyped SOA a lot from the IT perspective, and a lot of the IT managers certainly may be growing tired of hearing about it, but haven't done anything to actually start that process of cultural change.

Is it really adopted by the business side, and do they understand what it means and how it can impact their business? If they aren't having those communications, we haven’t really changed anything, and that means they’re still open for that message to continue, and to increase.

I think the SOA hype is pretty high, but I think that it's difficult to sell to decision-makers due to two factors: 1) the degree to which cultural change needs to take place, and 2) as this decade has progressed, we’ve seen greater caution in IT departments because of shrinking budgets. So, the hype is high, but it needs to be sustained longer with messaging that’s going to be more aligned with business goals, rather than technology.

It could be that companies are being run more by the accountants -- of, for, and by the accountants -- and therefore the vision around IT is not getting through to them, and the purse strings are not opening up. Is that possible?

I think we should take an accountant out to lunch. Anyone who knows an accountant, take them out to lunch and tell them how great IT is and what SOA can do in terms of long-term efficiency and lower total costs. Bring in some of the other mega trends, such as software as a service, virtualization, and master data management. It behooves us all to educate the accountants on why IT is important, because I think they are suffering from a lack of understanding.

Another hour not wasted.

Listen to the entire podcast, or read the full transcript for more IT analysis and SOA insights. Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media IT content production and distribution.

SOA demands a new 80-20 rule for application development

Service-oriented architectures (SOAs) are not only forcing a new way of looking at how to construct applications and composite services -- SOA is now changing the meaning of what custom applications are and how they will be acquired. And this change has significant ramifications for IT developers, architects and operators as they seek a new balance between "packaged" and "custom" applications.

As more applications are broken down into modular component services, and as more services are newly created specifically for use and reuse in composited business services, the old 80-20 rule needs to make way for a new 80-20 rule.

The old 80-20 rule for development (and there are variations) holds that 80 percent of the time and effort of engineering an application goes into the 20 percent requiring the most customization. Perhaps that's why we have the 90-10 rule, too, which holds that 90 percent of the execution time of a computer program is spent executing 10 percent of the code!

SOA skews the formula by making more elements of an application's stated requirements readily available as a service. As those services are acquired from many sources, including more specialized ones (from third-party developers or vendors), a funny thing happens.

At first the amount of needed customization is high -- maybe 80 percent (a perversion of the old rule) -- either because there are not many services available, or because the services are too general and not specific to a specialized vertical industry or niche function.

Then, over time, with investment, the balance shifts toward the 50-50 point, and reuse forms a majority of a composite application or business process, even for highly specialized applications. These composited functions then become business-focused service frameworks, to then be reused and adjusted. Those architects who gain experience within business niches and verticals to create such frameworks can make significant reuse of the services.

They, and their employers, enjoy tipping points where the majority of their development comes from existing services. The higher they can drive the percentage of reuse, the more scale and productivity they gain. They become the go-to organization for cost-efficient applications in a specific industry, or for specialized business processes.

These organizations benefit from the new 80-20 rule of SOA: The last 20 percent of customization soon takes only 50 percent of the total time to value. The difference from 80 percent is huge in terms of who does SOA best for specialized business processes. And it makes the decision all the more difficult over how to best exploit SOA: internally, or via third parties, integrators, or vendors.
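The arithmetic behind the new rule is worth making concrete. The figures below are assumptions chosen only to illustrate the shift: effort is in arbitrary units, and in both scenarios the reusable 80 percent of the application costs the same 20 units.

```python
# Back-of-the-envelope illustration of the old vs. new 80-20 rule.
# Assumption: the reusable 80 percent of the app costs 20 effort units
# in both cases; only the share eaten by the custom 20 percent changes.
reuse_cost = 20.0

# Old rule: the custom 20 percent consumes 80 percent of total effort,
# so the reusable part is the other 20 percent of the total.
old_total = reuse_cost / 0.20   # 100 units overall

# New rule: the custom 20 percent takes only 50 percent of total effort,
# so the reusable part is the other half.
new_total = reuse_cost / 0.50   # 40 units overall

print(old_total, new_total)  # 100.0 40.0
```

Under these (illustrative) numbers, the same project shrinks from 100 units of effort to 40, which is why the organizations that drive up reuse enjoy such a cost and productivity edge.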

It used to be that the large packaged applications vendors, like SAP, Oracle and Microsoft, used similar logic to make their vast suites of applications must-haves for enterprises. They exploited reusable objects and components to create commodity-level business functional suites that were soon deemed best bought and loaded, and not made, company by company, across the globe.

But with SOA the same efficiencies of scale and reuse can be brought to much more specific and customized applications. And those applications, if implemented as web services, can be ripped up, shifted, mashed, adjusted, and changed rapidly, if you know enough about them. The flexible orchestration of loosely coupled services means that development teams can better meet business’s changing needs.

A big question for me is which development teams will be benefiting most from the new 80-20 rule SOA activities? After attending the recent IBM Innovate conference in May, it's clear that specialization in SOA-related business processes via services frameworks will change the nature of custom application development. IBM and its services divisions are banking that the emerging middle ground between packaged applications and good-old customization opens up a new category they and their partners can quickly dominate with such offerings as WebSphere Business Services Fabric.

If IT departments inside of enterprises and their developer corps cannot produce such flexible, efficient services-driven business processes, their business executives will evaluate alternatives as they seek agility. Such a market, in essence, forces a race to SOA proficiency and economy. That race is now getting under way, pitting traditional application providers, internal custom developers, vertical application packagers, and systems integrators against one another.

Even as businesses examine services-oriented efficiency and SOA benefits, they will also need to consider who owns the innovation that comes when you develop a service framework that enables a vertical business process. It would be clearly dangerous for any company to outsource its process innovation, and to lose intellectual property control over its core differentiating processes, either to a consulting firm that would develop the customizations, or to a software vendor that would productize it as a framework.

Executives will need to balance the enticements of outside organizations with a powerful SOA arsenal against the need to innovate in private, and to keep those innovations inside. They must be able to use what's available on an acquired basis to compete -- and which benefits from a common structural approach of highly granular reuse -- but not so much that they lose their unique competitive advantage.

And this brings us back to the new 80-20 rule for SOA, which also holds that companies need to retain 80 percent control over the 20 percent of the business processes that make them most competitive in their own markets. The move to a services environment via outside parties alone risks violating this rule. Therefore, internal IT must advance in SOA proficiencies as well, if only to be able to keep up with the third parties that will be servicing the competition.

We really have entered a race to SOA benefits and efficiencies. Time to lace up your running shoes once again.

Friday, July 6, 2007

Microsoft ups ante on GPLv3 in apparent challenge to license's reach

Any attempt to tag Microsoft as being "it" on its support coupons for Novell SUSE Linux, and therefore now falling under the terms of the new open source license, will apparently be fought ... or at least sidestepped.

Despite assertions that the new GPLv3, released June 29, closes loopholes in GPLv2, and therefore binds Microsoft as a Linux distributor of some kind, the dispute appears headed toward legal -- not to mention PR -- brinkmanship.

Microsoft said Thursday that it is not affected by the new license, even if its partner Novell and the users of SUSE Linux are. Can't touch me with that license business, they seem to be saying.

"In fact, we do not believe that Microsoft needs a license under GPL to carry out any aspect of its collaboration with Novell, including its distribution of support certificates, even if Novell chooses to distribute GPLv3 code in the future. Furthermore, Microsoft does not grant any implied or express patent rights under or as a result of GPLv3, and GPLv3 licensors have no authority to represent or bind Microsoft in any way," said Horacio Gutierrez, Microsoft's vice president of intellectual property and licensing in a statement.

One side says the license binds, the other says it does not. Between any resolution sit months or years, perhaps millions of dollars, and hordes of what my son calls Lawbots on Toontown. Maybe we should just throw gags at the lawyers and they will blow up and go away.

Or perhaps this sets up a meaty legal challenge by the Free Software Foundation (FSF) to Microsoft's bevy of deals with open source support providers. Perhaps it could blow up into an outright challenge of open source licenses writ large -- or even, conversely, become a flanking challenge to intellectual property law as it applies to any software. The stakes could be quite high; this may escalate into an industry-wide legal event. Microsoft is no stranger to long legal tussles, for sure.

In any event, Microsoft has achieved its near-term goals: Confuse the market and insert doubt about Linux and GPL usage, drag lawyers into every rack-mounted closet, increase the perceived risk of using open source code. Even if Microsoft has no legal footing (and I have no idea), they win on the perception clause.

Why? We can go from here only to more obfuscation and FUD. Let's say FSF vigorously challenges Microsoft, and Microsoft, of course, aggressively resists. Or let's say FSF does nothing and Microsoft, of course, aggressively resists. Or let's say Microsoft doubles down and not only aggressively resists but also ups the challenges in legal mumbo-jumbo press-release volleys.

Guess what Joe Procurement Officer at Enterprise XYZ will say: "Play it safe. Wait. Pay Microsoft for Windows and pay Microsoft for Linux, too. Next."

Maybe enterprises should just break the Microsoft-defined rules and then count on a Presidential commutation of any penalties.

And like I said the other day, SaaS is the real worry for Microsoft anyway. The open source challenge has become a worldwide force that no U.S. legal wrangling can mute. My survey on the topic so far bears out Microsoft's perceived downside risk.

Thursday, July 5, 2007

Reuse of code is the key to keeping IBM from being a 500,000-employee company

Nice piece in The New York Times today about how IBM is again re-inventing itself as the "blended IT provider." Those are my words. IBM is seeking the means to blend talents, technologies, locations, approaches, people-process, and -- above all -- applications and services.

After attending a few IBM conferences over the past two months, I have a pretty good sense of what the new IBM is all about. I think they have a strong and correct vision, that success nonetheless depends on extremely good global execution across many disciplines, that time is short, and that it will be quite hard for nearly any other IT provider to emulate IBM if it succeeds and develops a sizable lead in crucial markets like health care.

What The Times story did not address is the essential role that SOA and code reuse will play in this vision and its execution. IBM and most large IT providers (and users!) recognize: They cannot just throw more people at the problem (even if they could get them). Enterprises need to rush to find the right blend of wetware and flexible software services, and then aggressively reuse the software.

As IT solutions become highly customized and depend increasingly on deep knowledge of industry verticals and individual companies -- "people as solution" just won't scale. Any and all technology options should be on the table. Costs should be managed for long-term ROI. Locations of the assorted workers and the IT itself should be as flexible as possible, neither remote nor on-site by default.

Once these productivity adjustments are in full swing, the deciding factors for success will be about how smartly software components and networked application and data services are defined, exploited, leveraged and extended. The 80-20 rule will have great bearing on the efficiencies. In my interpretation for this blog, the 80-20 rule holds that 80 percent of the cost and labor comes from the 20 percent of the applications requiring the most customization. And that's true even if 80 percent of the applications are crafted from past services, shrink-wrapped precedents and stock components.

In highly customized development, however, reuse percentages start out low -- due to the high degree of specific requirements and unique characteristics of the tasks at hand. For vertical industry custom development, it may be the 50-50 rule (half reuse and half fully custom) at best for some time. Costs will be high. But those IT providers and outsourcers willing to get in first and learn, and that share the knowledge-acquisition risk with the clients, will gain significant long-term advantage.

I can see why IBM is buying back so much of its stock.

That's because those IT providers that can boost the percentage of reuse closer to 80 percent -- even on highly vertical and specific custom projects -- to be blunt, win. As IBM, SAP, Oracle, Microsoft and HP roll up their sleeves and get their people deeply into these business-specific accounts, they will come away from each encounter richer. They will be richer in understanding the business, the industry, the problems and discrete solutions -- but they will also be richer by walking away with components and services germane to that business problem/solution set, ones that will be easily reused in the next accounts (anywhere on Earth).

The providers can also apply these components to creating on-demand applications and SaaS services for the potentially larger SMB markets that they may wish to serve, in highly virtualized and account-specific fashion. That recurring revenue may make up for the investment period in sharing risk inside those larger vertical industries. Lessons learned on-demand can be applied back to the larger enterprises. And so on.

The growing library of code and services assets that blossoms for these early-in blended IT providers will ease the need to scale the people over time. Expertise and experience on a granular, "long tail" basis, coupled with a highly efficient general computing fabric, will be the next differentiating advantage in IT and outsourcing. Reuse will solve both the people problem and the business model problem. This, incidentally, has SOA written all over it. Grid and utility computing with myriad hosting options also supports this vision quite well. Get it?

Lastly, who controls and owns the libraries of business execution knowledge is a huge issue. Individual companies should and will want to own their business logic, even patent or protect it legally. They may want their blended IT provider to help them develop and create this intellectual property, and remain assured that the client always owns it. None of this "my IP" popping up in China business, either.

There will be tension between what the blended IT provider owns and what the client business owns. Knowing where the demarcation point is that separates the provider's intellectual property from the client's will be an area for especially careful consideration -- as early in the project as possible. IT providers and clients will need to partner more than tussle. IBM gets this and is already doing the "play nice" vendor dance. There are still some steps to learn here for Oracle, Microsoft and SAP.

These boundary and ownership issues will be worked out such that the businesses can keep the business logic and innovation jewels (and compete well by them for a long time), while also allowing the IT provider to take away enough knowledge and code assets to make its initially costly, people-laden work worth the slog. An amenable bridging of these concerns will ultimately lower costs and speed execution for all.

In those areas -- large and varied they will be -- where ownership and control are hard to agree upon ... open source licenses on the services and applications within the boundary zone of ownership make great sense. It will be hard to draw a fine line between client assets and provider assets, so install a mutually beneficial grey area of open source "ownership" between them. That's why developments like GPLv3 are so important. Open source will apply to applications, components, and services far more importantly than platforms and runtimes in the future.

In this "blended IT provider" environment, the providers that can be trusted to leave the jewels alone may end up getting the better spoils of general reuse efficiency. And therefore become the low-cost, high-productivity provider in myriad specialized fields (what makes you special?), and energize their own global partner ecologies, to boot.

Yes, there is indeed a new brass ring in the global IT business, and IBM has its eye on it.

Windows losing ground to Linux clients -- how far off can the servers be?

Martin LaMonica at CNET has a blog on a new Evans Data survey that shows erosion of developer allegiance to Windows client ports.

This makes for some interesting reading, along with the comments. I wonder who, if anyone, sponsored the survey.

[Addendum: SAP seems to be reaching the same conclusion, blogs Phil Wainewright.]

I can't vouch for the survey integrity, but the findings jibe with what I'm seeing and hearing -- only I think the erosion trend is understated in this survey, especially if global markets are factored.

I think too that this represents more than a tug away from Windows purely by Linux clients, and may mask the larger influence of open source more generally, as well as virtualization, RIAs, Citrix, OS X, and SaaS.

The impact of SaaS in particular bears watching as more ISVs adopt on-demand approaches as their next best growth opportunity, even though they will still have a Windows port. SaaS, not Linux, is the slippery slope that will affect Microsoft most negatively over time. Go to SaaS fully and forget the client beneath the browser.

Those ISVs that remain exclusive to rich Windows clients alone for deployment won't for much longer, even as they continue to use Microsoft tools and servers. I wonder what the percentage of ISVs that ONLY go to market via Windows rich clients is, and how that has tracked over the past five years.

If ISV developers can satisfy the need for being on those Windows clients via the browser or RIAs alone, and as more users look to browsers as the primary means for applications access, the need to use Windows on the development and deployment side could sizably slide, too. It's the same threat Netscape represented more than 10 years ago ... it just took some time. Pull the finger from the dike ... and the gusher becomes a flood.

SOA has an influence here too. When any back-end platform can be used just as well to support the services that can be best integrated, orchestrated, and extended into business processes -- watch out. Choices on deployment for consolidation and virtualized environments move swiftly to a pure cost-benefit analysis -- not based on interdependencies. Windows on the server will have to compete with some pretty formidable deployment options.

When such a tipping point would occur -- whereby the browser or standards-based approach to clients/mobile/services grows such that the back-end choices shift to low-cost, best-of-breed alone -- remains an open question. When you break the bond to the client, however, how far off can the server choices shift fully to what's most economically attractive?