Sunday, July 8, 2007

SOA demands a new 80-20 rule for application development

Service-oriented architectures (SOAs) are not only forcing a new way of looking at how to construct applications and composite services -- SOA is now changing the meaning of what custom applications are and how they will be acquired. And this change has significant ramifications for IT developers, architects and operators as they seek a new balance between "packaged" and "custom" applications.

As more applications are broken down into modular component services, and as more services are newly created specifically for use and reuse in composited business services, the old 80-20 rule needs to make way for a new 80-20 rule.

The old 80-20 rule for development (and there are variations) holds that 80 percent of the time and effort of engineering an application goes into the 20 percent requiring the most customization. Perhaps that's why we have the 90-10 rule, too, which holds that 90 percent of the execution time of a computer program is spent executing 10 percent of the code!

SOA skews the formula by making more elements of an application's stated requirements readily available as a service. As those services are acquired from many sources, including more specialized ones (from third-party developers or vendors), a funny thing happens.

At first the amount of needed customization is high -- maybe 80 percent (a perversion of the old rule) -- either because there are not many services available, or because the services are too general and not specific to a specialized vertical industry or niche function.

Then, over time, with investment, the balance shifts toward the 50-50 point, and reuse forms a majority of a composite application or business process, even for highly specialized applications. These composited functions then become business-focused service frameworks, to then be reused and adjusted. Architects who gain experience within business niches and verticals, and who build such frameworks, can make significant reuse of the services.

They, and their employers, enjoy tipping points where the majority of their development comes from existing services. The higher they can drive the percentage of reuse, the more scale and productivity they gain. They become the go-to organization for cost-efficient applications in a specific industry, or for specialized business processes.

These organizations benefit from the new 80-20 rule of SOA: The last 20 percent of customization soon takes only 50 percent of the total time to value. The difference from 80 percent is huge in terms of who does SOA best for specialized business processes. And it makes the decision all the more difficult over how to best exploit SOA: internally, or via third parties, integrators, or vendors.
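To make the arithmetic behind the two rules concrete, here is a toy model in Python. The 16x and 4x custom-work penalties are my own illustrative assumptions chosen to reproduce the ratios in the post, not measured data:

```python
# Toy model of the old vs. new 80-20 rule for development effort.
# All numbers are illustrative assumptions, not measured data.

def time_to_value(custom_share, custom_cost_ratio=16.0):
    """Total relative effort for a project.

    custom_share: fraction of requirements needing custom work (0..1)
    custom_cost_ratio: how much more effort a unit of custom work takes
        versus a unit of reused/packaged work. The default 16x reproduces
        the old rule: 20% of the work consuming 80% of the time.
    """
    reuse_share = 1.0 - custom_share
    return custom_share * custom_cost_ratio + reuse_share * 1.0

# Old rule: the 20% custom share consumes 80% of total effort.
old = time_to_value(0.20)                         # 0.2*16 + 0.8 = 4.0
print(round(0.20 * 16.0 / old, 2))                # → 0.8

# New rule: services maturity shrinks the custom-work penalty (here to
# 4x), so the same 20% custom share takes only 50% of total effort.
new = time_to_value(0.20, custom_cost_ratio=4.0)  # 0.2*4 + 0.8 = 1.6
print(round(0.20 * 4.0 / new, 2))                 # → 0.5
```

The interesting lever is `custom_cost_ratio`: as service frameworks mature within a vertical, that penalty falls, and the last 20 percent of customization stops dominating the schedule.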

It used to be that the large packaged applications vendors, like SAP, Oracle and Microsoft, used similar logic to make their vast suites of applications must-haves for enterprises. They exploited reusable objects and components to create commodity-level business functional suites that were soon deemed best bought and loaded, and not made, company by company, across the globe.

But with SOA the same efficiencies of scale and reuse can be brought to much more specific and customized applications. And those applications, if implemented as web services, can be ripped up, shifted, mashed up, adjusted and changed rapidly, if you know enough about them. The flexible orchestration of loosely coupled services means that development teams can better meet the business's changing needs.
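What "loosely coupled" buys you can be sketched in a few lines. The service names and payloads below are invented for illustration; the point is that each step agrees only on the message shape, so any step can be swapped for a different provider without touching the others:

```python
# A minimal sketch of loosely coupled service orchestration.
# Service names and payloads are hypothetical, for illustration only.

def credit_check(order):
    """Stand-in for a third-party credit-scoring service."""
    return {**order, "credit_ok": order["amount"] < 10_000}

def inventory_check(order):
    """Stand-in for an internal inventory service."""
    return {**order, "in_stock": True}

def fulfill(order):
    """Stand-in for a fulfillment service."""
    ship = order["credit_ok"] and order["in_stock"]
    return {**order, "status": "shipped" if ship else "held"}

def orchestrate(order, steps):
    """Run an order through a pipeline of services. Steps share only a
    message contract (here, a dict), so replacing one implementation
    does not ripple into the rest of the composite process."""
    for step in steps:
        order = step(order)
    return order

result = orchestrate({"id": 1, "amount": 2_500},
                     [credit_check, inventory_check, fulfill])
print(result["status"])  # → shipped
```

Swapping `credit_check` for a different vendor's service means changing one entry in the `steps` list -- which is the rapid "rip up and adjust" quality the paragraph above describes.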

A big question for me is which development teams will benefit most from the new 80-20 rule of SOA. After attending the recent IBM Innovate conference in May, it's clear that specialization in SOA-related business processes via services frameworks will change the nature of custom application development. IBM and its services divisions are banking that the emerging middle ground between packaged applications and good-old customization opens up a new category they and their partners can quickly dominate with such offerings as WebSphere Business Services Fabric.

If IT departments inside of enterprises and their developer corps cannot produce such flexible, efficient services-driven business processes, their business executives will evaluate alternatives as they seek agility. Such a market, in essence, forces a race to SOA proficiency and economy. That race is now getting under way, pitting traditional application providers, internal custom developers, vertical application packagers, and systems integrators against one another.

Even as businesses examine services-oriented efficiency and SOA benefits, they will also need to consider who owns the innovation that comes when you develop a service framework that enables a vertical business process. It would be clearly dangerous for any company to outsource its process innovation, and to lose intellectual property control over its core differentiating processes, either to a consulting firm that would develop the customizations, or to a software vendor that would productize it as a framework.

Executives will need to balance the enticements of outside organizations with a powerful SOA arsenal against the need to innovate in private, and to keep those innovations inside. They must be able to use what's available on an acquired basis to compete -- and which benefits from a common structural approach of highly granular reuse -- but not so much that they lose their unique competitive advantage.

And this brings us back to the new 80-20 rule for SOA, which also holds that companies need to retain 80 percent control over the 20 percent of the business processes that make them most competitive in their own markets. The move to a services environment via outside parties alone risks violating this rule. Therefore, internal IT must advance in SOA proficiencies as well, if only to be able to keep up with the third parties that will be servicing the competition.

We really have entered a race to SOA benefits and efficiencies. Time to lace up your running shoes once again.

Friday, July 6, 2007

Microsoft ups ante on GPLv3 in apparent challenge to license's reach

Any attempt to tag Microsoft as being "it" on its support coupons for Novell SUSE Linux, and therefore as now falling under the terms of the new open source license, will apparently be fought ... or at least sidestepped.

Despite assertions that the new GPLv3, released June 29, closes loopholes in GPLv2 and therefore binds Microsoft as a Linux distributor of some kind, the dispute appears headed toward legal -- not to mention PR -- brinkmanship.

Microsoft said Thursday that it is not affected by the new license, even if its partner Novell and the users of SUSE Linux are. Can't touch me with that license business, they seem to be saying.

"In fact, we do not believe that Microsoft needs a license under GPL to carry out any aspect of its collaboration with Novell, including its distribution of support certificates, even if Novell chooses to distribute GPLv3 code in the future. Furthermore, Microsoft does not grant any implied or express patent rights under or as a result of GPLv3, and GPLv3 licensors have no authority to represent or bind Microsoft in any way," said Horacio Gutierrez, Microsoft's vice president of intellectual property and licensing in a statement.

One side says the license binds, the other says it does not. Between any resolution sits months or years, perhaps millions of dollars and hordes of what my son calls Lawbots on Toontown. Maybe we should just throw gags at the lawyers and they will blow up and go away.

Or perhaps this sets up a meaty legal challenge by the Free Software Foundation (FSF) to Microsoft's bevy of deals with open source support providers. It could blow up into an outright challenge of open source licenses writ large -- or conversely become a flanking challenge to intellectual property law as it applies to any software. The stakes could be quite high if this escalates into an industry-wide legal event. Microsoft is no stranger to long legal tussles, for sure.

In any event, Microsoft has achieved its near-term goals: Confuse the market and insert doubt about Linux and GPL usage, drag lawyers into every rack-mounted closet, increase the perceived risk of using open source code. Even if Microsoft has no legal footing (and I have no idea), they win on the perception clause.

Why? We can go from here only to more obfuscation and FUD. Let's say FSF vigorously challenges Microsoft, and Microsoft, of course, aggressively resists. Or let's say FSF does nothing and Microsoft, of course, aggressively resists. Or let's say Microsoft doubles down and not only aggressively resists but also ups the challenges in legal mumbo-jumbo press release volleys.

Guess what Joe Procurement Officer at Enterprise XYZ will say: "Play it safe. Wait. Pay Microsoft for Windows and pay Microsoft for Linux, too. Next."

Maybe enterprises should just break the Microsoft-defined rules and then count on a Presidential commutation of any penalties.

And like I said the other day, SaaS is the real worry for Microsoft anyway. The open source challenge has become a worldwide force that no U.S. legal wrangling can mute. My survey on the topic so far bears out Microsoft's perceived downside risk.

Thursday, July 5, 2007

Reuse of code is the key to keeping IBM from being a 500,000-employee company

Nice piece in The New York Times today about how IBM is again re-inventing itself as the "blended IT provider." Those are my words. IBM is seeking the means to blend talents, technologies, locations, approaches, people-process, and -- above all -- applications and services.

After attending a few IBM conferences over the past two months, I have a pretty good sense of what the new IBM is all about. I think they have a strong and correct vision; that success nonetheless depends on extremely good global execution across many disciplines; that time is short; and that it will be quite hard for nearly any other IT provider to emulate IBM if it succeeds and develops a sizable lead in crucial markets like health care.

What The Times story did not address is the essential role that SOA and code reuse will play in this vision and its execution. IBM and most large IT providers (and users!) recognize: They cannot just throw more people at the problem (even if they could get them). Enterprises need to rush to find the right blend of wetware and flexible software services, and then aggressively reuse the software.

As IT solutions become highly customized and depend increasingly on deep knowledge of industry verticals and individual companies -- "people as solution" just won't scale. Any and all technology options should be on the table. Costs should be managed for long-term ROI. Locations of the assorted workers and the IT itself should be as flexible as possible, neither remote nor on-site by default.

Once these productivity adjustments are in full swing, the deciding factors for success will be about how smartly software components and networked application and data services are defined, exploited, leveraged and extended. The 80-20 rule will have great bearing on the efficiencies. In my interpretation for this blog, the 80-20 rule holds that 80 percent of the cost and labor comes from the 20 percent of the applications requiring the most customization. And that's true even if 80 percent of the applications are crafted from past services, shrink-wrapped precedents and stock components.

In highly customized development, however, reuse percentages start out low -- due to the high degree of specific requirements and unique characteristics of the tasks at hand. For vertical industry custom development, it may be the 50-50 rule (half reuse and half fully custom) at best for some time. Costs will be high. But those IT providers and outsourcers willing to get in first and learn, and to share the knowledge-acquisition risk with their clients, will gain significant long-term advantage.

I can see why IBM is buying back so much of its stock.

That's because those IT providers that can boost the percentage of reuse closer to 80 percent -- even on highly vertical and specific custom projects -- to be blunt, win. As IBM, SAP, Oracle, Microsoft and HP roll up their sleeves and get their people deeply into these business-specific accounts, they will come away from each encounter richer. They will be richer in understanding the business, the industry, the problems and discrete solutions -- but they will also be richer by walking away with components and services germane to that business problem/solution set, ones that will be easily reused in the next accounts (anywhere on Earth).

The providers can also apply these components to creating on-demand applications and SaaS services for the potentially larger SMB markets that they may wish to serve, in highly virtualized and account-specific fashion. That recurring revenue may make up for the investment period in sharing risk inside those larger vertical industries. Lessons learned on-demand can be applied back to the larger enterprises. And so on.

The growing library of code and services assets that blossoms for these early-in blended IT providers will ease the need to scale the people over time. Expertise and experience on a granular, "long tail" basis coupled with highly efficient general computing fabric will be the next differentiating advantage in IT and outsourcing. Reuse will solve both the people problem and the business model problem. This, incidentally, has SOA written all over it. Grid and utility computing with myriad hosting options also support this vision quite well. Get it?

Lastly, who controls and owns the libraries of business execution knowledge is a huge issue. Individual companies should and will want to own their business logic, even patent or protect it legally. They may want their blended IT provider to help them develop and create this intellectual property, and remain assured that the client always owns it. None of this "my IP" popping up in China business, either.

There will be tension between what the blended IT provider owns and what the client business owns. Knowing where the demarcation point is that separates the provider's intellectual property from the client's will be an area for especially careful consideration -- as early in the project as possible. IT providers and clients will need to partner more than tussle. IBM gets this and is already doing the "play nice" vendor dance. There are still some steps to learn here for Oracle, Microsoft and SAP.

These boundary and ownership issues will be worked out such that the businesses can keep the business logic and innovation jewels (and compete well by them for a long time), while also allowing the IT provider to take away enough knowledge and code assets to make its initially costly, people-laden work worth the slog. An amenable bridging of these concerns will ultimately lower costs and speed of execution for all.

In those areas -- and they will be large and varied -- where ownership and control are hard to agree upon ... open source licenses on the services and applications within the boundary zone of ownership make great sense. It will be hard to draw a fine line between client assets and provider assets, so install a mutually beneficial grey area of open source "ownership" between them. That's why developments like GPLv3 are so important. Open source will apply to applications, components, and services far more than to platforms and runtimes in the future.

In this "blended IT provider" environment, the providers that can be trusted to leave the jewels alone may end up getting the better spoils of general reuse efficiency. And therefore become the low-cost, high-productivity provider in myriad specialized field (what makes you special?), and energize their own global partner ecologies, to boot.

Yes, there is indeed a new brass ring in the global IT business, and IBM has its eye on it.

Windows losing ground to Linux clients -- how far off can the servers be?

Martin LaMonica at CNET has a blog post on a new Evans Data survey that shows erosion in developer allegiance to Windows client ports.

This makes for some interesting reading, along with the comments. I wonder who, if anyone, sponsored the survey.

[Addendum: SAP seems to be reaching the same conclusion, blogs Phil Wainewright.]

I can't vouch for the survey integrity, but the findings jibe with what I'm seeing and hearing -- only I think the erosion trend is understated in this survey, especially if global markets are factored.

I think too that this represents more than a tug away from Windows purely by Linux clients, and may mask the larger influence of open source more generally, as well as virtualization, RIAs, Citrix, OS X, and SaaS.

The impact of SaaS in particular bears watching as more ISVs adopt on-demand approaches as their next best growth opportunity, even though they will still have a Windows port. SaaS, not Linux, is the slippery slope that will affect Microsoft most negatively over time. Go to SaaS fully and forget the client beneath the browser.

Those ISVs that remain exclusive to rich Windows clients for deployment won't stay that way for much longer, even as they continue to use Microsoft tools and servers. I wonder what percentage of ISVs go to market ONLY via Windows rich clients, and how that has tracked over the past five years.

If ISV developers can satisfy the need for being on those Windows clients via the browser or RIAs alone, and as more users look to browsers as the primary means of application access, the need to use Windows on the development and deployment side could sizably slide, too. It's the same threat Netscape represented more than 10 years ago ... it just took some time. Pull the finger from the dike ... and the gusher becomes a flood.

SOA has an influence here too. When any back-end platform can be used just as well to support the services that can be best integrated, orchestrated, and extended into business processes -- watch out. Choices on deployment for consolidation and virtualized environments move swiftly to a pure cost-benefit analysis -- not based on interdependencies. Windows on the server will have to compete with some pretty formidable deployment options.

When such a tipping point would occur -- whereby the browser or standards-based approach to clients/mobile/services grows such that the back-end choices shift to low-cost, best-of-breed alone -- remains an open question. When you break the bond to the client, however, how far off can the server choices be from shifting fully to what's most economically attractive?