Friday, July 6, 2007
Despite assertions that the new GPLv3, released June 29, closes loopholes in GPLv2 and therefore binds Microsoft as a Linux distributor of some kind, the dispute appears headed toward legal -- not to mention PR -- brinkmanship.
Microsoft said Thursday that it is not affected by the new license, even if its partner Novell and the users of SUSE Linux are. Can't touch me with that license business, they seem to be saying.
"In fact, we do not believe that Microsoft needs a license under GPL to carry out any aspect of its collaboration with Novell, including its distribution of support certificates, even if Novell chooses to distribute GPLv3 code in the future. Furthermore, Microsoft does not grant any implied or express patent rights under or as a result of GPLv3, and GPLv3 licensors have no authority to represent or bind Microsoft in any way," said Horacio Gutierrez, Microsoft's vice president of intellectual property and licensing in a statement.
One side says the license binds, the other says it does not. Between any resolution sit months or years, perhaps millions of dollars, and hordes of what my son calls Lawbots on Toontown. Maybe we should just throw gags at the lawyers and they will blow up and go away.
Or perhaps this sets up a meaty legal challenge by the Free Software Foundation (FSF) to Microsoft's bevy of deals with open source support providers. Perhaps it could blow up into an outright challenge of open source licenses writ large -- or even, conversely, become a flanking challenge to intellectual property law as it applies to any software. The stakes could be quite high if this escalates into an industry-wide legal event. Microsoft is no stranger to long legal tussles, for sure.
In any event, Microsoft has achieved its near-term goals: Confuse the market and insert doubt about Linux and GPL usage, drag lawyers into every rack-mounted closet, increase the perceived risk of using open source code. Even if Microsoft has no legal footing (and I have no idea), they win on the perception clause.
Why? We can go from here only to more obfuscation and FUD. Let's say FSF vigorously challenges Microsoft, and Microsoft, of course, aggressively resists. Or let's say FSF does nothing and Microsoft, of course, aggressively resists. Or let's say Microsoft doubles down and not only aggressively resists but also ups the challenges in legal mumbo-jumbo press release volleys.
Guess what Joe Procurement Officer at Enterprise XYZ will say: "Play it safe. Wait. Pay Microsoft for Windows and pay Microsoft for Linux, too. Next."
Maybe enterprises should just break the Microsoft-defined rules and then count on a Presidential commutation of any penalties.
And like I said the other day, SaaS is the real worry for Microsoft anyway. The open source challenge has become a worldwide force that no U.S. legal wrangling can mute. My survey on the topic so far bears out Microsoft's perceived downside risk.
Thursday, July 5, 2007
After attending a few IBM conferences over the past two months, I have a pretty good sense of what the new IBM is all about. I think they have a strong and correct vision, that success nonetheless depends on extremely good global execution across many disciplines, that time is short, and that it will be quite hard for nearly any other IT provider to emulate IBM if it succeeds and develops a sizable lead in crucial markets like health care.
What The Times story did not address is the essential role that SOA and code reuse will play in this vision and its execution. IBM and most large IT providers (and users!) recognize this: they cannot just throw more people at the problem (even if they could get them). Enterprises need to rush to find the right blend of wetware and flexible software services, and then aggressively reuse the software.
As IT solutions become highly customized and depend increasingly on deep knowledge of industry verticals and individual companies, "people as solution" just won't scale. Any and all technology options should be on the table. Costs should be managed for long-term ROI. Locations of the assorted workers and the IT itself should be as flexible as possible, but neither remote nor on-site by default.
Once these productivity adjustments are in full swing, the deciding factors for success will be about how smartly software components and networked application and data services are defined, exploited, leveraged and extended. The 80-20 rule will have great bearing on the efficiencies. In my interpretation for this blog, the 80-20 rule holds that 80 percent of the cost and labor comes from the 20 percent of the applications requiring the most customization. And that's true even if 80 percent of the applications are crafted from past services, shrink-wrapped precedents and stock components.
In highly customized development, however, reuse percentages start out low -- due to the high degree of specific requirements and unique characteristics of the tasks at hand. For vertical industry custom development, it may be the 50-50 rule (half reuse and half fully custom) at best for some time. Costs will be high. But those IT providers and outsourcers willing to get in first and learn, and to share the knowledge-acquisition risk with clients, will gain significant long-term advantage.
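The reuse arithmetic above can be made concrete with a small sketch. All numbers here are illustrative assumptions of my own (the cost multiplier, the component counts), not figures from any vendor; the point is simply how sharply total cost falls as the reuse share climbs from 50 percent toward 80 percent when custom work costs several times as much per component as reuse.

```python
# Hypothetical sketch of blended project cost under different reuse shares.
# All figures below are assumed for illustration only.

def blended_cost(total_components, reuse_share, custom_unit_cost, reuse_unit_cost):
    """Rough project cost when reuse_share of the components are reused
    and the remainder must be built fully custom."""
    reused = total_components * reuse_share
    custom = total_components - reused
    return reused * reuse_unit_cost + custom * custom_unit_cost

# Assume custom work costs 5x as much per component as reusing one.
cost_50_50 = blended_cost(100, 0.5, custom_unit_cost=5.0, reuse_unit_cost=1.0)
cost_80_20 = blended_cost(100, 0.8, custom_unit_cost=5.0, reuse_unit_cost=1.0)

print(cost_50_50)  # 300.0
print(cost_80_20)  # 180.0
```

Under these assumed numbers, moving from 50 percent to 80 percent reuse cuts the blended cost by 40 percent, which is the kind of margin the early-in providers would be chasing.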
I can see why IBM is buying back so much of its stock.
That's because those IT providers that can boost the percentage of reuse closer to 80 percent -- even on highly vertical and specific custom projects -- to be blunt, win. As IBM, SAP, Oracle, Microsoft and HP roll up their sleeves and get their people deeply into these business-specific accounts, they will come away from each encounter richer. They will be richer in understanding the business, the industry, the problems and discrete solutions -- but they will also be richer by walking away with components and services germane to that business problem/solution set, ones that will be easily reused in the next accounts (anywhere on Earth).
The providers can also apply these components to creating on-demand applications and SaaS services for the potentially larger SMB markets that they may wish to serve, in highly virtualized and account-specific fashion. That recurring revenue may make up for the investment period in sharing risk inside those larger vertical industries. Lessons learned on-demand can be applied back to the larger enterprises. And so on.
The growing library of code and services assets that blossoms for these early-in blended IT providers will ease the need to scale the people over time. Expertise and experience on a granular, "long tail" basis, coupled with a highly efficient general computing fabric, will be the next differentiating advantage in IT and outsourcing. Reuse will solve both the people problem and the business model problem. This, incidentally, has SOA written all over it. Grid and utility computing with myriad hosting options also supports this vision quite well. Get it?
Lastly, who controls and owns the libraries of business execution knowledge is a huge issue. Individual companies should and will want to own their business logic, even patent or protect it legally. They may want their blended IT provider to help them develop and create this intellectual property, and remain assured that the client always owns it. None of this "my IP" popping up in China business, either.
There will be tension between what the blended IT provider owns and what the client business owns. Knowing where the demarcation point is that separates the provider's intellectual property from the client's will be an area for especially careful consideration -- as early in the project as possible. IT providers and clients will need to partner more than tussle. IBM gets this and is already doing the "play nice" vendor dance. There are still some steps to learn here for Oracle, Microsoft and SAP.
These boundary and ownership issues will be worked out such that the businesses can keep the business logic and innovation jewels (and compete well by them for a long time), while also allowing the IT provider to take away enough knowledge and code assets to make its initially costly, people-laden work worth the slog. An amenable bridging of these concerns will ultimately lower costs and speed execution for all.
In those areas -- large and varied they will be -- where ownership and control are hard to agree upon ... open source licenses on the services and applications within the boundary zone of ownership make great sense. It will be hard to draw a fine line between client assets and provider assets, so install a mutually beneficial grey area of open source "ownership" between them. That's why developments like GPLv3 are so important. In the future, open source will apply to applications, components, and services far more than to platforms and runtimes.
In this "blended IT provider" environment, the providers that can be trusted to leave the jewels alone may end up getting the better spoils of general reuse efficiency. And therefore become the low-cost, high-productivity provider in myriad specialized fields (what makes you special?), and energize their own global partner ecologies, to boot.
Yes, there is indeed a new brass ring in the global IT business, and IBM has its eye on it.
This makes for some interesting reading, along with the comments. I wonder who, if anyone, sponsored the survey.
[Addendum: SAP seems to be reaching the same conclusion, blogs Phil Wainewright.]
I can't vouch for the survey's integrity, but the findings jibe with what I'm seeing and hearing -- only I think the erosion trend is understated in this survey, especially if global markets are factored in.
I think too that this represents more than a tug away from Windows purely by Linux clients, and may mask the larger influence of open source more generally, as well as virtualization, RIAs, Citrix, OS X, and SaaS.
The impact of SaaS in particular bears watching as more ISVs adopt on-demand approaches as their next best growth opportunity, even though they will still have a Windows port. SaaS, not Linux, is the slippery slope that will affect Microsoft most negatively over time. Go to SaaS fully and forget the client beneath the browser.
Those ISVs that remain exclusive to rich Windows clients for deployment won't stay that way for much longer, even as they continue to use Microsoft tools and servers. I wonder what percentage of ISVs go to market ONLY via Windows rich clients, and how that has tracked over the past five years.
If ISV developers can satisfy the need to be on those Windows clients via the browser or RIAs alone, and as more users look to browsers as the primary means of application access, the need to use Windows on the development and deployment side could slide sizably, too. It's the same threat Netscape represented more than 10 years ago ... it just took some time. Pull the finger from the dike ... and the trickle becomes a flood.
SOA has an influence here too. When any back-end platform can be used just as well to support the services that can be best integrated, orchestrated, and extended into business processes -- watch out. Choices on deployment for consolidation and virtualized environments move swiftly to a pure cost-benefit analysis -- not one based on interdependencies. Windows on the server will have to compete with some pretty formidable deployment options. When such a tipping point would occur -- whereby the browser or standards-based approach to clients/mobile/services grows such that the back-end choices shift to low-cost, best-of-breed alone -- remains an open question. Once you break the bond to the client, however, how far off can the shift be to whatever server choices are most economically attractive?