Friday, March 20, 2009
Sure, a merger as rumored would be good -- but not urgently or obviously so -- for IBM. Big Blue gains a modest improvement in share of some servers, mostly Unix-based. It would actually gain just enough share of high-end servers to justifiably draw antitrust scrutiny nearly worldwide.
Yet these types of servers are not today's growth engines for IT vendors, they are the blunt trailing edge. Users have been dumping them in droves, with their sights set on far lower-cost alternatives and newer utility models of deployment and payment. IBM may want the next generation of data centers to be built of mainframes, but not too many others do.
In any event, server hardware is not a meaningful differentiator in today’s IT markets. Sun, if anyone, has proven that. For IBM to claim it as the rationale for the buyout is fishy. A lot of other analysts are holding their noses too. UPDATE: Good analysis from Redmonk's Stephen O'Grady.
The rumored IBM-Sun deal for $6.4 billion is incremental improvement for IBM on several fronts: open source software (low earnings), tape storage (modest albeit dependable revenue), Java (already mostly open), engineering talent (easier to get these days given Sun layoffs), new intellectual property (targeted by design by Sun on undercutting IBM's cash cows). In short, there are no obvious game changers or compelling synergies in IBM buying Sun other than setting the sun on Sun.
I initially thought the rumored deal, which drove up Sun's stock, JAVA, by nearly 80 percent on rumor day one, didn't make sense. But it does make sense. Unfortunately it only makes sense for IBM in a fairly ugly way. As Tom Foremski said, it smacks of a spoiler role.
If you were IBM, would you spend what may end up being $4 billion in actual cost to slow or stifle the deterioration of a $100 billion data center market, and, at the same time, take the means of accelerating the move to cloud computing off the table for your competitors? As Mister Rogers would say, "Sure, sure you would."
Most likely, though the denials are in the works, IBM will plunder and snuff, plunder and snuff its way across the Sun portfolio -- from large account to large account, developer community to developer community, employee project to project. The tidy market share and technology gems will be absorbed quietly, the rest canceled or allowed to wither on the vine.
Certain open source communities and projects that Sun has fostered will be cultivated, or not. IBM is the very best at knowing how to play the open source cards, and that does not mean playing them all.
Listen, this would be a VERY different acquisition than any IBM has done in recent memory. It’s really about taking a major competitor out when they are down. It’s bold and aggressive, and it’s ignoble. But these are hard times and many people are distracted.
The deal is not good for Sun and its customers (unless they already decided to move from being a Sun shop to an IBM shop), and may put in jeopardy the momentum of open source use up into middleware, SOA, databases and cloud infrastructure. That’s because, even at the price of $6.4 billion (twice Sun's market value before the deal talk), IBM will gain far more from the deal over the long term by eradicating Sun than by joining Sun's vector.
This deal is all about control. Control of Java, of markets, developers, cost of IT -- even about the very pace of change across the industry. For much of its history IBM has had its hand on the tiller of the IT progression. It was a comfortable position, except for a historically exceptional past 17 years for IBM. It's time to get back in the saddle.
Clearly, Sun has little choice in the matter, other than to jockey for the best price and perhaps some near-term concessions for its employees. It's a freaking yard sale. Sun is being run by -- gasp -- investment bankers. Here's a rare bonus bonanza in an M&A desert, for sure.
But let's be clear, this is no merger of partners or equals. This is assimilation. It’s Borg-like, and resistance may be futile. It is important to know when you're being assimilated, however.
Scott McNealy, Sun’s chairman, former CEO and co-founder, famously called the 2001 proposed merger of HP and Compaq a collision between two "garbage trucks." Well, IBM’s proposed/rumored purchase of Sun is equivalent to a garbage truck being airlifted out of sight and over the horizon by a C-17 cargo transport plane. Just open the door and drive it in. The plane was probably designed on Sun hardware, too. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Sun’s fate has been shaky for a long time now. The reasons are fodder for Harvard case studies.
But what of the general good of enterprise IT departments, of communities of idealistic developers, or of open and robust competition in the new age of cloud computing? In the new age, incidentally, you may no longer need an army of consultants and a C-17 full of hardware and software at each and every enterprise. As Nick Carr correctly points out, this changes everything. That kind of change may not be what IBM has in mind.
It’s not easy resting with IBM in control of vast portions of the open source future, and the legacy installed past. Linux and Apache Web servers might have made sense for IBM, but do open source cloud databases, middleware, SOA, and the next generations of on- and off-premises utility and virtualization fabric infrastructure?
IBM today is making the lion's share of its earnings from the software and services that run yesterday's data centers. Even the professional services around the newer cloud models (and subscription fees based on actual, not low-utilization, use) do not make up for lost software license revenues. In many ways, cloud is more a threat than an opportunity to Big Blue. It ultimately means lower revenues, lower margins, less control, and feisty competitors that make money from ads and productivity, not sales and service.
Cloud models will take a long time to become common and mainstream, but any sense of inevitability must make IBM (and others) nervous. Controlling the pace of the change is essential.
The hastening shift to virtualization, application modernization, SaaS, mobile, cloud, and increased use of open source for legacy infrastructure could seriously disrupt the business models of IBM, HP, Cisco, Microsoft, Oracle and others. Moving from legacy-and-license to cloud-and-subscription (on OSS or commercial code) poses a huge risk to IBM, especially if it happens fast -- something this unexpected economic crisis could accelerate.
Enterprises could soon gain the equivalent of the powerful and efficient IT engines that run a Google or Amazon, either for themselves, or rented off the wire, or both. IBM probably won't have 60 percent of the cloud services market in five years like it does the high-end Unix market (if it gets Sun). In fact, what has happened to Sun in terms of disruption may be a harbinger of what could happen to IBM during the next red-shift in the market.
Sun should have gotten to these compelling cloud values first, made a business of it before Amazon. Sun was on the way, had the vision, but ran out of time and out of gas.
Sun has let a lot of us down by letting it come to this. The private equity firms that control Sun now don't give a crap about open source, or innovation, clouds or whether the network is the computer, or my dog's pajamas are the computer. They need to get their money back ASAP.
As a result, they and Sun could well be handing over to IBM the very keys to being able to time the market to IBM's strategic needs above all else. All for $6.4 billion in cash, minus the profits from chopping off Sun's remaining limbs and keeping the ones that make a good Borg fit.
There should be a better outcome. Should the deal emerge, regulators should insist on what IBM itself called for more than 10 years ago. Something as important as Java and other critical open software specifications (OpenSolaris?) should be in the control and ownership of a neutral standards body, not in the control of the dominant global legacy vendor.
It’s sort of like letting General Motors decide when to build the next generation of fuel efficient and alternative energy cars. And we know how that worked out.
IBM has the deep pockets now to buy strategic advantage during an economic crisis that helps it in coming years. It's during this coming period when the cloud vision begins to stick, when the madness of how enterprise IT has evolved in cost and complexity is shaken off for something much better, faster and cheaper.
And that’s what IT has always been about.
Wednesday, March 18, 2009
The San Mateo, Calif. company, which provides large-scale analytics and data warehousing, says SG Streaming has allowed customers to achieve production-loading speeds of over four terabytes per hour with negligible impacts on concurrent database operations. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]
Under the "parallel everywhere" approach to loading, data flows from one or more source systems to every node of the database without any sequential choke points. This differs from the traditional “bulk loading” technologies used by most mainstream database and parallel-processing appliance vendors, which push data from a single source, often over a single channel or a small number of parallel channels, resulting in fundamental bottlenecks and ever-increasing load times.
The new technology "scatters" data from all source systems across hundreds or thousands of parallel streams that simultaneously flow to all nodes of the database. Performance scales with the number of nodes, and the technology supports both large batch and continuous near-real-time loading patterns with negligible impact on concurrent database operations.
Data can be transformed and processed in-flight, utilizing all nodes of the database in parallel, for extremely high-performance extract-load-transform (ELT) and extract-transform-load-transform (ETLT) loading pipelines. Final 'gathering' and storage of data to disk takes place on all nodes simultaneously, with data automatically partitioned across nodes and optionally compressed.
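The scatter/gather pattern described above can be sketched in a few lines of Python -- this is not Greenplum's actual implementation; the node count, hash partitioning scheme, and in-flight uppercase transform are all hypothetical stand-ins for the real machinery:

```python
import hashlib
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

NUM_NODES = 4  # hypothetical cluster size

def scatter(rows):
    """'Scatter' phase: partition rows across nodes by hashing the row
    key, so no single sequential channel becomes a choke point."""
    streams = defaultdict(list)
    for key, value in rows:
        node = int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_NODES
        streams[node].append((key, value))
    return streams

def load_on_node(node, batch):
    """'Gather' phase: each node transforms and stores its own partition
    in parallel (here, a stand-in in-flight uppercase transform)."""
    return node, [(k, v.upper()) for k, v in batch]

def parallel_load(rows):
    streams = scatter(rows)
    # All nodes load simultaneously rather than through one bulk channel.
    with ThreadPoolExecutor(max_workers=NUM_NODES) as pool:
        return dict(pool.map(lambda item: load_on_node(*item), streams.items()))
```

The point of the sketch is the shape of the pipeline: partitioning happens up front across every stream, and each node's transform-and-store work proceeds in parallel, which is why throughput can scale with the number of nodes.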
It was just six months ago that Greenplum publicly unveiled how it wrapped MapReduce approaches into the newest version of its data solution. That advance allowed users to combine SQL queries and MapReduce programs into unified tasks executed in parallel across thousands of cores.
Active Endpoints aims at greater process design and implementation productivity with ActiveVOS enhancements
The latest offering from the Waltham, Mass. company provides what amounts to shrink-wrapped service-oriented architecture (SOA) and provides business process management (BPM) automation, while adhering to business process execution language (BPEL) standards. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]
There's an Active Endpoints podcast on the solution, and a new white paper on SOA implications of the process efficiencies from Dave Linthicum. We also recently did an Analyst Insights podcast on recent BPEL4People work.
Following close on the heels of version 6.0, which debuted in September, and 6.0.2, which made its appearance in December, the newest ActiveVOS offering brings features aimed at smoothing the way for developers. For example, a new tool, the "participant's view," eliminates the need for developers to manually code complex programming constructs like BPEL partner links and BPEL partner link types that are needed to define how services are to be used in a BPM application.
Another major enhancement is "process rewind." At design time, no BPM application can anticipate all of the operational issues and error handling that will be required. Process rewind gives developers the ability to rewind a process to a specific activity and redo the work without having to invoke any of the built-in compensation logic. This allows certain steps of the process to be “redone” without impacting work already performed.
Among the other improvements:
- Any-order development, which presents services details as graphical tables into which details can be entered at any time. This is in contrast to earlier systems in which developers needed to know the details in advance.
- Automatic development, which eases the tasks for developers new to SOA-based BPM. Version 6.1 automatically understands “private” versus “public” web services description language (WSDL) files and creates the required WSDLs in both a standards-compliant mode and a human-understandable format.
- Improved data handling, which allows developers to visually specify what data is needed in each activity and guides the developer through XPath and XQuery statement generation. The BPEL standard separates assignment of data to activities from the invocation of those activities. While the technical reasons for this are clear to experienced developers, for new developers this can be an impediment.
ActiveVOS is available as a perpetual license. In an internal development environment, the price is $5,000 per CPU socket. In a deployment environment, the price is $12,000 per CPU socket when the deployment environment licenses are ordered with a first-time purchase of internal development environment licenses. Annual support and maintenance is 20 percent of total license fees.
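For a rough sense of the licensing arithmetic, here is an illustrative sketch using the per-socket prices quoted above (it assumes the bundled first-time deployment rate applies and that the 20 percent maintenance is charged on total license fees; an actual quote could differ):

```python
DEV_PER_SOCKET = 5_000      # internal development environment, per CPU socket
DEPLOY_PER_SOCKET = 12_000  # deployment environment, bundled first-time rate
MAINTENANCE_RATE = 0.20     # annual support and maintenance on license fees

def first_year_cost(dev_sockets, deploy_sockets):
    """Total license fees plus first-year support and maintenance
    for a hypothetical shop."""
    license_fees = (dev_sockets * DEV_PER_SOCKET
                    + deploy_sockets * DEPLOY_PER_SOCKET)
    return license_fees + license_fees * MAINTENANCE_RATE
```

A hypothetical shop with two development sockets and four deployment sockets would owe $58,000 in license fees plus $11,600 in first-year maintenance, or $69,600 in total.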
Panda, with North American operations in Glendale, CA, allows individual companies as well as value added resellers (VARs) to deploy and extend its hosted security services, which originally launched in May 2008. Panda says its solution can be more than 50 percent more efficient than traditional endpoint security software.
I expect that SMBs will be more likely to seek a full package of PC support services via third parties. Those third parties will want to deliver help desk, software management, patch management and -- now -- security as a full service, cloud-based offering.
By adding the Web-based Panda SaaS security benefits, branded under the third parties, the hassle and cost of managing each desktop on premises drops significantly. And it allows the SMBs to get closer to their goal of no IT department, or at least a majority of IT support gained as a service.
Enhancements to Panda's MOP include:
- Optimized management of end devices through a new Web-based management console that allows administrators to resolve deployment challenges from one centralized dashboard on any computer with an Internet connection.
- Increased reporting flexibility that allows administrators to select from an expanded set of security reports, including executive, activity and detection reports.
- Easier software deployment, which allows IT managers to leverage automatic uninstallers along with unique MAC addresses, facilitating personalized security settings for each end-device.
- Simplified computer management that allows offline handling of exported files.
- Improved client network status control, which allows VARs providing security services to SMB clients to have remote access via the service provider administration console, where they can centrally manage any update on every device in the client network.
The channel and PC support third parties gain a more complete package of services, while letting their partner, in this case Panda, pick up the security and on-going threats response requirements.
Another benefit comes from today's highly mobile workforce. Administrators are increasingly concerned with managing laptops belonging to traveling employees. A SaaS-based device support solution allows administrators to monitor and configure anti-malware software no matter what the employee's location.
In a recent study, Panda Security compared its SaaS product to three different traditional security products. The study found that using a SaaS product could be more than 50 percent less expensive over a two-year period than using the traditional products, when you consider staffing costs, capital expenditures, and deployment costs.
Panda MOP is available immediately in licenses sold by the seat in one- to three-year subscription packages. More information is available from www.pandasecurity.com.
If IBM wanted to buy Sun it would have done so years ago, at least on the merits of synergy and technology. If IBM wanted to buy Sun simply to trash the company, plunder the spoils and do it on the cheap -- the time for that was last fall.
So more likely, given that Sun has reportedly been shopping itself around (nice severance packages for the top brass, no doubt), is that Sun has been too successful at selling itself -- just to the wrong party at too low of a price. This may even be in the form of a chop shop takeover. The only thing holding up a hostile takeover of Sun to sell for spare parts over the past six months was the credit crunch, and the fact that private equity firms have had some distractions.
By buying Sun, IBM gains little other than some intellectual property and MySQL. IBM could have bought MySQL, or open sourced DB2 or a subset of DB2, any time, if it wanted to go that route. IBM has basically already played its open source hand, which it did masterfully at just the right time. Sun, on the other hand, played (or forced) its open source hand poorly, and at the wrong time. What's the value to Sun for having "gone open source"? Zip. Owning Java is not a business model, or not enough of one to help Sun meaningfully.
So, does IBM need chip architectures from Sun? Nope, has their own. Access to markets from Sun's long-underperforming sales force? Nope. Unix? IBM has one. Linux? IBM was there first. Engineering skills? Nope. Storage technology? Nope. Head-start on cloud implementations? Nope. Java license access or synergy? Nope, too late. Sun's deep and wide professional services presence worldwide? Nope. Ha!
Let's see ... hardware, software, technology, sales, cloud, labor, market reach ... none makes sense for IBM to buy Sun -- at any price. IBM does just fine by continuing to watch the sun set on Sun. Same for Oracle, SAP, Microsoft, HP.
With due respect to Larry Dignan on ZDNet, none of his reasons add up in dollars and cents. No way. Sun has fallen too far over the years for these rationales to stand up.
Only in playing some offense via data center product consolidation against HP and Dell would buying Sun help IBM. And the math doesn't add up there. The cost of getting Sun is more than the benefits of taking money from enterprise accounts from others. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
The cost of Sun is not cheap, or at least not cheap like a free puppy. Taking over Sun for technology and market spoils ignores the long-term losses to be absorbed, the decimated workforce, the fact that Cisco will now eat Sun's lunch as have the other server makers for more than five years.
So who might buy Sun on the cheap, before Sun's next financial report to Wall Street? Cisco, Dell, EMC, Red Hat. That's about it for vendors. And it would be a big risk for them, unless the price tag were cheap, cheap, cheap. Anything under $4 billion might make sense. Might.
Other buyers could come in the form of carriers, cloud providers or other infrastructure service provider types. This is a stretch, because even a cheap Sun would come with a lot of baggage for their needs. Another scenario is a multi-party deal, breaking up Sun among several different kinds of firms. This also is hugely risky.

So my theory -- and it's just a guess -- is that today's trial balloon on an IBM deal is a last-ditch effort by Sun to find, solidify, or up the price on some other acquisition or exit strategy. The risk of such market shenanigans only underscores the depths of Sun's malaise. The management at Sun probably sees its valuation sinking yet again to below tangible assets and cash value when it releases its next quarterly performance results. ... Soon.
The economic crisis has come at a worse time for Sun than for just about any other large IT vendor. Sun, no matter what happens, will go for a fire sale deal -- not a deal of strength among healthy synergistic partners. No way.
Monday, March 16, 2009
The result has led to a tectonic market shift that combines stunning customer adoption, whole new types of user productivity, a thriving third-party developer community -- and mobile and PC market boundaries that are swiftly blurring. Doing the advance work of pulling together elements of the full solution -- so that the users or channel players or consultants do not -- has worked well for Apple. It was bold, risky, and it worked.
Carriers could never pull off the iPhone integration value for users. Indeed, the way carriers go to market practically forbids it. It took an outsider and new entrant to the field to change the game, to remove the complexity and cost of integration -- and pass along both the savings and seductive leap in functionality to the buyers.
With today's announcement of the Cisco Unified Computing System -- along with a deep partnership with VMware on software and management -- Cisco Systems is attempting a similar solution-level value play as Apple with the iPhone. The solution may be at the other end of the IT spectrum -- but the potential leap in value, and therefore the disruption, may be as impactful.
We're seeing a whole new packaging of the modern data center in a way that may very well change the market. It's bold, and it's risky. Cisco -- as an entrant to the full data center solution field, but with a firm command of certain key elements (like the network) -- may be able to do what the incumbent data center providers -- along with the ecology of support armies -- have not. One-stop shopping for data centers has been only a goal, never fully realized. In fact, many enterprises probably don't want any one vendor to have such control, especially when standards are in short supply. But they need lower costs and lower complexity.
Cisco, therefore, is using the latest software and standards (to SOME degree at least) to integrate the major elements of "compute, network, storage access and virtualization into a cohesive system," according to Cisco. They go on to claim this leads to "IT as a service" when combined with VMware's upcoming vSphere generation of data center virtualization and management products. I'd like to see more open source software choices in the mix, too. Perhaps the market will demand this?
The concept remains appealing, though. Rather than have a systems integrator, or outsourcer, or major vendor, or your own IT department (or all of the above) cobble these complex data center elements together -- at high initial cost and monstrous ongoing cost ad infinitum -- "the integration is the data center" (as distinct from "the network is the computer") has a nice ring to it.
Cisco is proposing that the next-generation data center, then, is actually an appliance -- or a series of like appliances. Drop in, turn on, tune in and run your applications and services faster, better, cheaper. Works if it works. This may be too much for most seasoned IT professionals to stomach, but it's worth a try, I suppose.
And this will, of course, greatly appeal during a prolonged period of economic stress and uncertainty. Say hello to 2010. And the approach could be appealing to enterprises, carriers, hosting companies, and a variety of what are loosely called cloud providers. Indeed, the more common the data center architecture approaches across all of these players, the more likely the higher-order efficiencies and process-level integrations. Federating, sharing, tiering, cost-sharing -- all of these become more possible, heightening the productivity of the community of participants.
The cloud of clouds needs a common architecture to reach its potential. Remember Metcalfe's Law, on a network's value growing with the number of participants on it? Well, replace "node" and "participant" with "data center," and the Law and the network gain entirely new levels of value -- if the interoperability is broad and deep.
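That Metcalfe's Law point can be made concrete with a toy calculation -- the numbers here are purely illustrative:

```python
def metcalfe_value(n, unit_value=1.0):
    """Metcalfe's Law: a network's value scales with the number of
    possible pairwise connections among n participants, n*(n-1)/2."""
    return unit_value * n * (n - 1) / 2

# Ten isolated clouds of 100 data centers each, versus one
# interoperable federation of all 1,000 data centers:
isolated = 10 * metcalfe_value(100)   # 10 * 4,950 = 49,500 connection-units
federated = metcalfe_value(1000)      # 499,500 connection-units
```

Broad and deep interoperability makes the federated network roughly ten times more valuable than the sum of its silos -- which is the economic pull behind a common data center architecture.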
Make no mistake, the next generation data center business is a very large, multi-tens-of-billions of dollars market, and the competition is global, well-positioned, cash-secure and tough. Selling these data center appliances and "IT as a service" into individual accounts will be a huge challenge, especially if they are perceived as replacements alone. The Cisco solution needs to work well inside, alongside and inclusive of the other stuff, and the integrators have deep claws into the very accounts Cisco must enter.
We'll need to see the Cisco Unified Computing System act as a data center of data centers first. Its appeal, then, must be breathtaking to supplant the frisky incumbents, all of which also understand the importance of virtualization and low-cost hardware.
IBM, HP, Oracle, EMC, Microsoft, Sun, and the global SIs -- all will see any market game changing by Cisco as disruptive in perhaps the wrong way. But the enterprise IT market is ripe for major better ways of doing things, just like the buyers of iPhone have been for the last two years. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
UPDATE: HP has a response.
At the very least, Cisco's salvo will accelerate the shifts already under way in the next generation data center market toward highly-efficient on-premises clouds, complete and integrated applications support solutions, a deep adoption of virtualization -- and probably to a lot less total cost, real estate use, and energy demand as a result. The move by Cisco could also spur the embrace of open source software, along with standards, standards, standards. It's hard to see the economics working without them.
Already, Red Hat and Cisco announced a global OEM partnership. Cisco will sell and support Red Hat Enterprise Linux as part of its Unified Computing System, and will also support the newly announced Red Hat Enterprise Virtualization portfolio when it ships.
"Combined, Red Hat and Cisco will offer customers next-generation computing beyond RISC, beyond UNIX, beyond yesterday's legacy solutions for both virtualized and non-virtualized systems," says the statement.
Cisco and VMware are leaders in their areas, for sure, but they will need a community of global partners like Red Hat to pull this off. How about the larger open source universe? Unlike with Apple, it's a lot harder to create a data center support ecology than an app store. So the risks here are pretty huge. The enemy of my enemy is my friend effect may well kick in ... or not.
Or even more weirdness may ensue. What if Microsoft wanted in in a big way, given where it needs to go? What if Windows became the default virtualized container in Cisco's shiny new data center appliance? Disruption can be, well, disruptive.
Cisco has been seeking a way for many years now to extend its networking successes into new businesses. It has bought, it has built, and it has partnered -- but not to great effect in the past. Could this be the big one? The one that works? Is this the new $20 billion business that Cisco so desperately needs?
Sunday, March 15, 2009
This is the conclusion of a Forrester Research report, TechRadar For Sourcing & Vendor Management Professionals: Software as a Service. After talking to customers, vendors, and researchers, Forrester discovered that about 21 percent of enterprises were piloting or already using SaaS and another 26 percent are interested in it or considering it.
I expect this growth of SaaS use to increase under the dour economy as companies look to increase applications productivity without any up-front capital spending, and also as they shut off expensive standalone applications on older hardware. SaaS has an economic appeal well suited to the challenges facing IT managers.
At the same time, says Forrester, companies are taking a more strategic approach to SaaS, which until now often flew in under the radar. That means IT didn't bring SaaS apps in, workers and managers did. Part of the strategic interest now comes from IT too -- to rein in system redundancies and costs.
Any responsible IT department should now conduct the audits and due diligence to determine which old and new applications would be best delivered as SaaS from third parties. The ability to absorb these apps well also puts the IT department in a better position to leverage cloud-based services and infrastructure fabrics.
SaaS's march into enterprises is tempered, however, by real or perceived security risks that come from using off-premises systems. This may account for the fact that the number of people not interested in using SaaS has increased over the past year. Do we have a culture gap on SaaS use? I advise enterprises to think like start-ups these days -- and that means using SaaS aggressively.
Another key finding of the March 13 report: SaaS offerings have proliferated and moved beyond their traditional "vanilla" customer relationship management (CRM) and human capital management functions.
Forrester determined 13 areas where SaaS applications are making headway. These include:
- Archiving and eDiscovery
- Business Intelligence (BI)
- Digital asset management
- Enterprise content management
- Enterprise resource planning (ERP)
- Human resources
- IT management
- Online backup
- Supply chain management
- Web content management
- Web conferencing
The bottom line for enterprises considering getting into the SaaS arena:
Sourcing and vendor management executives must keep ahead of the growing trend to understand where SaaS is most heavily used and where it lurks on the horizon, so that they can enable their business users to be more successful in business-led SaaS deployments as well as to consider SaaS as a viable alternative to IT-led vendor evaluations. Regardless of where the SaaS deployment originates, sourcing and vendor management executives have a key role to play in contracts and pricing, due diligence, and vendor governance and risk.

The full Forrester report is available from http://www.forrester.com/go?docid=46747.
None of this is surprising news to regular readers of BriefingsDirect or those who listen regularly to the podcasts. Our analysts and guests talk about the growing reliance on SaaS applications, especially in view of the economic decline. In fact, our year-end predictions for 2009 focused quite intensely on the role of SaaS in helping companies weather the storm -- and even chart a new course for the enterprise.
One of our regular analyst-guests and fellow ZDNet blogger Phil Wainewright charted out most of the 2008 developments over a year ago in his 2008 predictions. His predictions were based on what he saw as an awakening among users and vendors as to the potential of SaaS.
Jeff Kaplan in his Think IT Strategies blog made many of the same arguments in his 2009 predictions, in which he predicted that the thinking among IT executives was beginning to shift from whether to do SaaS to how to do it.
These bullish predictions and observations stand in stark contrast to a crepe-hanging piece last July in BusinessWeek, in which Gene Marks of the Marks Group declared SaaS overhyped, overpriced, and in need of debunking. The Marks Group sells customer relationship, service, and financial management tools to small and midsize businesses.
Nothing like a recession to focus the mind on practicality over ideology.