From a variety of sources, I'm hearing the same thing Robert Scoble is: Microsoft under Ray Ozzie is making major strides in giving Web developers what they want -- opening up the client side with IE 8, putting core productivity apps online as services, and assembling the cloud-supported infrastructure -- to make a compelling new case for keeping Microsoft on the short list of premier tools vendors, runtime vendors AND service providers.
The Google fear on the business model disruption, the Apple fear on the client disruption, and the Amazon fear on the cloud disruption seem to be making Microsoft do what antitrust regulators, Java, open source developers, Linux, Firefox, OpenDocument, IBM, Novell, and a chorus of Microsoft bashers like myself have been trying to do for many years. And that is ultimately to save Microsoft from itself.
At the PDC in LA a mere 2.5 years ago, Redmond seemed to be slipping backward in time, into a gradual descent, with its Connected Computing drive -- all of us connected to the Indigo bus using only MS file formats. This was, as I said at the time, an attempt to make the web a client/server affair, with Microsoft's fat clients (not its browser) as the client bits. Microsoft seemed to think it had whipped the web sufficiently to go back to the old tricks -- integrated tools plus client monopoly plus closed packaged apps equals total domination.
Now we're seeing a much different approach: actually meeting the Internet on its terms and shifting the Microsoft way -- not the other way around. We'll see more open tools, less lock-in to the client monopoly, and fewer closed, packaged services, with a differentiated subscription- and ad-supported business model. Total domination, perhaps not; but a long slog to irrelevance and demise -- no way.
With Silverlight, we see RIA tools that bridge client environments -- even non-Microsoft mobile runtimes and Linux. We're seeing an IE 8 that supports (rather than subverts) de facto and official web standards. With Microsoft Online Services you can side-step the closed fat client apps. We're seeing low-cost commodity infrastructure in the cloud with SQL Server Data Services instead of server lock-in. [Message to Sun: Get MySQL Services on your cloud ASAP, and for free!]
Yes, all those who have been surrounding Microsoft with 1,000 cuts for years, ganging up on it, picking on it, teasing it, disrupting its cash cows and taking the punch out of its arrogance -- you have done a great job. You mooned the giant, and the giant changed instead of charged. Jack did not get a chance to cut the beanstalk while the giant was still in descent. The giant went back to the lab in his castle, led by Ray Ozzie.
As a result, Google is not going to get away with chopping down the vine unmolested. Yahoo and Amazon are not going to combine to form the perfect web services/ecommerce cloud. Apple remains an elitist playground with a nice music business. Time Warner, AT&T, Motorola, Novell and Red Hat remain out to lunch. Microsoft will still generate enough gravity to hold IBM, SAP, HP, Dell, Intel, Nokia, and the global SIs in a tight orbit. And if Microsoft plays the advertising network card (with Yahoo) right, it will form a new center of gravity for media and entertainment (and perhaps business services) to provide the second source to Google.
Trouble is, this is a good news, bad news moment.
The good news is that Microsoft can change and adapt (at least in its intentions and early deliverables so far). The bad news is that Microsoft can change and adapt, even if it needs to hamstring its traditional cash cows to do it.
Microsoft used to want to prevent the need for a web monopoly play (almost impossible by definition) by embracing and extending its way to keeping its monopoly as the gatekeeper to the business and commerce Web. Now it is making the bold move to convert its old monopoly into the new largest comprehensive web player. It may not be number one in all things web, but it might be in the top three for most everything web -- and that is also the bad news.
Microsoft, the violator of anti-trust laws and the consent decrees and EU rulings, is now poised to become the second source to Google in the ad-supported media world. Meet the new boss, same as the old boss.
And that raises the same old questions. Will the power increase to a point where the openness declines? Will the standards over time be increasingly set by the de facto market leader? Will the Internet and its efficiencies work best for consumers and users, or for those who can manipulate it best?
On the other hand, has Microsoft shot itself in the foot by going so open that it can never go back? Is lock-in on the web no longer possible -- can no one vendor gain a choke-hold, with enough critical mass and influence to reinstall the cathedral and shut down the bazaar?
These are the questions we'll need to revisit in three years. Seriously.
Wednesday, March 5, 2008
Cloud computing for enterprises, work it through your head
Here are some great quotes from a Hiperware white paper I just read:
In combination, cluster computing and multi-core computers have the potential to provide unprecedented performance, scalability and reliability for enterprise software.
The new paper goes on to detail several enterprise computing use-case scenarios that show how cloud computing architectures and methodologies, if enterprise developers can exploit them, will rapidly advance cost-benefits.
Much of the significant benefit evident in the ideology of multicore and cluster computing -- lower costs, higher availability and scalability -- is effectively negated by the cost, time, risk and complexity involved in developing and deploying software that can run on these systems.
... What hinders businesses from taking advantage of multicore and clustered hardware is the lack of a simple means -- such as a Rapid Application Development (RAD) method -- so that software developers can quickly develop, test and deploy enterprise software on these systems.
By taking the engineering complexity away from multi-core and cluster-computing, Hiperware Platform makes it significantly easier for developers to write software that can be partitioned across multiple computers or CPU-cores or virtual machines.
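The white paper excerpts don't show Hiperware's actual APIs, so treat the following as a minimal, generic sketch of the underlying idea only -- splitting a workload into partitions and fanning it out across CPU cores -- using nothing but Python's standard library:

    import multiprocessing

    def process_chunk(chunk):
        # Stand-in for real per-partition work (parsing, scoring, aggregating).
        return sum(x * x for x in chunk)

    def partition(data, n):
        # Split the workload into roughly equal slices, one per worker.
        size = (len(data) + n - 1) // n
        return [data[i:i + size] for i in range(0, len(data), size)]

    if __name__ == "__main__":
        data = list(range(1000000))
        cores = multiprocessing.cpu_count()
        with multiprocessing.Pool(processes=cores) as pool:
            results = pool.map(process_chunk, partition(data, cores))
        print(sum(results))

Platforms like the one described extend that same split/compute/merge pattern across whole clusters and virtual machines, handling the distribution, failure and deployment chores that the paper says otherwise negate the benefits.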
Cloud computing is not just for Google and Amazon, folks. It will be synonymous with high performance and then good old enterprise mission-critical computing, in all its forms, in the coming years.
The new neat trick will be managing how the clouds and SOAs relate and interact. And that spells more integration as a service, and more federated policy management and enforcement as a service. It's a whole new abstraction for middleware.
Cloud computing could be the next big opportunity for middleware.
Tuesday, March 4, 2008
Splunk goes 'platform' to extend IT search benefits across more IT management functions
Gaining more insights, early and often, into what vast arrays of servers, routers and software stacks are actually doing has long been at the top of the IT wish list. Traditional IT management approaches force a trade-off between depth and comprehensive reach, meaning you can't get the full, integrated picture across mixed systems with sufficient clarity.
Splunk's approach to this problem has been to index and make searchable the flood of constantly generated log files emitted by IT systems, and then to align the time stamps to draw out business intelligence inferences about actual IT performance.
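Splunk's internals aren't detailed here, so purely as a toy illustration of that core idea -- normalizing timestamps so log events from many systems can be lined up and correlated -- here is a minimal Python sketch; the syslog-style line format and the hard-coded year are assumptions:

    import re
    from collections import defaultdict
    from datetime import datetime

    # Assumed syslog-style lines, e.g. "Mar 05 14:32:10 host1 sshd[212]: Failed password"
    LINE = re.compile(r"^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) (\S+) (.*)$")

    def index_logs(paths, year=2008):
        """Build a timestamp -> [(host, message)] index across many log files."""
        index = defaultdict(list)
        for path in paths:
            with open(path) as fh:
                for line in fh:
                    m = LINE.match(line)
                    if not m:
                        continue  # skip lines that don't match the toy format
                    stamp, host, msg = m.groups()
                    t = datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S")
                    index[t].append((host, msg))
        return index

Once events from different hosts share a normalized time axis, they can be read side by side -- the raw material for the kind of cross-system inferences described above.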
The San Francisco company took the IT information assembly and digestion process a step further two years ago by creating Splunk Base, an open reservoir of knowledge about searched IT systems for administrators to share and benefit from. [Disclosure: Splunk is a sponsor of BriefingsDirect podcasts, including this one on Splunk Base.]
Now, recognizing the power of mashed up services and Enterprise 2.0 tools for associating applications, services, and data, Splunk has gone "platform." Instead of only providing the fruits of IT search to sys admins and IT operators, Splunk has created the means to offer developers easy access to that data and the powerful inferences gleaned from comprehensive IT search. That means the data can go places no log file has gone before.
Through a common set of services and APIs, the Splunk Platform now allows developers and equipment makers to build and integrate applications that include IT-search generated data. Because Splunk collects and manages logs, configurations, messages, traps and alerts -- compiling statistics from nearly every IT component -- the makers of IT equipment can build better management and maintenance applications (not to mention billable services).
In trial use, the Splunk Platform has already been leveraged by OEMs and systems integrators, which have bundled and embedded Splunk with their own hardware, software and services. The opportunity for these OEMs and systems integrators is to pursue new business in ongoing maintenance and support offerings around their products and services.
What's more, the more applications that the various OEMs, service providers, hosting organizations, and service bureau outsourcers build on Splunk, the more those applications can be used in coordination, and their findings integrated, for faster problem solving, greater threat response, heightened compliance reporting, and business intelligence insight into user activity and transactions.
I like this approach because gaining insight into total datacenter behavior in near real-time has been so difficult, yet its importance is growing with the advances in virtualization, mixed-hosting arrangements, co-location, and SOA-based systems and infrastructure. In effect, both the complexity and the heterogeneity of systems have kept growing, while the ability to gain common-denominator metadata about systems behaviors hasn't kept pace. We've long needed a way to make all systems "readable" in common ways.
With the Splunk Platform and the applications it will spawn, IT information can now much better support and interact with distributed management applications. And we certainly need more innovative applications that can leverage this common metadata about systems to produce better management and quicker feedback from systems and users.
Taking this all a step further, many of these applications and services can and should support an ecosystem. By easily distributing their applications and gaining the ability to download other applications created by anyone in the Splunk ecosystem, IT managers and the makers of IT equipment will benefit. To kick-start the effort, the first Splunk-built application on the platform was announced this week. Splunk for PCI Compliance is available for download from SplunkBase.
The application provides 125 searches, reports and alerts to help satisfy PCI requirements, including secure remote access, file integrity monitoring, secure log collection, daily log review, audit trail retention, and PCI control reporting, says Splunk. The goal is to make it simpler and faster for IT managers to comply, to answer auditor questions, and to control access to sensitive systems data. Splunk has taken pains to provide security and access control to the sensitive data, while opening up access to the non-sensitive information for better analysis.
Consequently, Splunk's foray into the developer world and applications ecosystems coincides with the company's release of Splunk 3.2, which now includes a Splunk for Windows version (on the same single code base that runs on Linux, Mac OS X, Solaris, FreeBSD and AIX). New features in Splunk 3.2 include transaction search and interactive field extraction to create easier ways for end users to generate their own applications. The update also extends the platform's capabilities with filesystem change monitoring, flexible roles, data signing and audit trails. A new REST API and SDKs for .NET and Python further open the platform to more developers.
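The post doesn't document the new REST API itself, so the endpoint path, port, and response format below are illustrative assumptions only -- a hedged sketch of what driving IT search over HTTP from Python might look like, not Splunk's actual interface:

    import base64
    import urllib.parse
    import urllib.request

    BASE = "https://splunk.example.com:8089"  # hypothetical host and management port

    def run_search(query, user, password):
        # POST a search job request with HTTP basic auth; the path is an assumption.
        data = urllib.parse.urlencode({"search": query}).encode()
        req = urllib.request.Request(BASE + "/services/search/jobs", data=data)
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        req.add_header("Authorization", "Basic " + token)
        with urllib.request.urlopen(req) as resp:  # real code would verify TLS certs
            return resp.read()  # response body; format (XML/JSON) is assumed

    # Illustrative query only, not one of the product's shipped searches:
    # run_search('search sourcetype=syslog "failed password" | head 10', "admin", "changeme")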
The Splunk Platform and its associated ecosystem should quickly grow the means to bridge the gap between runtime actualities and design-time requirements. When developers can easily know more about what applications and systems do in the real world, in real time, they can make better decisions and choices in the design and test phases. This obviously has huge time- and money-saving implications.
The need for such transparency will quickly grow as virtualization and a services-based approach to applications gain steam and acceptance. We have seen some very powerful productivity improvements as general enterprise data has been mined for business intelligence. Now it's time to mine systems data for better IT intelligence.
Monday, March 3, 2008
Nexaweb Advance takes RIA value to the enterprise application modernization imperative
There are so many good reasons to modernize legacy and 3GL/4GL applications that enterprises are moving wholesale to modernization activities, changing entire classes of applications, and aligning them with SOA, SaaS, data center consolidation, ITIL, and energy-conservation/green initiatives.
Oh, and modernization allows you to gracefully get out of the costly fat PC client software support business and focus on the browser-only end points.
The building interest in virtualization is also a spur to getting out of the client/server business and making more applications Web-facing and services-based. These moves, in turn, allow for better organizing data into common warehouses and SANs, allowing for BI and other benefits while reducing storage and backup costs. Business continuity also gets a boost, because everything is on the server side (often low-cost x86 Linux).
In short, what enterprises are really up to these days is datacenter transformation -- the whole ball of wax -- in which applications modernization is an early and essential ingredient for enjoying the larger holistic productivity and cost benefits.
The trick is to keep those same older (and often mission-critical) applications performing well, with the rich GUIs that users expect, while quickly gaining the back-end integration flexibility to make the legacy logic part of any enterprise's SOA patterns.
For those applications deemed no longer mission-critical, application modernization allows for proper sunsetting. It is often worthwhile to cull out the still valued logic, transactional mappings, and data -- and apply them anew to other applications or processes -- before pulling the plug.
Yep, so many reasons to modernize, so few ways to do it without pain, confusion, and cost. And so into this gaping need, Nexaweb today brings its rich Internet application (RIA) solution value with Nexaweb Advance. [Disclosure: Nexaweb is a sponsor of BriefingsDirect podcasts.]
For more on the whole rationale and business case for application modernization, check out a sponsored podcast I did with HP Services. ITIL v3 factors into this in a big way, so here's some background on that, too.
For Nexaweb, the end game for enterprises is flexible composite workflows, so the newest offerings are more than tools and platform: there's a professional services component to take the best practices and solutions knowledge to market as well. The process includes application asset capture and re-factoring (sort of like IT resources forensics), re-composition, deployment, and then proper maintenance. In the bargain, you can gain an enhanced platform, increased automation, and services orientation.
The goal is to harvest all those stored procedures but target them at newer architectures -- from Struts to Spring -- and to move from client/server to Enterprise 2.0, a leapfrog of sorts. The re-use of logic then allows those assets to be applied to model-driven architectures and the larger datacenter transformation values.
Nexaweb Advance pairs Nexaweb’s Enterprise Web Suite with automated code generation tools and professional services to deliver a model-driven architecture approach to the transformation of legacy PowerBuilder, ColdFusion, C++, VisualBasic, and Oracle Forms applications, according to the Burlington, Mass. company.
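Nexaweb's actual capture and generation pipeline isn't published here, so the following is only a toy Python sketch of the model-driven pattern under stated assumptions: a legacy screen captured as a hypothetical declarative model, from which code for a newer target is generated.

    import xml.etree.ElementTree as ET

    # Hypothetical captured model of a legacy (say, PowerBuilder) screen.
    MODEL = """
    <form name="CustomerLookup">
      <field name="customer_id" label="Customer ID" type="text"/>
      <field name="region" label="Region" type="text"/>
      <action name="search" label="Search"/>
    </form>
    """

    def to_html_form(model_xml):
        """Generate a plain HTML form from the declarative model."""
        form = ET.fromstring(model_xml)
        out = [f'<form id="{form.get("name")}">']
        for field in form.findall("field"):
            out.append(f'  <label>{field.get("label")} '
                       f'<input name="{field.get("name")}" type="{field.get("type")}"/></label>')
        for action in form.findall("action"):
            out.append(f'  <button name="{action.get("name")}">{action.get("label")}</button>')
        out.append("</form>")
        return "\n".join(out)

    print(to_html_form(MODEL))

The point of keeping the model declarative is that the same captured asset could just as well feed a Struts or Spring generator as an HTML one -- which is the re-use-then-retarget move described above.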
We have seen quite a bit of associating RIA values with SOA in the past few years, so I'm happy to see RIAs also becoming essential to other mainstream enterprise imperatives, like datacenter transformation.
Microsoft opens Pandora's box on online services, betting convenience is the killer app
Now that Microsoft has shown how online productivity applications and communications/groupware should be properly packaged, we can enter the new era of worker choice.
It's not that different from the choices developers have been making for years: Do you want the convenience of neat packaging (at the cost of flexibility and choice), or do you want to pick a la carte components that may best meet your needs and avoid lock-in?
Microsoft Online Services (MOS) is being launched for the U.S. today by Bill Gates at the annual Microsoft Office SharePoint Conference. The bevy of applications is designed to appeal to many kinds of users, and businesses of most sizes and character. A limited beta has been set up, with general availability during the second half of this year.
Core services will include Web-based e-mail, calendaring, contacts, shared workspaces, and webconferencing and videoconferencing over the Web. Microsoft is characterizing the services as part of its "software plus services" drive, so it's hard to tell how much of the "software" (that stuff installed on the PC or server) you'll need to use MOS.
Microsoft says these services will be "managed through a single Web-based interface," which sounds like a portal you'll need to log in to add or manage users. "IT professionals can monitor the performance of the services, add and configure users, submit and track support requests, and manage users and licenses," says Microsoft.
As in development, some shops like a nice big package with per-developer-seat licenses. Others give their developers more choice of tools, utilities, desktop OS, and frameworks. They seem more interested in the work the developers do than in how they do it.
We could see a similar breakdown among more general computing users, given the MOS versus Google services offerings so far. This is more than a matter of style or taste: one model is born of and imbued with client/server, and the other is born of and imbued with the Web. You know which is which.
So, in effect, Microsoft is placing a Web shell on its old model, just like it put a GUI shell on DOS with DOS 5, and another shell on that with Windows 95.
Of course, on costs, the beauty and/or the devil is in the details. This is a subscription service designed for businesses, which will pay on a per-user basis. Microsoft shops -- existing customers with Software Assurance on their Microsoft Client Access Licenses (CALs) -- will get a discount.
So there are two big issues here: total cost and convenience. And those will break down differently depending on whether you're a Microsoft Software Assurance customer or a non-Microsoft user. We don't know the numbers yet, but they're going to be the real nut in this.
Microsoft will need to skate delicately on thin ice: the total cost must stay close enough to what Software Assurance customers already pay to keep them from moving too quickly, yet low enough that the Microsoft way to online SaaS remains at least marginally competitive against Google and other providers of online productivity applications and communications/groupware as services.
And the way this is set up, it's almost as if Microsoft has given up on competing for individuals, students, SOHOs, and perhaps businesses of fewer than 50 people. It's almost as if they don't think they can compete with Google there -- at least not for the foreseeable future.
This is, then, about maintaining the base of small-business and department-level buyers of Microsoft products. In essence, this is defense. It is designed to make it confusing or economically difficult to calibrate total costs, given the complexity of factoring in installations, older apps, licenses, and the entire 20-year-old hairball.
And what Microsoft must do, in addition to making the true cost-benefit analysis murky, is absolutely win on packaging and convenience. This is where Google is vulnerable. Google has yet to show, costs aside, how businesses of all sorts can adopt its services and approach in an easy-to-manage way -- one that packages things up neatly for the IT folks and makes the transition from the hairball easy, convenient, and well understood.
And so Google continues the march into businesses via the organic, user-generated interest and convenience level. Google takes the early lead on the individuals and younger, greenfield companies.
And Microsoft places a bulwark around its empire. This could be a long slog.