Thursday, February 21, 2008

Informatica offers Data Migration Suite; Wipro to leverage its power

Informatica Corp. is addressing the crucial issue of data access by packaging its data integration software as Informatica Data Migration Suite, with an eye toward providing a solution for companies involved in mergers and acquisitions, consolidation, and outsourcing.

With the shifting landscape of business today, as well as such innovations as service-oriented architecture (SOA), reliable data access is more important than ever to provide the organizational flexibility necessary to respond to internal demands and external pressures.

Despite this, most Global 2000 companies report that their data migration has come in late or over budget. According to a study by Bloor Research, commissioned by Informatica, some 84 percent of projects fell short of expectations and resulted in cost overruns averaging 30 percent. This can have a serious impact on an area expected to see budgets exceeding $8 billion in the next few years.

Informatica, Redwood City, Calif., says its new suite, which provides an independent software platform designed for data migration, will include the company's PowerExchange, PowerCenter, Data Explorer, and Data Quality products. The company says the new offering is designed to ensure the success of data migration projects.

One company has already jumped on the new platform, with the announcement that Wipro Technologies in Mountain View, Calif., will use it to provide data migration to its customers worldwide. The Informatica offering will underpin Wipro's Data Migration Shared Services.

Wipro, which provides integrated business, technology, and process solutions on a global basis, will use the Data Migration Suite to automate and streamline the process. This includes creating all data mappings required to migrate data from and to any system, discovering data quality issues at their source, and cleansing and converting the data as part of the overall migration.
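The migration steps described here -- mapping fields from a source system to a target, discovering quality problems at the source, and cleansing values along the way -- can be sketched generically. The following is a minimal illustration of that pattern; all function and field names are hypothetical and are not Informatica's or Wipro's API:

```python
import re

def profile_nulls(rows, fields):
    """Count missing values per field -- a basic data-quality discovery step."""
    counts = {f: 0 for f in fields}
    for row in rows:
        for f in fields:
            if not row.get(f):
                counts[f] += 1
    return counts

def migrate_row(row, mapping, cleaners):
    """Apply a source-to-target field mapping, cleansing values along the way."""
    target = {}
    for src_field, dst_field in mapping.items():
        value = row.get(src_field)
        cleaner = cleaners.get(dst_field)
        target[dst_field] = cleaner(value) if cleaner and value is not None else value
    return target

# Example: map a legacy 'cust_nm' field to 'customer_name', trimming whitespace,
# and normalize phone numbers to digits only.
mapping = {"cust_nm": "customer_name", "phone_no": "phone"}
cleaners = {"customer_name": str.strip,
            "phone": lambda v: re.sub(r"\D", "", v)}

legacy = [{"cust_nm": "  Acme Corp ", "phone_no": "(555) 123-4567"},
          {"cust_nm": "", "phone_no": "555.987.6543"}]

print(profile_nulls(legacy, ["cust_nm", "phone_no"]))  # {'cust_nm': 1, 'phone_no': 0}
print(migrate_row(legacy[0], mapping, cleaners))
```

Profiling first, then mapping and cleansing in one pass, is the order the suite's components imply: discover the problems before you move the data.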

The primary targets for the offering will be data migrations in support of new applications including ERP/CRM implementations and upgrades, legacy system modernization, application instance consolidation, mergers and acquisitions, and outsourcing.

Tuesday, February 19, 2008

Bungee Connect beta goes public, adds oomph to development and deployment as a service

Bungee Labs, Orem, Utah, continues its march toward "platform as a service" (PaaS) with today's announcement that it has opened Bungee Connect as a public beta, inviting all developers to, in Bungee's words, "get inspired, get started, and get involved."

Bungee Connect is an end-to-end environment that allows developers to build desktop-like applications from multiple Web services and databases and then instantly deploy them on Bungee's multi-tenant grid infrastructure.

Fellow ZDNet blogger Ryan Stewart has a good piece on Bungee's coming out. As does Dan Farber.

Bungee is dangling the carrot of a reduced time to market -- as much as 80 percent -- from testing, deploying, and hosting in a single, on-demand platform. With utility-like pricing, Bungee is also offering reduced cost. The company claims that businesses can expect to pay $2-5 per user per month for a heavily used business productivity application. However, all applications will be hosted for free during the public beta.

To provide a reference application, Bungee is also releasing WideLens, a calendar application that solves integration problems between Microsoft Exchange, Google Calendar, Facebook, MySQL, and iCalendar.

WideLens is intended to provide examples of integrating multiple databases and Web services, end-user authentication, and dynamic user interface presentation patterns. The integration accommodates a variety of protocols -- WebDAV, GData, SOAP, REST, and the MySQL client libraries.

A demonstration video is available from the Bungee Web site.

One major feature of Bungee Connect is that developers can access all applications through one of the major Web browsers -- Internet Explorer, Firefox, and Safari -- with no software downloads, installs, or plug-ins.

I've been following the exploits of Bungee since it first unveiled Bungee Connect last April at the Web 2.0 Expo. Back then, I saw its potential to expand the universe of Web services for developers:

" . . . the real innovation is how the Bungee Connect model provides an incubator, and — in essence — a business development partner to the developer so that they do not just create an application — they can create a business. That's because the cost for the use of the tools, testing, and then hosting is free, and the subscription cost for the at-scale hosting only kicks in based on the use of the application by end users. Low use means low costs, and high use means a predictable measure of the proceeds goes to the development and hosting service."

One thing we can take away from this announcement is that PaaS is now more than just a pie-in-the-sky concept or a one-off product. It's gaining traction, and is offering companies a low-cost -- or scalable-cost -- route to business development.

David Mitchell, Bungee founder and CEO, sees PaaS as the wave of the future. He said in a blog post on the Bungee site:

"In our view a platform includes all the systems and environments comprising the end-to-end life cycle of developing, testing, deploying, and hosting Web applications. Naturally, this platform must also be cloud based, a platform as a service.

At Bungee Labs, we believe a transformation larger than SaaS is emerging, where end-to-end development, deployment, and hosting platforms are provisioned as services over the Web."

Mitchell's blog post echoes what I said in an update last July:

The net effect of these trends and examples is that the time, cost, and risk of going from design to full production are deeply compressed. We are entering a period of unmatched applications, services, and media creativity.

I expect we'll be seeing more application lifecycle as a service approaches -- ones that bring enterprise-calibre development, test, deployment, and hosting together. Amazon is moving in this direction. We should also expect the same from Google and Microsoft.

Indeed, any cloud computing environment that wishes to support a vibrant community ought to offer what Bungee Labs is providing now. And I do hope that standards rule the day so that mashups remain straightforward.

The issue of code portability also needs to be addressed. It probably won't make sense for these hosts to make leaving easy, but it should nonetheless be quite possible and well understood.

Friday, February 15, 2008

TIBCO beefs up ActiveMatrix with 2.0 release, moves to 'Total Architecture' value

Promising hefty productivity increases and a lower TCO, TIBCO Software this week announced its beefed-up ActiveMatrix 2.0, which aims to simplify building and managing service-oriented architectures (SOAs).

This latest release adds BusinessWorks, which is available either in standalone mode or as a container hosted in the ActiveMatrix infrastructure, and Service Bus, a new lightweight enterprise service bus (ESB) that helps integrate services using content- or context-based routing.
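Content-based routing means the bus inspects a message's payload and picks a destination service accordingly. A minimal, vendor-neutral sketch of the idea (the route table and service names here are invented for illustration, not TIBCO's API):

```python
import json

# Hypothetical route table: the first predicate that matches a message
# determines its destination service.
ROUTES = [
    (lambda msg: msg.get("type") == "order" and msg.get("amount", 0) > 10000,
     "high-value-order-service"),
    (lambda msg: msg.get("type") == "order", "order-service"),
    (lambda msg: msg.get("type") == "invoice", "billing-service"),
]

def route(raw_message, default="dead-letter-queue"):
    """Return the destination for a JSON message based on its content."""
    msg = json.loads(raw_message)
    for predicate, destination in ROUTES:
        if predicate(msg):
            return destination
    return default

print(route('{"type": "order", "amount": 25000}'))  # high-value-order-service
print(route('{"type": "invoice", "amount": 99}'))   # billing-service
print(route('{"type": "unknown"}'))                 # dead-letter-queue
```

Context-based routing works the same way, except the predicates examine message headers or the runtime environment rather than the body.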

It was just a year ago that I had a BriefingsDirect SOA Insights Edition podcast devoted largely to TIBCO and ActiveMatrix. At that time, the panel of analysts saw newly announced ActiveMatrix as a definite shift in the trajectory that TIBCO had been on. You can listen to the podcast here.

I also conducted a sponsored book review podcast last year with TIBCO architect Paul Brown on the concept of Total Architecture, which ActiveMatrix 2.0 undergirds, for sure. Disclosure: TIBCO has been a sponsor of my BriefingsDirect podcasts. Read a full transcript of the discussion.

ActiveMatrix 2.0 includes expanded and deep support for service component architecture (SCA) to automate and unify the assembly, deployment, hosting, and managing of SOA projects and services.

With ActiveMatrix 2.0, TIBCO says users can manage SOAs with combinations of Java, .NET, and broad service mediation and orchestration of both existing custom and packaged applications.

TIBCO has great hopes for the new release. According to Matt Quinn, vice president of product strategy:

“As organizations embark upon broader SOA initiatives, business and IT users need tools that can help them harness services across multi-vendor platforms and scale to meet the most demanding enterprise environments. The TIBCO ActiveMatrix product family continues to evolve to help all types of businesses at any phase in the SOA life-cycle to reduce the complexity with developing, deploying and managing service-oriented applications.”

All ActiveMatrix 2.0 products can be purchased separately or as part of three packages – the TIBCO SOA Starter Bundle, the TIBCO Integration Bundle, or the TIBCO Composite Application Bundle.

On the vendor sports angle, TIBCO remains one of the few standalone commercial middleware/SOA infrastructure vendors. With BEA now under the auspices of Oracle and Cape Clear under Workday, IONA Technologies also remains independent (but appears to be in play, with Software AG rumored to be in pursuit).

The field of vendors not gobbled up by larger inclusive providers includes focused and component-based providers such as SOA Software, and a slew of open source and commercial open source providers -- from MuleSource to WSO2 to Red Hat. Disclosure: WSO2 and IONA Technologies have been sponsors of BriefingsDirect podcasts.

And many of the open source providers are rapidly expanding their SOA infrastructure purview, not the least of which is this week's Red Hat announcements about its SOA stack, built significantly from the JBoss community.

I remain bullish on open source SOA, and we may soon see the real cost-benefits competition occur not between the large, comprehensive suppliers like IBM, Oracle and Microsoft -- but more generally between open source SOA providers and commercial ones. As with IONA and Mule, we also see that combinations of open source development and commercial distribution and support can be a powerful and productive tag-team.

SOA infrastructure lends itself well to community development and advancement. I expect to see more SOA action, too, at the upcoming EclipseCon event in March.

Thursday, February 14, 2008

Citrix assembles virtualization products under Delivery Center umbrella

Citrix Systems, Inc. has announced a restructuring and rebranding of its key virtualization products, as well as the introduction of a new orchestration technology.

My question is when the combined synergies of cloud computing, virtualization, SaaS, and advertising will bubble up to show Citrix's substantial strategic worth. Who will get it and bet on it first?

First things first. The Citrix Delivery Center will serve as an umbrella for a new family of products that includes XenApp (formerly Citrix Presentation Server), as well as XenDesktop, NetScaler, and XenServer.

The Fort Lauderdale, Fla.-based company said that it is renaming the presentation server to capitalize on the connection of Xen with virtualization and to make it fit in with the rest of the product line.

Citrix has also announced the release of XenServer Platinum Edition. This will give users the functionality they need to provision both virtual and physical machines. It includes the ability to stream a workload to any server or server farm and will provision servers simultaneously from a single standard workload image.

It will also include capacity on demand and the ability to dynamically manage provisioning for disaster recovery and business continuity.

Back to the meaty stuff. On an even more fascinating note came consultant Sramana Mitra's pithy ruminations this week on how a Citrix and SAP merger would work. And the timing of that tidbit coincides nicely with rumors of an Oracle buy of ...

I think both of these scenarios make a lot of sense, and demonstrate that strategic advantage in three years will be cast through a cloud. In other words, everyone who's anyone in applications needs a services fabric story. They will also need the ability to mine whatever relationship will emerge between enterprise applications delivery and advertising. I'd say even an IBM-Citrix matchup makes sense.

Citrix has assembled the means to pivot and weave to work this market disruption in many directions. It can stay on-premises, go to the cloud, deliver the desktop as a service, among other permutations. And there's always the Microsoft relationship.

Indeed, you should have seen the glint in the eyes of the Citrix executives last fall in Key Biscayne when I asked when they will inject ads into their applications delivered as services. I almost saw dollar signs amid the Florida sun-inspired crinkles by their eyes.

And let's not get hung up on the "there will never be ads in business apps" bull. Like I told Henry Blodget in a recent comment to a blog of his:
... advertising will surely morph into a smorgasbord of sponsored web services, mashups for hire, affiliated networks, search-oriented lead generation, pay-as-you-use online infrastructure, multimedia infomercial snippets, and -- most importantly -- more intelligent matching of a buyer's needs and a seller's outreach.

... In a matter of months or few short years, the cloud will permit much richer buyer-seller interactions, things we should not rightly call advertising. Users can get what they need to be more productive, at a price. Sellers will find direct lines to those ready to buy, for pennies per sale. It is semantic selling in one direction, and vendor relationship management, as Doc Searls says, in the other.

And this will be a productivity boon to B2B, B2E and B2C commerce. We will soon be able to grease the skids of automated matching of buying and selling, across nearly all goods and services.
Esther Dyson has some good thoughts on the subject, too, in a recent WSJ op-ed piece.

Back to the more mundane (but necessary) news: XenServer 4.1, according to Citrix, offers more than 50 enhancements. A full listing of the features and functionality of Platinum Edition and the latest release of XenServer 4.1 is now available on the Citrix Website.

The new Citrix orchestration technology, known as Workflow Studio, is designed to tie together the company's application delivery solutions and integrate them with users' existing technology components.

Workflow Studio is built on Microsoft .NET, PowerShell, and Windows Workflow Foundation technologies. This extensible design also makes it easier for customers to link Citrix products into broader systems management solutions from partners like HP and IBM, and is designed to allow everything to function seamlessly within large enterprise environments.

ZDNet's Paula Rooney sees the news as a move away from the company's open-source mission and said that Citrix officials were trying to back away from Xen by branding its products under the Citrix Delivery Center banner.

While Citrix is using the Xen name for its individual products, it is positioning the entire stack — including its NetScaler web acceleration platform — as the Citrix Delivery Center. From that, it appears that Citrix is diluting XenSource’s core identity as a virtualization company in order to score points with Microsoft and catapult Microsoft’s forthcoming HyperV hypervisor as VMware’s chief rival.

This led Simon Crosby, Citrix CTO, to respond:

Xen is profoundly important to Citrix, is changing everything about the way that Citrix develops and delivers its products. Citrix is fully supportive of open source and the community, and you will see much more than just Xen as a core community focus from Citrix in the not too distant future.

I've been bullish on the Citrix/XenSource combo since they joined forces last year. Back then, I said:

The acquisition also sets the stage for Citrix to move boldly into the desktop as a service business, from the applications serving side of things. We’ve already seen the provider space for desktops as a service heat up with the recent arrival of venture-backed Desktone. One has to wonder whether Citrix will protect Windows by virtualizing the desktop competition, or threaten Windows by the reverse.

The individual products in the Citrix Delivery Center family can be purchased today. A tech preview of the new Workflow Studio solution will be available in Q2 2008.

Citrix XenServer 4.1 is currently available as a public beta from the Citrix web site and will be generally available in March 2008. Citrix XenServer Platinum Edition will be generally available shortly after in Q2.

That's provided someone hasn't bought Citrix first.

Friday, February 8, 2008

New Eclipse-based tools from Genuitec offer developers more choices in migrating to IBM WebSphere 6.1

Listen to the podcast. Or read a full transcript. Sponsor: Genuitec.

The arrival of the IBM WebSphere Application Server 6.1 presents Eclipse-oriented developers with some big decisions. The newest version of this popular runtime will depend largely on Rational Application Developer (RAD) for tooling.

While this recent runtime environment release is designed to ease implementations of service-oriented architecture (SOA) and improve speed for Web services, the required Rational toolset -- formerly known as the WebSphere Studio Application Developer -- comes with a significant price tag and some weighty developer adjustments.

Genuitec, however, is now delivering MyEclipse Blue Edition as an alternative upgrade path for tools as enterprise architects and operators begin to adjust to these major new releases from IBM. MyEclipse Blue Edition is not competing with IBM as much as catering to an under-served market of people who may not be able to afford the quick and full Rational tool adjustment, says Genuitec.

And so Genuitec, the company behind the MyEclipse IDE, is offering a stepping-stone approach to help with this WebSphere environment tools transition. To help understand this transition, the market, and the products, I recently moderated a sponsored podcast discussion with James Governor, a co-founder and industry analyst at RedMonk, and Maher Masri, president of Genuitec.

Here are some excerpts:
The economics around tools have shifted dramatically. It seems that the value add is not so much in the IDE now, but in building bridges across environments, making framework choices easier for developers, and finding ways of mitigating some of these complexity issues, when it comes to the transition on the platform side.

Eclipse obviously has become the default standard for the development environment and for building tools on top of it. I don’t think you need to go very far to find the numbers that support those kinds of claims, and those numbers continue to increase on a year-to-year basis around the globe.

When it started, it started not as a one-company project, but a true consortium model, a foundation that includes companies that compete against each other and companies in different spaces, growing in the number of projects and trying to maintain a level of quality that people can build upon to provide software on top of it from a tools standpoint.

A lot of people forget that Eclipse is not just a tools platform. It's actually an application framework. So it could be, as we describe it internally, a floor wax and a dessert topping.

The ability for it to become that motherboard for applications in the future makes it possible for it to move above and beyond a tools platform into what a lot of companies already use it for -- a runtime equation.

IBM was the company that led the way for all of the IBM WebSphere implementations and many of their internal implementations. A lot of technologies are now based on Eclipse and based on the Eclipse runtime.

Customers tell us ... "I am moving into 6.1, and the reason for that is I am re-implementing or have a revival internally for Web services, SOA, rich-net applications, and data persistence requirements that are evolving out of the evolution of the technology in the broader space, and specifically as implemented into the new technology for 6.1."

Every one of them tells us exactly the same story. "I cannot use your Web service implementation because, a) I have to use these web services within WebSphere or I lose support, and b) I have invested quite a bit of money in my previous tools like WebSphere Application Developer (WSAD), and that is no longer supported now.

"I have to transition into, not only a runtime requirement, but also a tools requirement." With that comes a very nice price tag that not only requires them to retool their development and their engineers, but also reinvest into that technology.

But the killer for almost all of them is, "I have to start from scratch, in the sense that every project that I have created historically, my legacy model. I can no longer support that because of the different project model that’s inside."

From an IBM perspective, it’s a classic case of kind of running ahead of the stack. If you see the commoditization further down the stack, you want to move on up. So IBM looks at the application developer role and the application development function and thinks to itself, "Hang on a second. We really need to be moving up in terms of the value, so we can charge a fair amount of money for our software," or what they see is a fair amount of money.

IBM’s strategy is very much to look at business process as opposed to the focus on just a technical innovation. That certainly explains some of the change that's being made. They want to drive an inflection point. They can't afford to see orders-of-magnitude cheaper software doing the same thing that their products do.

They are looking for life cycle approaches, ways of bridging design time and runtime. IBM is addressing some of these needs, but, as you point out, developers are often saying, "Hey, I just want my tool. I want to stick with what I know." So we’re left with a little bit of a disconnect.

We [at Genuitec] looked at the market. Our customers looked back at us and basically gave us the same input: "If you provide us this delta of functionalities, specifically speaking, if you’re able to make my life a little easier in terms of importing projects that exist inside of WebSphere Application Developer into your tool environment, if you can support the web services standard that’s provided by WebSphere.

" ... if you could provide a richer deployment model into WebSphere so my developers could feel as if they’re deploying it from within the IBM toolset, I don’t have the need to move outside of your toolset. I can continue to deploy, develop and run all my applications from a developer's standpoint, not from an administrator's."

There are companies that are always going to be a pure IBM shop, and no one is going to be able to change their mind. The ability to provide choice is very important for those that need to make that decision going forward, but they need some form of affordability to make that decision possible. I believe [Genuitec] provides that choice in spades in our current pricing model and our ability to continue to support without the additional premium above that.
Listen to the podcast. Or read a full transcript. Sponsor: Genuitec.

Wednesday, February 6, 2008

Middleware field consolidates in services direction as Workday acquires Cape Clear

On-demand business applications provider Workday has acquired SOA and enterprise integration middleware vendor Cape Clear Software, the companies announced Wednesday.

The acquisition is novel in several respects. A middleware software vendor is being absorbed by a software as a service (SaaS) provider to expand its enterprise solutions role -- not to sell the software itself -- demonstrating that the future of software is increasingly in the services. It also shows that integration as a function is no longer an after-thought to business applications use -- it is fundamental to any applications activities, be they on-premises, online, or both.

Workday, Inc., of Walnut Creek, Calif., is an on-demand financial management and human capital management solutions vendor. It was founded by David Duffield, best known as the co-founder and former chairman of PeopleSoft, which grew to be the world’s second-largest application software company before being acquired by Oracle in 2005.

Cape Clear Software, of Dublin, Ireland, and Waltham, Mass., develops and supports enterprise service bus (ESB) platform software, designed to help large organizations to integrate their heterogeneous application, content, and processes environments. Disclosure: Cape Clear Software is a sponsor of BriefingsDirect podcasts.

Details of the deal between the two privately owned companies were not disclosed, but the deal is expected to become final in less than 30 days. Cape Clear becomes a part of Workday, forming its new integration unit. Cape Clear CEO Annrai O'Toole will become Workday's vice president of integration and head up the new unit.

The Cape Clear SOA solution set will no longer be offered standalone, and will be available only as part of Workday Integration On Demand online offerings. Both companies stressed, however, that all of Cape Clear's current 250 customers will be supported on-premises at those client sites as Workday customers.

"All customers will get support for as long as they want, but we will not take on any more [on-premises] customers," said O'Toole.

Instead of taking the Cape Clear portfolio to market as SOA infrastructure and middleware offerings, Workday plans to broaden its ability to help its customers exploit SaaS, both on an application-by-application basis and in playing an "integration as a service" role. That role also expands into, in effect, a brokering position between complex hybrid arrangements where digital assets and resources can come from many hosts.

"We need to be part of an application sale, not a standalone middleware sale. [Standalone middleware sales] don't exist anymore," said O'Toole.

Only IONA Technologies, of which O'Toole was a co-founder, and TIBCO Software remain as major standalone middleware vendors, now that BEA Systems has been acquired by Oracle, said O'Toole. Consolidation has incorporated middleware into larger stack or business applications offerings. And SOA infrastructure has also seen a bout of consolidation, even as open source SOA components and commercial open source providers have entered the field aggressively.

The combined Workday-Cape Clear plans to take "Integration on Demand" to market on several levels:

Workday said it intends to assume greater responsibility for customer integrations, expanding its investment and focus on integrations in three areas:
  • Packaged: Workday offers a growing number of common integrations to solutions such as payroll. These connections are managed by Workday as a service, and the company will continue to add to this portfolio.
  • Custom: Workday and its partners deliver tailored connections between Workday and third-party or custom applications. These links can be provided as a Workday service or implemented on premise, based on customer requirements.
  • Personal: Workday offers business users easy ways to link productivity applications, such as Microsoft Excel, to live Workday data, making it simple to create and share reports and tools with users across the enterprise.
Future focus will also include:
  • Hosted integration services for large enterprise customers, beginning with human resources activities.
  • Partner integration, to bring, for example, payroll providers like ADP and other business service ecology players into a larger offerings mix, managed by Workday.
  • RESTful integration with the Web 2.0 community, including mashups, social networks, web services, and mobile commerce services and endpoints. Such mashups will also allow the integration of personalized and custom content via Microsoft Office and Google docs/applications.
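The RESTful-mashup idea in that last item can be pictured with a toy join of two JSON feeds, as two REST endpoints might return them. The feed shapes and field names below are invented for illustration and are not Workday's API:

```python
import json

# Two hypothetical JSON payloads, as might come back from separate REST endpoints.
hr_feed = json.loads('[{"emp_id": 7, "name": "Ada"}, {"emp_id": 9, "name": "Lin"}]')
payroll_feed = json.loads('[{"emp_id": 7, "salary": 98000}, {"emp_id": 9, "salary": 91000}]')

def mashup(left, right, key):
    """Join two lists of records on a shared key -- the essence of a data mashup."""
    index = {rec[key]: rec for rec in right}
    return [{**rec, **index.get(rec[key], {})} for rec in left]

combined = mashup(hr_feed, payroll_feed, "emp_id")
print(combined[0])  # {'emp_id': 7, 'name': 'Ada', 'salary': 98000}
```

The hard part in practice is not the join but the authentication, error handling, and policy around it -- which is exactly the layer an integration-as-a-service provider would manage.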
“Integrating business applications has always been much too difficult. At Workday, we made integration a core capability from the very start, and adding Cape Clear to our portfolio serves to deepen our focus and capability in this vital area,” said Aneel Bhusri, Workday president, in a release. “Increasingly, customers are looking to Workday to build and manage integration as a service. With Cape Clear’s ESB, we expect to rapidly increase our portfolio of both packaged and custom integration capabilities.”

In effect, Workday is expanding its role to not only provide business applications, but to assume the functions of integrating those applications with a client's existing and future environments.

I recently moderated a podcast discussion with fellow ZDNet blogger Phil Wainewright and O'Toole on the new and evolving subject of "integration as a service."

As Cape Clear's functional set becomes the basis for services integration and management, Workday aims to solve the integration-as-a-business-requirement problem in new ways. "Enterprises are sick of buying software and then being left with the integration," said O'Toole.

I also see an opportunity for Workday to grow its role substantially, to take what it has pioneered to a greater scale and to the back-end business services that companies will increasingly seek to acquire as online services. Once the integration function is inculcated as services, many other elements of business and process, content and media, can be assembled, associated, and managed too.

The provisioning details and policies that manage the relationships among people, processes, resources, providers and application logic components also become critical. It will be interesting to see if the Workday value will extend to this level, in addition to integration.

Indeed, service providers and cloud computing-based providers will need to crack the integration AND federated policies nut to fully realize their potential for reaching enterprise and consumer users. The ability to solve the integration and policies problems could place Workday at a very advantageous hub role among and between many of the major constituents in the next generation of enterprise computing and online services.

Tuesday, February 5, 2008

New ways emerge to head off datacenter problems while improving IT operational performance

Listen to the podcast. Or read a full transcript. Sponsor: Integrien.

Complexity in today's IT systems makes previous error prevention approaches for operators inefficient and costly. IT staffs are expensive to retain, and are increasingly hard to find. There is also insufficient information about what’s going on in the context of an entire systems setup.

Operators are using manual processes -- in reactive firefighting mode -- to maintain critical service levels. It simply takes too long to interpret and resolve IT failures and glitches. We now see 70-plus-percent of the IT operations budget spent on labor costs.

IT executives are therefore seeking more automated approaches, not only to remediate problems but also to get earlier detection. These same operators don't want to replace their systems management investments; they want to better use them in a cohesive manner, to learn more from them, and to better extract the information that these systems emit.

To help better understand the new solutions and approaches to detection and remediation of IT operations issues, I recently chatted with Steve Henning, the Vice President of Products for Integrien, in a sponsored BriefingsDirect podcast.

Here are some excerpts:
IT operations is being told to either keep their budgets static or to reduce them. Traditionally, the way that the vice president of IT operations has been able to keep problems from occurring in these environments has been by throwing more people at it.

This is just not scalable. There is no way ... (to) possibly hire the people to support that. Even with the budget, he couldn’t find the people today.

If you look at most IT environments today, the IT people will tell you that three or four minutes before a problem occurs, they will start to understand that little pattern of events that leads to the problem.

But most of the people that I speak to tell me that’s too late. By the time they identify the pattern that repeats and leads to a particular problem -- for example, a slowdown of a particular critical transaction -- it’s too late. Either the system goes down or the slowdown is such that they are losing business.

Service oriented architecture (SOA) and virtualization increase the management problem by at least a factor of three. So you can see that this is a more complex and challenging environment to manage.

So it’s a very troubling environment these days. It’s really what’s pushing people toward looking at different approaches, of taking more of a probabilistic look, measuring variables, and looking at probable outcomes -- rather than trying to do things in a deterministic way, measuring every possible variable, looking at it as quickly as possible, and hoping that problems just don’t slip by.

If you look at the applications that are being delivered today, monitoring everything from a silo standpoint and hoping to be able to solve problems in that environment is absolutely impossible. There has to be some way for all of the data to be analyzed in a holistic fashion, understanding the normal behaviors of each of the metrics that are being collected by these monitoring systems. Once you have that normal behavior, you’re alerting only to abnormal behaviors that are the real precursors to problems.

One of the alternatives is separating the wheat from the chaff and learning the normal behavior of the system. If you look at Integrien Alive, we use sophisticated, dynamic thresholding algorithms. We have multiple algorithms looking at the data to determine that normal behavior and then alerting only to abnormal precursors of problems.
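The idea of learning a metric's normal behavior and alerting only on abnormal deviations can be sketched in a few lines. This is a toy illustration of dynamic thresholding using a rolling mean and standard deviation, not Integrien Alive's actual (and far more sophisticated) algorithms; the class name and parameters are illustrative assumptions.

```python
from collections import deque
import math

class DynamicThreshold:
    """Learn a metric's normal behavior from a sliding window of recent
    samples and flag values that deviate abnormally. A minimal sketch of
    the dynamic-thresholding idea described above; real products combine
    multiple such algorithms."""

    def __init__(self, window=100, sigma=3.0):
        self.samples = deque(maxlen=window)  # recent history defines "normal"
        self.sigma = sigma                   # how many std-devs counts as abnormal

    def observe(self, value):
        """Record a sample; return True if it is an abnormal precursor."""
        abnormal = False
        if len(self.samples) >= 30:          # need enough history to judge
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = math.sqrt(var) or 1e-9     # avoid divide-by-zero on flat data
            abnormal = abs(value - mean) > self.sigma * std
        self.samples.append(value)
        return abnormal
```

Fed a stream of, say, transaction response times, this stays silent while behavior is normal and raises an alert only when a sample falls well outside the learned band -- rather than requiring an operator to hand-tune a static threshold per metric.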

Once you've learned the normal behavior of the system, these abnormal behaviors far downstream of where the problem actually occurs are the earliest precursors to these problems. We can pick up that these problems are going to occur, sometimes an hour before the problem actually happens.

The ability to get predictive alerts ... that’s kind of the nirvana of IT operations. Once you’ve captured models of the recurring problems in the IT environment, a product like Integrien Alive can see the incoming stream of real-time data and compare that against the models in the library.

If it sees a match with a high enough probability it can let you know ahead of time, up to an hour ahead of time, that you are going to have a particular problem that has previously occurred. You can also record exactly what you did to solve the problem, and how you have diagnosed it, so that you can solve it.
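The matching step -- comparing the incoming event stream against a library of captured problem models and alerting when the match probability is high enough -- can be sketched roughly as follows. Everything here (function names, the model dictionary shape, the 0.75 threshold) is a hypothetical illustration, not Integrien Alive's actual API.

```python
# Sketch: match recently observed abnormal events against a library of
# previously captured problem "models", each with its recorded remediation.

def match_probability(recent_events, model):
    """Fraction of a stored model's precursor events seen recently."""
    seen = sum(1 for event in model["precursors"] if event in recent_events)
    return seen / len(model["precursors"])

def predictive_alerts(recent_events, model_library, threshold=0.75):
    """Return (problem name, probability, recorded fix) for each stored
    problem whose precursor pattern is matching with high probability."""
    alerts = []
    for model in model_library:
        p = match_probability(recent_events, model)
        if p >= threshold:
            alerts.append((model["name"], p, model["remediation"]))
    return alerts
```

So if a stored model for a past checkout slowdown lists four precursor events and three of them have just recurred, the operator gets a predictive alert, along with whatever diagnosis and fix was recorded the last time, before the slowdown itself materializes.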

We're actually enhancing the expertise of these folks. You're always going to need experts in there. You're always going to need the folks who have the tribal knowledge of the application. What we are doing, though, is enabling them to do their job better, with earlier understanding of where problems are occurring, by solving this massive data-correlation issue when a problem occurs.
Listen to the podcast. Or read a full transcript. Sponsor: Integrien.

Monday, February 4, 2008

Microsoft-Yahoo! combination could yield an Orwellian Web world

About 10 years ago, we used to ask Jim Barksdale, then head of Netscape, a stock question during news conferences. Did you bag any "default browser" deals lately? Inevitably Jim would demur and say they were still trying.

Those were the days when the light was swiftly fading from the Netscape browser's beacon, and Microsoft's newcomer (and inferior) browser, Internet Explorer, was bagging the default status for PC distributors and online services. That was enough to cement Microsoft's dominance of the Web browser market globally in a few short years.

The fact is, in an online world, convenience is the killer application. For most folks starting up their PCs, whatever comes up on the screen first and easiest is what they tend to use. That's why we have craplets, it's why Netscape bit the dust, and it's why Microsoft's unsolicited bid to buy Yahoo! is Redmond's last grasp at their old worldwide Web dominion strategy. The only way for Microsoft to hold onto its PC monopoly is to gain a Web monopoly too.

And Microsoft would have a good shot at cementing those two as a monopoly with the acquisition of Yahoo!. Because for the vast majority of people who simply do no more than fire up their PCs, click to start their browsers, open a Microsoft Word document, an Outlook calendar entry, an online email or instant message -- they will be entering (mostly unbeknownst to themselves) a new default Web services environment.

With both Yahoo!'s and Microsoft's directories of users integrated, the miracle of single sign-on makes them in the probable near future part of the Microsoft advertising network, the Microsoft ID management complex, the Microsoft "software plus services" environment -- all by default, all quite convenient. And once you're in as a user, and once the oxygen is cut off to the competition, the world begins to look a lot more like Windows Everywhere over time.

Indeed, the proposed Microsoft takeover of Yahoo! is really the continuation of the failed (but strategically imperative) Hailstorm initiative. You might recall how Microsoft wanted to use single sign-on to link any users of Hotmail, Instant Messenger, or Microsoft's myriad Web portals and services (MSN) -- all onramps to the same federated ID management overlay for reaching all kinds of services. It was the roach motel attempt to corner the burgeoning network -- use Internet protocols, sure, but create a separate virtual Web of, by, and for Microsoft. The initiative caused quite a donnybrook because it seemed to limit users' ability to freely navigate among other Internet services -- at least on a convenient basis (and for a price).

So Microsoft's first stab at total Web dominance succeeded at gaining the default browser, but failed at the larger enterprise. Microsoft thought it was only a matter of time, however. And it planned prematurely to begin pulling users back from the Web into the Microsoft world of single sign-on access to Microsoft services -- from travel, to city directories, to maps, to search. Microsoft incorrectly thought that the peril of the Web as a Windows-less platform had been neutralized, its competitors' oxygen cut off. Microsoft began to leverage its own Web services and monopoly desktop status to try to keep users on its sites, using its Web server, its Web browser, and its content offerings -- making for the Microsoft Wide Web, while the real Web withered away for use by scientists (again).

But several unexpected things happened to thwart this march into a Big Brother utopia -- a place where users began and ended their digital days (as workers and consumers) within the Microsoft environment. Linux and Apache Web Server stunted the penetration of the security risk Internet Information Services (nee Server) (IIS). AOL created a bigger online home-based community. Mozilla became a fine and dandy Web browser alternative (albeit not the default choice). Java became a dominant language for distributed computing, and an accepted runtime environment standard.

Dial-up gave way to broadband for both homes and businesses. The digital gusher was provided by several sources (many of which were hostile to Microsoft and its minions). Software as a service (SaaS) became viable and succeeded. And, most importantly, Google emerged as the dominant search engine and created the new economics of the Web -- search results juxtaposed with automated link ads.

Microsoft had tried to gain the Web's revenues via dominance of the platform, rather than via the compelling relationship of convenience of access to all the relevant information. Microsoft wanted Windows 2.0 instead of Web 2.0.

Social networks like MySpace, LinkedIn, and Facebook replaced AOL as the communities of choice. A resurgent IBM and Apple were containing Microsoft at the edges, and even meaningfully turning back its hegemony. Mobile networks became how many of the world's newest Internet users access content and services, sans a Microsoft client.

And so a mere three years ago, Microsoft's plans for total dominance were dashed, even though it seemingly had it all. Just like Tom Brady, it just couldn't hold on to cement the sweep, and its perfect season ended before the season itself was over. What Microsoft could not control was the Internet, the thirst for unfettered knowledge, and the set of open standards -- TCP/IP and HTML -- that sidesteps Windows.

Yet at every step of the way Microsoft tried to buy, bully, create, or destroy in order to control the onramps, applications, developers, content, media, and convenience of the Web -- even if the genie was out of the bottle. They did their own dial-up networks, they had proxies buy up cable franchises, they tried to dominate mobile software. They created television channels, publishing divisions, and business applications. They largely failed against an open market in everything but their original successes: PC platform, productivity apps, tools, and closed runtime.

And so the bid for Yahoo! both underscores that failure as well as demonstrates the desperate last attempt to dominate more than their desktop software monopoly. This is a make or break event for Microsoft, and has huge ramifications for the futures of several critical industries.

If Redmond succeeds with acquiring Yahoo!, imagine a world that was already once feared, back some 10 years ago. That is an Orwellian world in which a huge majority of all users of the Internet globally can -- wittingly or otherwise -- only gain their emails, their word processing, their news, their services, their spreadsheets, their data, their workflow -- all that which they do online essentially -- only by passing through the Microsoft complex and paying their tolls along the way.

All those who wish to reach that mass audience, be it on a long-tail or conventional mass-market basis, must use whatever de facto standards Microsoft has anointed. They must buy the correct proprietary servers and infrastructure, they must develop on the prescribed frameworks. They must view the world through Microsoft Windows, at significant recurring cost. Would Microsoft's historic economic behavior translate well to such control over knowledge, experience, and personal choice?

This may sound shrill, but a dominant federated ID management function is the real killer application of convenience that is at stake today. Google knows it, and quietly and mostly responsibly linked many of its services to a single sign-on ID cloud. When you get a Gmail account, it becomes your passport to many Google services, and it contains much about your online definition, as well as aids and abets the automated advertising juggernaut that Microsoft rightly fears. But at least Google (so far) lets the content and media develop based on the open market. It doesn't exact a mandatory toll so much as take a portion of valued voluntary transactions, and it remains in support of open standards and choice of platforms.

We now may face a choice between a "do no evil" philosophy of seemingly much choice, or an extend-the-monopoly approach that has tended to limit choice. The Microsoft monopoly has already needed to be reined in by global regulators who fear a blind-ambition powerhouse, or who fear unmitigated control over major aspects of digital existence. Orwell didn't know how political power would be balanced or controlled in his future vision, perched as he was at the unfortunate mid-20th century.

How the power of the Internet is balanced is what now is at stake with the Microsoft-Yahoo! bid. Who can you trust with such power?