Thursday, February 14, 2008

Citrix assembles virtualization products under Delivery Center umbrella

Citrix Systems, Inc. has announced a restructuring and rebranding of its key virtualization products, as well as the introduction of a new orchestration technology.

My question: When will the combined synergies of cloud computing, virtualization, SaaS, and advertising bubble up to show Citrix's substantial strategic worth? Who will get it and bet on it first?

First things first. The Citrix Delivery Center will serve as an umbrella for a new family of products that includes XenApp (formerly Citrix Presentation Server), as well as XenDesktop, NetScaler, and XenServer.

The Fort Lauderdale, Fla.-based company said that it is renaming the presentation server to capitalize on the connection of Xen with virtualization and to make it fit in with the rest of the product line.

Citrix has also announced the release of XenServer Platinum Edition. This will give users the functionality they need to provision both virtual and physical machines. It includes the ability to stream a workload to any server or server farm and will provision servers simultaneously from a single standard workload image.

It will also include capacity on demand and the ability to dynamically manage provisioning for disaster recovery and business continuity.

Back to the meaty stuff. On an even more fascinating note came consultant Sramana Mitra's pithy ruminations this week on how a Citrix and SAP merger would work. And the timing of that tidbit coincides nicely with rumors of an Oracle buy of Salesforce.com.

I think both of these scenarios make a lot of sense, and demonstrate that strategic advantage in three years will be cast through a cloud. In other words, everyone who's anyone in applications needs a services fabric story. They will also need the ability to mine whatever relationship will emerge between enterprise applications delivery and advertising. I'd say even an IBM-Citrix matchup makes sense.

Citrix has assembled the means to pivot and weave to work this market disruption in many directions. It can stay on-premises, go to the cloud, deliver the desktop as a service, among other permutations. And there's always the Microsoft relationship.

Indeed, you should have seen the glint in the eyes of the Citrix executives last fall in Key Biscayne when I asked when they will inject ads into their applications delivered as services. I almost saw dollar signs amid the Florida sun-inspired crinkles by their eyes.

And let's not get hung up on the "there will never be ads in business apps" bull. Like I told Henry Blodget in a recent comment to a blog of his:
... advertising will surely morph into a smorgasbord of sponsored web services, mashups for hire, affiliated networks, search-oriented lead generation, pay as you use online infrastructure, multimedia infomercial snippets, and -- most importantly -- more intelligent matching of a buyer's needs and a seller's outreach.

... In a matter of months or few short years, the cloud will permit much richer buyer-seller interactions, things we should not rightly call advertising. Users can get what they need to be more productive, at a price. Sellers will find direct lines to those ready to buy, for pennies per sale. It is semantic selling in one direction, and vendor relationship management, as Doc Searls says, in the other.

And this will be a productivity boon to B2B, B2E and B2C commerce. We will soon be able to grease the skids of automated matching of buying and selling, across nearly all goods and services.

Esther Dyson has some good thoughts on the subject, too, in a recent WSJ op-ed piece.

Back to the more mundane (but necessary) news: XenServer 4.1, according to Citrix, offers more than 50 enhancements. A full listing of the features and functionality of Platinum Edition and the latest release of XenServer 4.1 is now available on the Citrix website.

The new Citrix orchestration technology, known as Workflow Studio, is designed to tie together the company's application delivery solutions and integrate them with users' existing technology components.

Workflow Studio is built on Microsoft .NET, PowerShell, and Windows Workflow Foundation technologies. This extensible design also makes it easier for customers to link Citrix products into broader systems management solutions from partners like HP and IBM, and is designed to allow everything to function seamlessly within large enterprise environments.

ZDNet's Paula Rooney sees the news as a move away from the company's open-source mission and said that Citrix officials were trying to back away from Xen by branding its products under the Citrix Delivery Center banner.

While Citrix is using the Xen name for its individual products, it is positioning the entire stack — including its NetScaler web acceleration platform — as the Citrix Delivery Center. From that, it appears that Citrix is diluting XenSource’s core identity as a virtualization company in order to score points with Microsoft and catapult Microsoft’s forthcoming HyperV hypervisor as VMware’s chief rival.

This led Simon Crosby, Citrix CTO, to respond:

Xen is profoundly important to Citrix, is changing everything about the way that Citrix develops and delivers its products. Citrix is fully supportive of open source and the community, and you will see much more than just Xen as a core community focus from Citrix in the not too distant future.

I've been bullish on the Citrix/XenSource combo since they joined forces last year. Back then, I said:

The acquisition also sets the stage for Citrix to move boldly into the desktop as a service business, from the applications serving side of things. We’ve already seen the provider space for desktops as a service heat up with the recent arrival of venture-backed Desktone. One has to wonder whether Citrix will protect Windows by virtualizing the desktop competition, or threaten Windows by the reverse.

The individual products in the Citrix Delivery Center family can be purchased today. A tech preview of the new Workflow Studio solution will be available in Q2 2008.

Citrix XenServer 4.1 is currently available as a public beta from the Citrix web site and will be generally available in March 2008. Citrix XenServer Platinum Edition will be generally available shortly after in Q2.

That's provided someone hasn't bought Citrix first.

Friday, February 8, 2008

New Eclipse-based tools from Genuitec offer developers more choices in migrating to IBM WebSphere 6.1

Listen to the podcast. Or read a full transcript. Sponsor: Genuitec.

The arrival of the IBM WebSphere Application Server 6.1 presents Eclipse-oriented developers with some big decisions. The newest version of this popular runtime will depend largely on Rational Application Developer (RAD) for tooling.

While this recent runtime environment release is designed to ease implementations into Services Oriented Architecture (SOA) and improve speed for Web services, the required Rational toolset -- formerly known as the WebSphere Studio Application Developer -- comes with a significant price tag and some weighty developer adjustments.

Genuitec, however, is now delivering MyEclipse Blue Edition as an alternative upgrade path for tools as enterprise architects and operators begin to adjust to these major new releases from IBM. MyEclipse Blue Edition is not competing with IBM as much as catering to an under-served market of people that may not be able to afford the quick and full Rational tool adjustment, says Genuitec.

And so Genuitec, the company behind the MyEclipse IDE, is offering a stepping-stone approach to help with this WebSphere environment tools transition. To help understand this transition, the market, and the products, I recently moderated a sponsored podcast discussion with James Governor, a co-founder and industry analyst at RedMonk, as well as Maher Masri, president of Genuitec.

Here are some excerpts:
The economics around tools have shifted dramatically. It seems that the value add is not so much in the IDE now, but in building bridges across environments, making framework choices easier for developers, and finding ways of mitigating some of these complexity issues, when it comes to the transition on the platform side.

Eclipse obviously has become the default standard for the development environment and for building tools on top of it. I don’t think you need to go very far to find the numbers that support those kinds of claims, and those numbers continue to increase on a year-to-year basis around the globe.

When it started, it started not as a one-company project, but a true consortium model, a foundation that includes companies that compete against each other and companies in different spaces, growing in the number of projects and trying to maintain a level of quality that people can build upon to provide software on top of it from a tools standpoint.

A lot of people forget that Eclipse is not just a tools platform. It's actually an application framework. So it could be, as we describe it internally, a floor wax and a dessert topping.

The ability for it to become that mother board for applications in the future makes it possible for it to move above and beyond a tools platform into what a lot of companies already use it for -- a runtime equation.

IBM was the company that led the way for all of the IBM WebSphere implementations and many of their internal implementations. A lot of technologies are now based on Eclipse and based on the Eclipse runtime.

Customers tell us ... "I am moving into 6.1, and the reason for that is I am re-implementing or have a revival internally for Web services, SOA, rich-net applications, and data persistence requirements that are evolving out of the evolution of the technology in the broader space, and specifically as implemented into the new technology for 6.1."

Every one of them tells us exactly the same story. "I cannot use your Web service implementation because, a) I have to use the Web services within WebSphere or I lose support, and b) I have invested quite a bit of money in my previous tools like WebSphere Application Developer (WSAD), and that is no longer supported now.

"I have to transition into, not only a runtime requirement, but also a tools requirement." With that comes a very nice price tag that not only requires them to retool their development and their engineers, but also reinvest into that technology.

But the killer for almost all of them is, "I have to start from scratch, in the sense that every project that I have created historically, my legacy model, I can no longer support because of the different project model that's inside."

From an IBM perspective, it’s a classic case of kind of running ahead of the stack. If you see the commoditization further down the stack, you want to move on up. So IBM looks at the application developer role and the application development function and thinks to itself, "Hang on a second. We really need to be moving up in terms of the value, so we can charge a fair amount of money for our software," or what they see is a fair amount of money.

IBM’s strategy is very much to look at business process as opposed to the focus on just a technical innovation. That certainly explains some of the change that's being made. They want to drive an inflection point. They can't afford to see orders-of-magnitude cheaper software doing the same thing that their products do.

They are looking for life cycle approaches, ways of bridging design time and runtime. IBM is addressing some of these needs, but, as you point out, developers are often saying, "Hey, I just want my tool. I want to stick with what I know." So we’re left with a little bit of a disconnect.

We [at Genuitec] looked at the market. Our customers looked back at us and basically gave us the same input: "If you provide us this delta of functionalities, specifically speaking, if you’re able to make my life a little easier in terms of importing projects that exist inside of WebSphere Application Developer into your tool environment, if you can support the web services standard that’s provided by WebSphere.

" ... if you could provide a richer deployment model into WebSphere so my developers could feel as if they’re deploying it from within the IBM toolset, I don’t have the need to move outside of your toolset. I can continue to deploy, develop and run all my applications from a developer's standpoint, not from an administrator's."

There are companies that are always going to be a pure IBM shop and no one is going to be able to change their mind. The ability to provide choice is very important for those that need to make that decision going forward, but they need some form of affordability to make that decision possible. I believe [Genuitec] provides that choice in spades in our current pricing model and our ability to continue to support without the additional premium above that.
Listen to the podcast. Or read a full transcript. Sponsor: Genuitec.

Wednesday, February 6, 2008

Middleware field consolidates in services direction as Workday acquires Cape Clear

On-demand business applications provider Workday has acquired SOA and enterprise integration middleware vendor Cape Clear Software, the companies announced Wednesday.

The acquisition is novel in several respects. A middleware software vendor is being absorbed by a software as a service (SaaS) provider to expand its enterprise solutions role -- not to sell the software itself -- demonstrating that the future of software is increasingly in services. It also shows that integration as a function is no longer an afterthought to business applications use -- it is fundamental to any applications activity, be it on-premises, online, or both.

Workday, Inc., of Walnut Creek, Calif., is an on-demand financial management and human capital management solutions vendor. It was founded by David Duffield, best known as the co-founder and former chairman of PeopleSoft, which grew to be the world’s second-largest application software company before being acquired by Oracle in 2005.

Cape Clear Software, of Dublin, Ireland, and Waltham, Mass., develops and supports enterprise service bus (ESB) platform software, designed to help large organizations to integrate their heterogeneous application, content, and processes environments. Disclosure: Cape Clear Software is a sponsor of BriefingsDirect podcasts.

Details of the deal between the two privately owned companies were not disclosed, but the deal is expected to become final in less than 30 days. Cape Clear becomes a part of Workday, forming its new integration unit. Cape Clear CEO Annrai O'Toole will become Workday's vice president of integration and head up the new unit.

The Cape Clear SOA solution set will no longer be offered standalone, and will be available only as part of Workday Integration On Demand online offerings. Both companies stressed, however, that all of Cape Clear's current 250 customers will be supported on-premises at those client sites as Workday customers.

"All customers will get support for as long as they want, but we will not take on any more [on-premises] customers," said O'Toole.

Instead of taking the Cape Clear portfolio to market as SOA infrastructure and middleware offerings, Workday plans to broaden its ability to help its customers exploit SaaS, both on an application-by-application basis and in playing an "integration as a service" role. That role also expands into, in effect, a brokering position among complex hybrid arrangements where digital assets and resources can come from many hosts.

"We need to be part of an application sale, not a standalone middleware sale. [Standalone middleware sales] don't exits anymore," said O'Toole.

Only IONA Technologies, of which O'Toole was a co-founder, and TIBCO Software remain as major standalone middleware vendors, now that BEA Systems has been acquired by Oracle, said O'Toole. Consolidation has incorporated middleware into larger stack or business applications offerings. And SOA infrastructure has also seen a bout of consolidation, even as open source SOA components and commercial open source providers have entered the field aggressively.

The combined Workday-Cape Clear plans to take "Integration on Demand" to market on several levels:

Workday said it intends to assume greater responsibility for customer integrations, expanding its investment and focus on integrations in three areas:
  • Packaged: Workday offers a growing number of common integrations to solutions such as payroll. These connections are managed by Workday as a service, and the company will continue to add to this portfolio.
  • Custom: Workday and its partners deliver tailored connections between Workday and third-party or custom applications. These links can be provided as a Workday service or implemented on premise, based on customer requirements.
  • Personal: Workday offers business users easy ways to link productivity applications, such as Microsoft Excel, to live Workday data, making it simple to create and share reports and tools with users across the enterprise.
Future focus will also include:
  • Hosted integration services for large enterprise customers, beginning with human resources activities.
  • Partner integration, to bring, for example, payroll providers like ADP and other business service ecology players into a larger offerings mix, managed by Workday.
  • RESTful integration with the Web 2.0 community, including mashups, social networks, web services, and mobile commerce services and endpoints. Such mashups will also allow the integration of personalized and custom content via Microsoft Office and Google docs/applications.
“Integrating business applications has always been much too difficult. At Workday, we made integration a core capability from the very start, and adding Cape Clear to our portfolio serves to deepen our focus and capability in this vital area,” said Aneel Bhusri, Workday president, in a release. “Increasingly, customers are looking to Workday to build and manage integration as a service. With Cape Clear’s ESB, we expect to rapidly increase our portfolio of both packaged and custom integration capabilities.”

In effect, Workday is expanding its role to not only provide business applications, but to assume the functions of integrating those applications with a client's existing and future environments.

I recently moderated a podcast discussion with fellow ZDNet blogger Phil Wainewright and O'Toole on the new and evolving subject of "integration as a service."

As Cape Clear's functional set becomes the basis for services integration and management, Workday aims to solve the problem of integration as a business requirement in new ways. "Enterprises are sick of buying software and then being left with the integration," said O'Toole.

I also see an opportunity for Workday to grow its role substantially, to take what Salesforce.com has pioneered to a greater scale and to the back-end business services that companies will increasingly seek to acquire as online services. Once the integration function is inculcated as services, many other elements of business and process, content and media, can be assembled, associated, and managed too.

The provisioning details and policies that manage the relationships among people, processes, resources, providers and application logic components also become critical. It will be interesting to see if the Workday value will extend to this level, in addition to integration.

Indeed, service providers and cloud computing-based providers will need to crack the integration AND federated policies nut to fully realize their potential for reaching enterprise and consumer users. The ability to solve the integration and policies problems could place Workday at a very advantageous hub role among and between many of the major constituents in the next generation of enterprise computing and online services.

Tuesday, February 5, 2008

New ways emerge to head off datacenter problems while improving IT operational performance

Listen to the podcast. Or read a full transcript. Sponsor: Integrien.

Complexity in today's IT systems makes previous error prevention approaches for operators inefficient and costly. IT staffs are expensive to retain, and are increasingly hard to find. There is also insufficient information about what’s going on in the context of an entire systems setup.

Operators are using manual processes -- in reactive firefighting mode -- to maintain critical service levels. It simply takes too long to interpret and resolve IT failures and glitches. We now see 70-plus-percent of the IT operations budget spent on labor costs.

IT executives are therefore seeking more automated approaches to not only remediate problems, but also to get earlier detection. These same operators don't want to replace their systems management investments; they want to better use them in a cohesive manner, to learn more from them, and to better extract the information that these systems emit.

To help better understand the new solutions and approaches to detection and remediation of IT operations issues, I recently chatted with Steve Henning, the Vice President of Products for Integrien, in a sponsored BriefingsDirect podcast.

Here are some excerpts:
IT operations is being told to either keep their budgets static or to reduce them. Traditionally, the way that the vice president of IT operations has been able to keep problems from occurring in these environments has been by throwing more people at it.

This is just not scalable. There is no way ... (to) possibly hire the people to support that. Even with the budget, he couldn’t find the people today.

If you look at most IT environments today, the IT people will tell you that three or four minutes before a problem occurs, they will start to understand that little pattern of events that lead to the problem.

But most of the people that I speak to tell me that’s too late. By the time they identify the pattern that repeats and leads to a particular problem -- for example, a slowdown of a particular critical transaction -- it’s too late. Either the system goes down or the slowdown is such that they are losing business.

Service oriented architecture (SOA) and virtualization increase the management problem by at least a factor of three. So you can see that this is a more complex and challenging environment to manage.

So it’s a very troubling environment these days. It’s really what’s pushing people toward looking at different approaches, of taking more of a probabilistic look, measuring variables, and looking at probable outcomes -- rather than trying to do things in a deterministic way, measuring every possible variable, looking at it as quickly as possible, and hoping that problems just don’t slip by.

If you look at the applications that are being delivered today, monitoring everything from a silo standpoint and hoping to be able to solve problems in that environment is absolutely impossible. There has to be some way for all of the data to be analyzed in a holistic fashion, understanding the normal behaviors of each of the metrics that are being collected by these monitoring systems. Once you have that normal behavior, you’re alerting only to abnormal behaviors that are the real precursors to problems.

One of the alternatives is separating the wheat from the chaff and learning the normal behavior of the system. If you look at Integrien Alive, we use sophisticated, dynamic thresholding algorithms. We have multiple algorithms looking at the data to determine that normal behavior and then alerting only to abnormal precursors of problems.

Once you've learned the normal behavior of the system, these abnormal behaviors far downstream of where the problem actually occurs are the earliest precursors to these problems. We can pick up that these problems are going to occur, sometimes an hour before the problem actually happens.
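To make the dynamic-thresholding idea a bit more concrete, here is a minimal sketch of the general technique: learn a rolling baseline for each metric and alert only on statistically abnormal readings. This is an illustration in Python under my own assumptions, not Integrien Alive's actual algorithms; the metric names, window size, and sigma multiplier are hypothetical.

```python
# Minimal sketch of dynamic thresholding: learn a rolling baseline per metric
# and flag only statistically abnormal readings as "precursor" alerts.
# Illustrative only -- not Integrien Alive's algorithms. Metric names,
# window size, and sigma multiplier are arbitrary assumptions.
from collections import deque
from statistics import mean, stdev

class DynamicThreshold:
    def __init__(self, window=60, sigmas=3.0):
        self.window = deque(maxlen=window)  # recent samples define "normal"
        self.sigmas = sigmas                # how far from normal counts as abnormal

    def observe(self, value):
        """Return True if this sample looks like an abnormal precursor."""
        abnormal = False
        if len(self.window) >= 30:          # need enough history for a baseline
            mu, sd = mean(self.window), stdev(self.window)
            if sd > 0 and abs(value - mu) > self.sigmas * sd:
                abnormal = True
        self.window.append(value)
        return abnormal

# One tracker per monitored metric (the names here are hypothetical).
trackers = {"db_response_ms": DynamicThreshold(), "queue_depth": DynamicThreshold()}

def on_sample(metric, value):
    if trackers[metric].observe(value):
        print(f"Precursor alert: {metric} abnormal at {value}")
```

A production system would layer several such algorithms, correlate abnormal metrics across tiers, and match the resulting fingerprints against a library of previously seen problems -- which is where the predictive alerts described next come from.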

The ability to get predictive alerts ... that’s kind of the nirvana of IT operations. Once you’ve captured models of the recurring problems in the IT environment, a product like Integrien Alive can see the incoming stream of real-time data and compare that against the models in the library.

If it sees a match with a high enough probability it can let you know ahead of time, up to an hour ahead of time, that you are going to have a particular problem that has previously occurred. You can also record exactly what you did to solve the problem, and how you have diagnosed it, so that you can solve it.

We're actually enhancing the expertise of these folks. You're always going to need experts in there. You’re always going to need the folks who have the tribal knowledge of the application. What we are doing, though, is enabling them to do their job better with earlier understanding of where the problems are occurring by adding and solving this massive data correlation issue when a problem occurs.
Listen to the podcast. Or read a full transcript. Sponsor: Integrien.

Monday, February 4, 2008

Microsoft-Yahoo! combination could yield an Orwellian Web world

About 10 years ago, we used to ask Jim Barksdale, then head of Netscape, a stock question during news conferences. Did you bag any "default browser" deals lately? Inevitably Jim would demur and say they were still trying.

Those were the days when the light was swiftly fading from the Netscape browser's beacon, and Microsoft's newcomer (and inferior) browser, Internet Explorer, was bagging the default status for PC distributors and online services. That was enough to cement Microsoft's dominance of the Web browser market globally in a few short years.

The fact is, in an online world, convenience is the killer application. For most folks starting up their PCs, whatever comes up on the screen first and easiest is what they tend to use. That's why we have craplets, it's why Netscape bit the dust, and it's why Microsoft's unsolicited bid to buy Yahoo! is Redmond's last grasp at their old worldwide Web dominion strategy. The only way for Microsoft to hold onto its PC monopoly is to gain a Web monopoly too.

And Microsoft would have a good shot at cementing those two as a monopoly with the acquisition of Yahoo!. Because for the vast majority of people who simply do no more than fire up their PCs, click to start their browsers, open a Microsoft Word document, an Outlook calendar entry, an online email or instant message -- they will be entering (mostly unbeknownst to themselves) a new default Web services environment.

With both Yahoo!'s and Microsoft's directories of users integrated, the miracle of single sign-on makes them in the probable near future part of the Microsoft advertising network, the Microsoft ID management complex, the Microsoft "software plus services" environment -- all by default, all quite convenient. And once you're in as a user, and once the oxygen is cut off to the competition, the world begins to look a lot more like Windows Everywhere over time.

Indeed, the proposed Microsoft takeover of Yahoo! is really the continuation of the failed (but strategically imperative) Hailstorm initiative. You might recall how Microsoft wanted to use single sign-on to link any users of Hotmail, or Instant Messenger, or Microsoft's myriad Web portals and services (MSN) to all be onramps to the same federated ID management overlay to reach all kinds of services. It was the roach motel attempt to corner the burgeoning network -- use Internet protocols, sure, but create a separate virtual Web of, by, and for Microsoft. The initiative caused quite a donnybrook because it seemed to limit users' ability to freely navigate among other Internet services -- at least on a convenient basis (and for a price).

So Microsoft's first stab at total Web dominance worked at the level of gaining the default browser, but failed at the larger enterprise. Microsoft thought it was only a matter of time, however. And it planned prematurely to begin pulling users back from the Web into the Microsoft world of single sign-on access to Microsoft services -- from travel, to city directories, to maps, to search. Microsoft incorrectly thought that the peril of the Web as a Windows-less platform had been neutralized, its competitors' oxygen cut off. Microsoft began to leverage its own Web services and monopoly desktop status to try and keep users on its sites, using its Web server, and its Web browser and its content offerings -- making for the Microsoft Wide Web, while the real Web withered away for use by scientists (again).

But several unexpected things happened to thwart this march into a Big Brother utopia -- a place where users began and ended their digital days (as workers and consumers) within the Microsoft environment. Linux and Apache Web Server stunted the penetration of the security risk Internet Information Services (nee Server) (IIS). AOL created a bigger online home-based community. Mozilla became a fine and dandy Web browser alternative (albeit not the default choice). Java became a dominant language for distributed computing, and an accepted runtime environment standard.

Dial-up gave way to broadband for both homes and businesses. The digital gusher was provided by several sources (many of which were hostile to Microsoft and its minions). Software as a service (SaaS) became viable, and Salesforce.com succeeded. And, most importantly, Google emerged as the dominant search engine and created the new economics of the Web -- search-based, automatically juxtaposed link ads.

Microsoft had tried to gain the Web's revenues via dominance of the platform, rather than via the compelling relationship of convenience of access to all the relevant information. Microsoft wanted Windows 2.0 instead of Web 2.0.

Social networks like MySpace, LinkedIn, and Facebook replaced AOL as the communities of choice. And a resurgent IBM and Apple were containing Microsoft at the edges, and even turning their hegemony back meaningfully. Mobile networks are how many of the world's newest Internet users access content and services, sans a Microsoft client.

And so, a mere three years ago, Microsoft's plans for total dominance were dashed, even though they seemingly had it all. Just like Tom Brady, they couldn't hold on to cement the sweep, and their perfect season ended before the season itself was over. What Microsoft could not control was the Internet, the thirst for unfettered knowledge, and the set of open standards -- TCP/IP and HTML -- that sidesteps Windows.

Yet at every step of the way Microsoft tried to buy, bully, create or destroy in order to control the onramps, applications, developers, content, media, and convenience of the Web - even if the genie was out of the bottle. They did their own dial-up networks, they had proxies buy up cable franchises, they tried to dominate mobile software. They created television channels, and publishing divisions, and business applications. They largely failed against an open market in everything but their original successes: PC platform, productivity apps, tools, and closed runtime.

And so the bid for Yahoo! both underscores that failure as well as demonstrates the desperate last attempt to dominate more than their desktop software monopoly. This is a make or break event for Microsoft, and has huge ramifications for the futures of several critical industries.

If Redmond succeeds in acquiring Yahoo!, imagine a world that was already once feared, back some 10 years ago. That is an Orwellian world in which a huge majority of all users of the Internet globally can -- wittingly or otherwise -- gain their emails, their word processing, their news, their services, their spreadsheets, their data, their workflow -- essentially all that they do online -- only by passing through the Microsoft complex and paying their tolls along the way.

All those who wish to reach that mass audience, be it on a long-tail or conventional mass-market basis, must use whatever de facto standards Microsoft has anointed. They must buy the correct proprietary servers and infrastructure; they must develop on the prescribed frameworks. They must view the world through Microsoft Windows, at significant recurring cost. Would Microsoft's historic economic behavior translate well to such control over knowledge, experience, and personal choice?

This may sound shrill, but a dominant federated ID management function is the real killer application of convenience that is at stake today. Google knows it, and quietly and mostly responsibly linked many of its services to a single sign-on ID cloud. When you get a gmail account, it becomes your passport to many Google services, and it contains much about your online definition, as well as aids and abets the ability to power the automated advertising juggernaut that Microsoft rightly fears. But at least Google (so far) lets the content and media develop based on the open market. They don't exact a mandatory toll as much as take a portion of valued voluntary transactions, and they remain in support of open standards and choice of platforms.

We now may face a choice between a "do no evil" philosophy of seemingly much choice, or an extend-the-monopoly approach that has tended to limit choice. The Microsoft monopoly has already needed to be reined in by global regulators who fear a blind ambition powerhouse, or who fear unmitigated control over major aspects of digital existence. Orwell didn't know how political power would be balanced or controlled in his future vision, perched as he was at the unfortunate mid-20th century.

How the power of the Internet is balanced is what now is at stake with the Microsoft-Yahoo! bid. Who can you trust with such power?

Friday, February 1, 2008

Microsoft's Yahoo bid speaks as much of failure as opportunity

Is Microsoft buying Yahoo! because it has succeeded in its own Windows Everywhere strategy, despite 12 years of lackluster performance on the Web?

Is Microsoft trying to buy Yahoo! because Yahoo! is seemingly at a weak point, unable to dominate in the key areas of search, advertising, and media?

Nope, Microsoft is trying to buy Yahoo! because neither Microsoft nor Yahoo! is succeeding on the Web in the ways that they should. And it's not just Google that has an edge: Consider Apple, eBay, Salesforce.com, Facebook, MySpace, Disney.

And how much sense does putting Microsoft and Yahoo! together make now? Not as much as it did two years ago, when Yahoo! was stronger and Google was weaker. We should also throw in that Apple and Amazon are much stronger now than at any time in the past. The media conglomerates are starting to figure things out.

So once again, we have Microsoft throwing outrageous amounts of money late at what should have been an obvious merger for them a long time ago. I recall a discussion on the Gillmor Gang podcast at least two years ago that wondered when -- not if -- Microsoft would buy Yahoo! Most of those on the call, including me, said it was the only outcome for Yahoo! and the only way for Microsoft to blunt Google.

But that was then, and this is now. So the burning question today is not whether a Microsoft-Yahoo! mashup makes sense -- it has made sense for years. The question is whether it makes sense now, at this outlandish price, and if this in fact marks the point where Microsoft makes a desperate and devastating mistake.

Is the Yahoo! cloud built on Windows? Nope. So the model of Windows Everywhere is junked. Accessing Yahoo! services only requires a browser -- so much for the "software plus services." Will the burgeoning Microsoft cloud and the aging Yahoo! cloud work well together? Will one be able to absorb the other? I say no to both. These will be separate and ill-fitting infrastructures. Will the Redmond and Silicon Valley cultures work well, or will huge layoffs in California portend even more gridlock in the eastern Seattle suburbs?

What might be even worse -- Microsoft may try to require all the Yahoo! users to get better service via their clients. Would they be deluded enough to try to tie Microsoft client-side software to Yahoo! web services? Watch the flood to Google, if they do. Watch for Google to scream about monopoly abuses if they do. [Good thing the new mega mother of all hairballs will be under anti-trust review for a bit longer, eh?]

Does this mean Microsoft was wrong about open source too? Because Yahoo! has built its infrastructure on a lot of open source code, including its cloud infrastructure keystone, Hadoop. So Microsoft will own one of the world's most massive open source distributed datacenters. As an enterprise, should you choose a Windows platform -- or Microsoft's new choice to win on the web -- open source?

Right, so for the need to win in search, media and advertising, Microsoft is now selling its Windows Everywhere soul. They have been handing you an expensive line of proprietary crap for years, and by buying Yahoo! and its totally different approach to Web infrastructure -- they admit it.

What's more, will the world like getting their news from Microsoft? As a user, which search engine will I get when I log in to Yahoo! or MS Live? Which email will I get when I log in? Can the Yahoo! directory merge with the Live, nee Hotmail, directory? Which company will be the one I think of as the "brand"?

This spells a significant period of confusion. And that's for consumers, IT buyers, enterprise CIOs, and advertisers as well.

And for the enterprises that have invested their fates in Microsoft infrastructure, how will they get their Web services? Will it be Yahoo! for the consumers, and Microsoft Live for the business folk? Or vice versa? Both, a mish-mash? Yikes!

What's more, the Microsoft-Yahoo! amalgamation will become the enemy of the media companies worldwide. There was a certain détente between Microsoft alone and Yahoo! alone and the media world. No more. And Google could position itself as the happy medium (pun intended).

This proposed deal smacks of desperation, not multiplication of growth opportunities. But the price premium probably makes it inevitable. The only way to make this work is for Microsoft to spread itself more thickly as a media, advertising, technology, services, platform, tool -- everything to everybody. The risk is to be less and less of anything to anybody.

Microsoft is perhaps perceiving itself as pouncing on Yahoo!, given its current disarray. There's the weird board action, and the layoffs, and the performance issues. But this is weakness buying weakness, with a large period of confusion, dilution of value and brands, and risky alignments of cultures and technology.

And this from Microsoft -- the hitherto conservative acquirer that doesn't go for the big, blow-out acquisitions. Well, this is the big blow-out media merger of the year. Seems that going in the other direction, of splitting Microsoft up into logical sections that can operate and compete on their own, is out. For some time, no doubt.

The biggest risk is that, if this ends up the mess it appears to be, it may just drive more consumers, advertisers, and businesses into the waiting arms of the singularly understood and focused Google, Apple, and IBM. It could well backfire.

And I for one will miss both Yahoo! and Microsoft, because whatever they cobble together from the two won't be able to do what either did separately. It will be hard to define just what it is ... I think I'll call it Amalgamated Digital. It certainly isn't "micro," and it's not "soft." Any yahoo can see that.

Monday, January 28, 2008

WSO2 targets 'Social Enterprise' with combined JavaScript/Web services Mashup Server

WSO2, an open-source SOA provider, has combined JavaScript programming and Web services with the launch of its Mashup Server 1.0.

This open-source offering, which can be downloaded without subscription fees, will allow enterprises to consume, aggregate, and publish information in a variety of forms and from a variety of sources.

At the same time, WSO2, based in Colombo, Sri Lanka, and Mountain View, Calif., has announced the beta release of Mooshup.com, a hosted online version of the Mashup Server, which provides a community site for developing, running, and sharing mashups. [Disclosure: WSO2 has been a sponsor of BriefingsDirect podcasts.]

Each new service in the mashup comes with metadata that is designed to simplify consumption by other mashups and Web services clients, as well as artifacts that simplify construction of user interfaces (UIs) in browsers, rich applications, and other environments. Because it supports the separation of content and presentation, Mashup Server enables recursive mashups, meaning one mashup can be consumed by another. It also broadens the user interface beyond HTML to RSS and Atom feeds, email, and instant messaging.
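Because a hosted mashup can emit its results as RSS or Atom as well as HTML, any feed-aware client can consume it. As a hedged illustration -- the feed URL below is hypothetical, not a documented WSO2 endpoint -- a few lines of Python with the feedparser library are enough to pull one mashup's output into another program:

```python
# Illustrative consumer of a mashup's Atom/RSS output. The URL is a
# placeholder -- substitute whatever feed address your Mashup Server
# instance actually exposes for a given mashup.
import feedparser

feed = feedparser.parse("http://mashups.example.com/services/myMashup?format=atom")

for entry in feed.entries[:5]:
    # Each entry carries one of the mashup's aggregated result items.
    print(entry.title, "->", entry.link)
```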

The use of JavaScript leverages the broad base of developers who already use the language, and mashups can be authored directly within the administrative UI, with a simple text editor, or with any popular integrated development environment (IDE).

The beta version of Mashup Server has already gotten good notices. Ohloh.net estimates that it would have cost an enterprise $571,736 to write this project from scratch, figuring nearly 45,000 lines of code and 10 person-years.

Ganesh Prasad, who blogs at The Wisdom of Ganesh, has a lot of good things to say, based on the beta release:

So is the WSO2 Mashup Server the one that will bring balance to the Force? A powerful programming language. Laughably easy XML manipulation. Simple access to SOAP services and REST resources. Transparent publication of itself as a service or resource in turn. Isn't this the holy grail of service composition?

WSO2 Mashup Server seems to be the industry's best-kept secret for now.

The Mashup Server is built on the WSO2 Web Services Application Server, based on Apache Axis2, and WSO2’s built-in registry. Key features include:

  • The ability to author and deploy mashups using notepad and a Mashup Server virtual directory.
  • Auto-generation of Web service and UI artifacts, such as WSDL, REST URLs, JavaScript stubs.
  • Try-It feature to help developers invoke and debug mashups or start developing their own rich HTML clients.
  • Web 2.0-style console, powered by the WSO2 Registry, which natively supports different users, and allows tags, comments, and ratings and a powerful search capability.

The Mashup Server is available for download. Mooshup.com membership is free, contingent on email verification.

Progress Software adds cross-process visibility with Actional 7.1

Progress Software has beefed up its Actional SOA management offerings with the release today of Progress Actional 7.1, which provides unified visibility into business processes, and connects those business processes to the underlying SOA infrastructure.

Key features of the latest release include an automatic discovery feature that keeps information accurate, allowing users to compare how processes change from day to day. Users can also set thresholds for alerts about behavior and performance, and policy enforcement will automatically adjust when services or processes change.

Progress, Bedford, Mass., added the Actional product line to its SOA arsenal just a little over two years ago with the acquisition of Actional Corporation in a $32-million deal.

Progress said that Actional 7.1 will integrate with Lombardi TeamWorks, and the company plans to provide native support for other business-process management (BPM) solutions, including offerings from Software AG and Fujitsu. Actional also includes a software development kit (SDK) that allows third parties to add support for other BPM and SOA infrastructure products.

The new version also includes support for non-XML payload data, which is designed to allow users to inspect and analyze message content in such existing services as Remote Method Invocation (RMI) and Enterprise JavaBeans (EJB).

Last July, I had a lengthy podcast discussion about Software as a Service (SaaS) with Colleen Smith, managing director of SaaS for Progress. You can listen to the podcast here.

For more information on the latest offering, see the Actional Web site.


Wednesday, January 23, 2008

IBM's AptSoft acquisition opens SOA event processing to line of business personnel

IBM has beefed up its business process management (BPM) offerings in the service oriented architecture (SOA) space with today's announcement that Big Blue is acquiring AptSoft Corp., Burlington, Mass., a provider of business event-processing software. The move also extends event-processing capabilities to line-of-business personnel.

Business event processing identifies event patterns and connections between events, and allows users to establish triggers for action when certain trends appear. As SOA extends the reach of businesses and incorporates data and transactions both from inside and outside the enterprise, identifying both positive and negative trends can aid the company in responding quickly to either opportunities or threats.

IBM already has event-processing offerings in their portfolio, but according to Ed Lynch, business integration portfolio product manager, what AptSoft brings to the table is a set of tools that takes these capabilities out of IT and puts them in the hands of business people. It also moves event processing out of its traditional niche in financial services and enables it across industries and sectors.

Retailers, for example, can use event processing to get proactive alerts about the success or failure of a product as goods move off the shelf, allowing them to make changes to pricing, inventory, and marketing campaigns in real time. Fleet management companies can use it to make instantaneous decisions on how to deal with products that are lost in transit or delayed due to unforeseen circumstances.
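To make the retail scenario concrete, here is a minimal sketch of the kind of rule a business event-processing engine evaluates: watch a stream of point-of-sale events and fire a trigger when a product's sales over a sliding window fall well below plan. This is a generic illustration in Python, not AptSoft's or WebSphere's tooling, and the event fields, thresholds, and SKUs are hypothetical.

```python
# Generic sketch of a business event-processing rule: fire a trigger when a
# product's sales over a sliding time window fall below half of plan.
# Not AptSoft/WebSphere APIs -- fields, SKUs, and thresholds are made up.
from collections import defaultdict, deque
import time

WINDOW_SECS = 3600           # look at the last hour of sales
PLAN = {"SKU-123": 40}       # expected units per hour, per product

recent_sales = defaultdict(deque)    # sku -> timestamps of sale events

def on_sale_event(sku, timestamp=None):
    now = timestamp if timestamp is not None else time.time()
    window = recent_sales[sku]
    window.append(now)
    # Drop events that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECS:
        window.popleft()
    plan = PLAN.get(sku)
    # (Warm-up handling and per-store aggregation omitted for brevity.)
    if plan and len(window) < 0.5 * plan:
        trigger_action(sku, len(window), plan)

def trigger_action(sku, actual, plan):
    # In a real deployment this would kick off a pricing, inventory, or
    # marketing workflow; here it simply reports the condition.
    print(f"Trigger: {sku} selling at {actual}/hr vs. plan of {plan}/hr")
```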

AptSoft is privately held, and neither company released the financial details of the deal. The AptSoft offerings will be wrapped into the WebSphere brand.

Thursday, January 17, 2008

IBM and Kapow on how enterprises exploit application mashups and lightweight data access

Listen to the podcast. Read a full transcript. Sponsor: Kapow Technologies.

The choice among enterprise application development and deployment technologies has never been greater. But what's truly different about today's applications is that line of business people can have a greater impact than ever on how technology supports their productive work.

By exploiting mashups, situational applications, Web 2.0 techniques and lightweight data access, new breeds of Web-based applications and services are being cobbled together fast, cheap, and without undue drain on IT staffs and developers. Tools and online services both are being used to combine external web services like maps and weather with internal data feeds and services to add whole new dimensions of business intelligence and workflow automation, often in a few days, often without waiting in line in order to get IT's attention.

And while many of these mashups happen outside of IT's purview, more IT leaders see these innovative means as a productivity boon that can't be denied, and which may even save them time and resources while improving IT's image in the bargain. The trick is to manage the people and new processes without killing off the innovation.

To help weed through the agony and ecstasy of Enterprise 2.0 application development and deployment in the enterprise, I recently chatted with Rod Smith, Vice President of Internet Emerging Technologies at IBM, and Stefan Andreasen, the Founder and CTO of Kapow Technologies.

Here are some excerpts:
In times of innovation you get some definite chaos coming through, but IT and line of businesses see this as a big opportunity. ... The methodology here is very different from the development methodology we’ve been brought up to do. It’s much more collaborative, if you’re line of business, and it’s much more than a set of specifications.

This current wave is really driven by line of business getting IT in their own hands. They’ve started using it, and that’s created the chaos, but chaos is created because there is a need. The best thing that’s happening now is acknowledging that line-of-business people need to do their own thing. We need to give them the tools, environments and infrastructure so they can do it in a controlled way -- in an acceptable, secured way.

... As we opened up this content [we found] that this isn't just about IT managing or controlling it. It’s really a partnership now. ... The line of business wants to be involved when information is available and published. That’s a very different blending of responsibility than we've seen before on this.

There is a lot of information that's out there, both on the public Web and on the private Web, which is really meant to be human-readable information. You can just think about something as simple as going to the U.S. Geological Survey and looking at fault lines of earthquakes, and there isn't any programmatic API to access this data.

This kind of data might be very important. If I am building a factory in an earthquake area, I don’t want to buy a lot that is right on the top of a fault line. So I can turn this data into a standard API, and then use that as part of my intelligence to find the best property for my new factory.

It’s just not internal information they want. It's external information, and we really are empowering these content developers now. The types of applications that people are putting together are much more like dashboards of information, both internally and externally over the Internet, that businesses use to really drive their business. Before, the access costs were high.

Now the access costs are continuing to drop very low, and people do say, "Let’s go ahead and publish this information, so it can be consumed and remixed by business partners and others,” rather than thinking about just a set of APIs at a low level, like we did in the past with Java.

If you want to have automatic access to data or content, you need to be able to access it in a standard way. What is happening now with Web Oriented Architecture (WOA) is that we're focusing on a few standard formats like RESTful services and on feeds like RSS and Atom.

So first you need to be able to access your data that way. This is exactly what we do. Our customers turn data they work with in an application into these standard APIs and feeds, so they can work with them in an automated way. ... With the explosion of information out there, there's a realization that having the right data at the right time is getting more and more important. There is a huge need for getting access in an automated way.
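The underlying pattern -- lift human-readable content off a page and republish it as structured data behind a standard interface -- can be sketched in a few lines. The example below is a generic illustration in Python (using the requests and BeautifulSoup libraries), not Kapow's product; the page URL and the table structure it assumes are hypothetical, and a commercial tool essentially automates this wrapping and hosts the result as a REST or feed endpoint.

```python
# Generic sketch of "web data into a standard API": scrape a human-readable
# page and republish its rows as JSON. The URL and the HTML table layout it
# assumes are hypothetical -- this is the pattern, not any vendor's product.
import json
import requests
from bs4 import BeautifulSoup

def scrape_rows(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for tr in soup.select("table#data tr")[1:]:          # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) >= 3:
            rows.append({"name": cells[0], "lat": cells[1], "lon": cells[2]})
    return rows

if __name__ == "__main__":
    data = scrape_rows("http://example.com/fault-lines.html")
    print(json.dumps(data, indent=2))    # what a REST client would receive
```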

The more forward-thinking people in IT departments realize that the faster they can put together publishable data content, the sooner they can get a deeper understanding of what their customers want. They can then go back and decide the best way to open up that data. Is it through syndication feeds, XML, or programmatic API?

Before, IT had to guess usage and how many folks might be touching it, and then build it once and make it scalable. ... We've seen a huge flip now. Work is commensurate with some results that come quickly. Now we will see more collaboration coming from IT on information and partnerships.

What is interesting about it is, if you think about what I just described -- where we mashed in some data with AccuWeather -- if that had been an old SOA project of nine or 18 months, that would have been a significant investment for us, and would have been hard to justify. Now, if that takes a couple of weeks and hours to do -- even if it fails or doesn’t hit the right spot -- it was a great tool for learning what the other requirements were, and other things that we try as a business.

That’s what a lot of this Web 2.0 and mashups are about -- new avenues for communication, where you can be engaged and you can look at information and how you can put things together. And it has the right costs associated with it -- inexpensive. If I were going to sum up a lot of Web 2.0 and mashups, the magnitude of drop in “customization cost” is phenomenal.

What’s fun about this, and I think Stefan will agree, is that when I go to a customer, I don’t take PowerPoint charts anymore. I look on their website and I see if they have some syndication feeds or some REST interfaces or something. Then I look around and I see if I can create a mashup of their material with other material that hadn’t been built with before. That’s compelling.

People look and they start to get excited because, as you just said, they see business patterns in that. "If you could do that, could you grab this other information from so-and-so?" It’s almost like a jam session at that point, where people come up with ideas.
Listen to the podcast. Read a full transcript. Sponsor: Kapow Technologies.

Wednesday, January 16, 2008

Sun refuses to give up on software acquisitions, buys MySQL for $1 billion

We knew that Sun has been lusting after a real software business in addition to Solaris. We knew that Sun "shares" -- that it digs open source, including Solaris and Java. And we knew that Sun had a love-hate relationship with Oracle and a hate-hate relationship with IBM and Microsoft.

So toss this all in a big pot, put on simmer and you get a logical -- if not three years too late -- stew: Sun Microsystems intends to buy MySQL AB and its very popular open source database. The announcement comes today with a hefty price tag of $1 billion.

The MySQL purchase by Sun makes more sense than any other acquisition they have done since they botched NetDynamics 10 years ago. This could be what saves Sun.

Sun can make a lot of mischief with this one, by taking some significant oxygen out of its competitors' core database revenues. Sun can package MySQL with its other software (and sell some hardware and storage, to boot), with the effect that the database can drive the sales of operating systems, middleware and perhaps even tools. Used to be the other way around, eh? Fellow blogger Larry Dignan sees synergies, too. And Tony Baer has some good points.

Who could this hurt if Sun executes well? IBM, Oracle, Microsoft, Sybase, Red Hat, Ingres. It could hurt Microsoft and SQL Server the most. Sun could hasten the tipping point for the commercial relational database to go commodity, like Linux did to operating systems like Unix/Solaris. Sun could far better attract developers to a data services fabric efficiency than with its tools-middleware-Solaris stack alone. As we recently saw, with Microsoft buying Fast Search & Transfer, the lifecycle of data and content is where software productivity begins and ends.

Sun will need to do this right, which has its risks given Sun's record with large software acquisitions. And Sun won't get a lot of help ecology-wise, from any large vendors. This puts Sun on a solo track, which it seems to prefer anyway. I wonder if the global SIs other than IBM will grok this?

Yes, it makes a lot of sense, which makes the timing so frustrating. I for one -- and I was surely not alone -- told very high-up folks at Sun to buy and seduce MySQL three years ago. (I also told them to merge with SAP, but that's another blog.) When Sun went and renamed its SunONE stack to the Java what's-it-all, I warned them it would piss off the community. It did. I also told them Oracle was kicking their shins in. It did. I said: "Oracle has Linux, and you have MySQL." Oh, well.

[Now, Oracle has BEA, which pretty much dissolves any common market goals that Oracle and Sun once had as leaders of the anti-Microsoft coalition. The BEA acquisition by Oracle was a given, hastened no doubt to the close by the gathering gloom of a U.S. economic recession.]

I'm glad the Sun-MySQL logic still holds, but Oracle has already done the damage with Linux; we saw how that Unix-to-Linux transition put Sun on its knees, and on the defensive. And we know that Sun has only been able to get one leg up since then, albeit refraining from falling over completely. Now, with BEA, Oracle with its Linux and other open source strengths -- not to mention those business apps -- will seek to choke out the last light from Sun, and focus on IBM on the top end, and Microsoft on the lower end. As Larry Ellison said, there will be room for only a handful of mega-vendors -- and we cannot be assured yet that Sun will meaningfully be one of them (or perhaps instead the next Unisys).

Indeed, the timing may still have some gold lining .... err, silver lining. Sun has had to pay big-time for MySQL (a lot more than if they had taken a large position in the AB two years ago). And what do they get for the cool $1 billion? Installed base, really. Sun says MySQL has millions of global deployments including Facebook, Google, Nokia, Baidu and China Mobile.

There's more, though. The next vendor turf battles are moving up yet another abstraction. Remember the cloud thing? Sun in a sense pioneered the commercialization of utility computing, only to have Amazon come out strong (and add a database service in the cloud late last year). IBM has cloud lust. Google and Microsoft, too. Sun's acquisition of MySQL could also help it become a larger vendor to the other cloud builders, i.e., the telcos, while seeding the Sun cloud to better rain down data services for its own users and developers.

And that raises the question of an Oracle-BEA cloud. Perhaps a partnership with Google on that one, eh? Then we have the ultimate mega-vendor/provider triumvirate: Apple-Google-Oracle. It's what Microsoft would be if it broke itself up properly and got the anti-trust folks off its back (not to mention a reduction in internal dysfunction). And that leaves loose change in the form of Sun, IBM, Amazon, eBay, and the dark horses of the telcos. Sun ought to seduce the telcos, sure, and they know it. Problem is, the telcos don't yet.

Google may end up being the cloud king-maker here, playing Oracle and Sun off of one another. Playing coy with IBM, too. Who will partner with Amazon? Fun times.

Surely if Sun can produce a full-service cloud built on Solaris-Intel-Sparc that includes low-energy-use virtualized runtimes, complementary tools, and an integrated database -- and price it to win -- well, the cloud wars are on. Sun might hang on for yet another day or two.

Tuesday, January 15, 2008

MuleSource takes aim at SOA governance, launches subscription-based ESB

MuleSource, a provider of open-source service-oriented architecture (SOA) infrastructure software, has jumped into the SOA governance pool with the community release today of Mule Galaxy 1.0.

Galaxy, an open-source platform with integrated registry and repository, allows users to store and manage an increasing number of SOA artifacts and can be used in conjunction with the Mule enterprise service bus (ESB) or as a standalone product. It was also designed with federation in mind, being pluggable to other registries.

In other news today, Mule also announced a subscription-only version of its ESB, as well as a beta version of Mule Saturn, an activity monitoring tool for business processes and workflow.

The subscription ESB smacks of "Mule on-demand.com." It will be interesting to see how well this does in terms of uptake. Integration as a service seems to be gaining traction. We're also told this "ESB in the cloud" supports IBM CICS, which is interesting ... are we approaching transactional mashups en masse?

As enterprises use SOA to expand their consumption of services from both inside and outside the business, governance becomes an all-important issue for control. Galaxy provides such registry and repository features as lifecycle, dependency, and artifact management -- along with querying and indexing.

A RESTful HTTP Atom Pub interface facilitates integration with such frameworks as Mule, Apache CXF, and WCF. Galaxy also provides out-of-the-box support for various artifact types, including Mule, WSDLs, and custom artifacts.
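
The Atom Pub angle is worth dwelling on, because it means any plain HTTP client can browse the registry -- no vendor toolkit required. Here's a minimal, hypothetical sketch in Java of what that looks like. The feed URL and the admin:admin credentials are invented for illustration, not taken from the Galaxy docs; the code just does a namespace-aware GET of an Atom collection and lists the entry titles.

  import java.io.InputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.nio.charset.StandardCharsets;
  import java.util.Base64;
  import javax.xml.parsers.DocumentBuilderFactory;
  import org.w3c.dom.Document;
  import org.w3c.dom.Element;
  import org.w3c.dom.NodeList;

  // Hypothetical sketch: list the artifacts in a registry's Atom Pub collection.
  // The URL and credentials below are placeholders for illustration only.
  public class AtomPubRegistryQuery {
      public static void main(String[] args) throws Exception {
          URL feedUrl = new URL("http://localhost:8080/registry/api/artifacts");
          HttpURLConnection conn = (HttpURLConnection) feedUrl.openConnection();
          conn.setRequestMethod("GET");
          conn.setRequestProperty("Accept", "application/atom+xml");
          String auth = Base64.getEncoder()
                  .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
          conn.setRequestProperty("Authorization", "Basic " + auth);

          DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
          dbf.setNamespaceAware(true);   // Atom elements live in their own namespace
          try (InputStream in = conn.getInputStream()) {
              Document doc = dbf.newDocumentBuilder().parse(in);
              // Each Atom <entry> represents one registry artifact (a WSDL, a config, etc.)
              NodeList entries = doc.getElementsByTagNameNS(
                      "http://www.w3.org/2005/Atom", "entry");
              for (int i = 0; i < entries.getLength(); i++) {
                  Element entry = (Element) entries.item(i);
                  String title = entry.getElementsByTagNameNS(
                          "http://www.w3.org/2005/Atom", "title")
                          .item(0).getTextContent();
                  System.out.println("Artifact: " + title);
              }
          }
      }
  }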

Galaxy can be downloaded now, and a fully tested enterprise edition will be available in Q2 for Mule Enterprise subscribers.

On the ESB front, Mule has taken aim at the Fortune 2000 customer base with the introduction of Mule 1.5 Enterprise Edition, a subscription-only commercial enterprise packaging of the Mule ESB integration platform. Prior to this announcement, the ESB had been available only in the community edition.

It's sort of funny: as commercial providers offer open source versions of their products, we also see open source providers serving up commercial versions. I guess that means everyone needs one of each? Perhaps the versions (a la Fedora and RHEL) are becoming alike, in that it takes a subscription of some sort to get the real goods and use them.

Take the traffic when you can, I've always said. Mule's popularity was in evidence in November, when the company announced that community downloads had surpassed one million.

The new enterprise offering is available for a single annual fee and encompasses new features, including:

  • Support for Apache CXF Web Services Framework
  • Patch management and provisioning via MuleHQ
  • Streaming of large data objects through Mule without being read into memory (see the general sketch after this list)
  • Nested routers to decouple service implementations from service interfaces
  • Support for multiple models
  • Diagnostic feedback for customer support
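
That streaming item is worth a pause, because it is the difference between an ESB that chokes on a multi-gigabyte payload and one that shrugs it off. The general technique -- and this is a generic sketch of my own, not Mule's API -- is to relay the payload in fixed-size chunks rather than materialize the whole object in memory:

  import java.io.IOException;
  import java.io.InputStream;
  import java.io.OutputStream;

  // Generic illustration of pass-through streaming (not Mule's own API): the payload
  // moves from source to destination in small, fixed-size chunks, so memory use stays
  // flat no matter how large the object is.
  public class StreamingRelay {
      public static long relay(InputStream source, OutputStream destination)
              throws IOException {
          byte[] buffer = new byte[8192];   // ~8 KB chunk
          long total = 0;
          int read;
          while ((read = source.read(buffer)) != -1) {
              destination.write(buffer, 0, read);
              total += read;
          }
          destination.flush();
          return total;   // bytes relayed, handy for logging or auditing
      }
  }

The appeal for an integration platform is that only the components which genuinely need the whole payload ever have to hold it.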

More information is available from the MuleSource site.

For users looking for a business-activity monitoring tool, MuleSource has released a beta version of Mule Saturn 1.0, which is designed to complement an SOA infrastructure by providing detailed logging and reporting on every transaction that flows through the Mule ESB.

Saturn allows staff to drill down on transaction details and set message-level breakpoints for deep log analytics, allowing for continuous improvement. Key features include:

  • Business user view into workflow and state
  • Process visualization
  • Search by transaction, date, and various IDs
  • Reporting on service-level agreements
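
The SLA reporting item boils down to simple arithmetic over the transaction records the tool collects. Here's a toy sketch of that idea -- my own illustration, not Saturn's API; the class and record names are made up -- computing what share of transactions came in under an agreed threshold:

  import java.time.Duration;
  import java.util.List;

  // Toy illustration of the arithmetic behind SLA reporting (not Saturn's API):
  // given each transaction's observed duration, report the share that met the threshold.
  public class SlaReport {

      public record Transaction(String id, Duration elapsed) { }

      public static double percentWithinSla(List<Transaction> transactions,
                                            Duration threshold) {
          if (transactions.isEmpty()) {
              return 100.0;   // nothing observed, nothing violated
          }
          long met = transactions.stream()
                  .filter(t -> t.elapsed().compareTo(threshold) <= 0)
                  .count();
          return 100.0 * met / transactions.size();
      }
  }

Saturn's value, presumably, is doing this continuously and at message-level granularity rather than in a batch job after the fact.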

Saturn is available immediately to MuleSource subscribers.

Monday, January 14, 2008

WSO2 Web services framework builds bridge between Ruby and enterprise apps

WSO2 has built a bridge between Ruby-based applications and enterprise-class Web services with the introduction of its Web Services Framework for Ruby (WSF/Ruby) 1.0.

WSF/Ruby, an open-source framework for providing and consuming Web services in the Ruby object-oriented programming language, offers support for the WS-* stack, allowing developers to combine Ruby with the security and messaging capabilities required for enterprise SOAP-based Web services. Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.

WSO2 Chairman/CEO Sanjiva Weerawarana explained the bridging capabilities in a pre-release interview with Infoworld:

While Ruby has been popular in the Web 2.0 realm, sometimes it needs to talk to legacy architectures, he said. With the new framework, developers could build a Web application using Ruby and then hook into enterprise infrastructures, such as JMS (Java Message Service) queues. For example, a Web site might be built with Ruby that then needs to link to an order fulfillment system based on an IBM mainframe or minicomputer, Weerawarana said.

With WSF/Ruby, developers can also consume Web services with Representational State Transfer (REST). WSF/Ruby also provides a fully open-source Ruby extension based on Apache Axis2/C, Apache Sandesha2/C, and Apache Rampart/C.

WSF/Ruby features both client and service APIs. The client uses the WSClient class for one-way and two-way service invocation support. The service API for providing Web services uses the WSService class, with support for one-way and two-way operations. Both APIs incorporate the WSMessage class to handle message-level options.

WSF/Ruby 1.0 supports basic Web services standards, including SOAP 1.1 and SOAP 1.2. It also provides interoperability with Microsoft .NET, the Apache Axis2/Java-based WSO2 Web Services Application Server (WSAS), and other J2EE implementations. Key features of WSF/Ruby 1.0 are:

  • Comprehensive support for the WS-* stack, including the SOAP Message Transmission Optimization Mechanism (MTOM), WS-Addressing, WS-Security, WS-SecurityPolicy, and WS-Reliable Messaging.
  • Secure Web services with advanced WS-Security features, such as encryption and signing of SOAP messages. Users also can send messages with UsernameToken and TimeStamp support.
  • Reliable messaging for Web services and clients.
  • REST support, so a single service can be exposed both as a SOAP-style and as a REST-style service. The client API also supports invoking REST services using HTTP GET and POST methods.
  • Class mapping for services, enabling a user to provide a class and expose the class operations as service operations.
  • Attachments with Web services and clients that allow users to send and receive attachments with SOAP messages in optimized formats and non-optimized formats with MTOM support.

According to WSO2, WSF/Ruby has been tested on Windows XP with Microsoft Visual C++ version 8.0, as well as on Linux with GCC 4.1.1.
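
To make the interoperability claim concrete: because the framework speaks standard SOAP over HTTP, a consumer on any stack can call a WSF/Ruby-hosted service with nothing exotic. Here's a minimal, hypothetical Java sketch -- the endpoint URL, the orders namespace, and the getOrderStatus operation are all invented for illustration -- that posts a SOAP 1.1 envelope and prints the response:

  import java.io.InputStream;
  import java.io.OutputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.nio.charset.StandardCharsets;

  // Hypothetical Java consumer calling a SOAP 1.1 service over HTTP -- the sort of
  // endpoint a WSF/Ruby-hosted service would expose. Endpoint, namespace, and
  // operation name are placeholders; error/fault handling is omitted for brevity.
  public class SoapInteropClient {
      public static void main(String[] args) throws Exception {
          String envelope =
              "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soapenv:Body>"
            + "<getOrderStatus xmlns=\"http://example.com/orders\">"
            + "<orderId>12345</orderId>"
            + "</getOrderStatus>"
            + "</soapenv:Body>"
            + "</soapenv:Envelope>";

          URL endpoint = new URL("http://localhost:9090/services/orderService");
          HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
          conn.setRequestMethod("POST");
          conn.setDoOutput(true);
          conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
          conn.setRequestProperty("SOAPAction", "\"urn:getOrderStatus\"");

          try (OutputStream out = conn.getOutputStream()) {
              out.write(envelope.getBytes(StandardCharsets.UTF_8));
          }
          try (InputStream in = conn.getInputStream()) {
              System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
          }
      }
  }

The same envelope, posted from .NET or a J2EE container, would look no different on the wire -- which is really the whole point of the WS-* interop story.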

LogMeIn files for IPO, sets up the market for cloud-as-PC-support continuum

I see that remote PC services start-up LogMeIn is going to conduct an IPO on Nasdaq in the not too distant future, pointing up the vibrancy of the intersection of cloud computing and the personal computer.

And the encouraging growth that LogMeIn has enjoyed shows that the cloud, remote maintenance and the long-term health of the PC are all quite mutually compatible, thank you. Microsoft has it right when it chimes about "software and services"; there will long be a need for PCs, and for the cloud services they will increasingly rely on.

So congrats to LogMeIn, they are a great bunch of folks. Disclosure: LogMeIn has been a sponsor of BriefingsDirect podcasts. I am sure glad I had that chat about the Web as operating system way back when with Mike and Joe.

This intention to file seems only the beginning of LogMeIn's next phase. According to the filing, LogMeIn plans to raise up to $86 million from the IPO, though that could change. It may not be that large a sum, but it shows how Internet firms don't require the capital they once did to grow substantially. And there's always the possibility of LogMeIn making acquisitions to fill out its services and support portfolio.

Nice thing about the LogMeIn services is that they straddle the consumer, SOHO, SMB and enterprise markets. The services can cut across them all -- adding value while cutting costs on the old way of doing things. Nice recipe these days. More telcos and service providers will need such abilities too.

As I've said, I expect to see more telcos buying software and services vendors in 2008 to expand their offerings beyond the bit-pipe and entertainment content stuff. If you can serve it up on subscription, well then do it broadly and monetize the infrastructure as many ways as possible.

Tuesday, January 8, 2008

IBM remains way out in front on information access despite Microsoft's Fast bid

Ever notice that Microsoft -- with cash to burn apparently -- waits for the obvious to become inevitable and then ends up paying huge premiums for companies in order to catch up to reality? We saw it with aQuantive, Softricity and Groove Networks.

It's happened again with today's $1.2 billion bid by Microsoft for Norway's Fast Search and Transfer. Hasn't it been obvious for more than three years (at the least) that enterprise information management is an essential task for just about any large company?

That's why IBM has been buying up companies left and right, from Ascential to FileNet to Watchfire to DataMirror to Cognos. Oracle has been on a similar acquisitions track. Google has even produced search appliances (hardware!) to get a toehold in the on-premises search market, and Google and Yahoo! have both been known to make search-related acquisitions. EMC even got it with Documentum.

Ya, that's what I'd call obvious. What's more, data warehousing, SAN, data marts and business intelligence (BI) have emerged as among the few consistent double-digit growth areas for IT spending the last few years.

So now some committee inside of Microsoft took a few months to stop fighting about whether SQL Server, SharePoint and Office 200X were enough to get the job done for the Fortune 500's information needs. I guess all that Microsoft R&D wasn't enough to apply to such an inevitable market need either. What do those world-class scientists do at Microsoft? Make Bill Gates videos?

And so now Microsoft smartens up to internal content chaos (partly the result of all those MS Office files scattered hither and yon), sees the market for what it is rather than what it would like it to be, and pays a double-digit multiple on revenues for Fast. Whoops, should have seen that coming. Oh, well, here's a billion.

It's almost as if Microsoft thinks its competitors and customers are stupid for not just using the Windows Everywhere approach when needs arise in the modern distributed enterprise. It's almost as if Microsoft waits for the market to spoil its all-inclusive fun (again), and then concedes late that Windows Everywhere alone probably won't get the job done (again). So the MBAs reach into the Redmond deep pockets and face reality, reluctantly and expensively.

Don't get me wrong, I think highly of Fast, know a few people there (congrats, folks), and was a blogger for Fast last year. I even did a sponsored podcast with Fast's CEO and CTO. That's a disclosure, FYI.

And I'm a big fan of data, content, information, digital assets, fortune cookies -- all of it being accessible, tagged, indexed and made useful in context to business processes. Metadata management gives me goosebumps. The more content that gets cleaned, categorized and easily found, the better. I lean toward the schema. I'm also quite sure that this information management task is a prerequisite for general and successful implementations of service oriented architectures and search oriented architectures.

And I'm not alone. IBM has been building a formidable information management arsenal, applying it widely within its global accounts and as a new value-add to its many other software and infrastructure offerings. The metadata approach also requires hardware and storage, not to mention professional services. IBM knows that getting your information act together leads to SOA (both kinds) efficiencies and advantages. And -- looking outward -- as Big Blue ramps up its Blue Cloud initiatives, content access and management across domains and organizational boundaries takes on a whole new depth and imperative.

And now we can be sure that Microsoft thinks so too. Finally. My question is with all that money, and no qualms about spending lavishly for companies, why doesn't Microsoft do more acquisitions proactively instead of reactively?

Both Microsoft's investors and customers might appreciate it. The reason probably has to do with how Microsoft manages itself. Perhaps it ought to do more internal searches for the obvious.

Thursday, January 3, 2008

Genuitec's Pulse service provides automated updates across Eclipse, Android, ColdFusion

MyEclipse IDE vendor Genuitec is stepping up to the general developer downloads plate to take a swing at the task of automated and managed updates, plug-ins and patches for such widespread tools as Eclipse, Android, and ColdFusion.

The free Pulse service helps bring a "single throat to choke" benefit to downloads but without the need to remain dependent on a single commercial vendor (or track all the bits yourself) amid diverse open source or ecology offerings. Fellow independent IT analyst Tony Baer has a piece on Pulse. The service is in beta, with version 1.0 due in early 2008.

Google's Android SDK -- a software stack focused on mobile devices -- and Android Development Tools (ADT) will come preconfigured to run with one click in Pulse's “Popular” profile area, Genuitec announced in December. That shows how quickly new offerings can be added to the Pulse software catalog service. The Pulse refresh includes support for developers on Mac, Linux and Windows.

Pulse requires that an agent be downloaded to an Eclipse Rich Client Platform (RCP) environment.

The service puts Genuitec squarely in the "value as a service" provider role for many types of developers. As we know, developers rely on communities as focal points for knowledge, news, updates, shared experience, code, and other online services.

As we've seen in many cases, a strong community following and sense of shared value among developers often bodes well for related commercial and FOSS products alike. Genuitec is obviously interested in wider use of MyEclipse, and is therefore providing community innovation as a channel.

I also expect that Genuitec will move aggressively into "development and deployment as a service" offerings in 2008. There's no reason why the Pulse set of services could not evolve into a general platform for myriad developer resources and, increasingly, tools/IDEs as a service. Indeed, Genuitec is finding wider acceptance among developers of the concepts and benefits of developing and deploying in the cloud. Disclosure: Genuitec is a sponsor of BriefingsDirect podcasts.

So while Amazon offers developers runtime, storage, and databases as services -- on a pay-as-you-use basis that scales with demand -- the whole question of tools is very interesting. The whole notion of free or very inexpensive means of development and deployment will prove a major trend in 2008, I predict.

Now there are virtually no barriers to developer innovation and entrepreneurial zeal moving from the whiteboard to global exposure and potential use. And that can only be good for users, enterprises, ISVs, and the creativity that unfettered competition often unleashes.