Friday, August 31, 2007

IBM's Cell processor looks like a candidate to power numerous infrastructure appliances

IBM's announcement this week of an upgraded version of its Cell processor-based blade server has me wondering whether this versatile and powerful system-on-a-chip might find its way into appliances, too.

We've heard some hints and "that's a logical outcome" statements from IBM officials in the past few months, so the pairing of Cell and appliances would not come as a complete surprise. When the details emerge, the price-performance and ease of deployment benefits of these high-powered, multi-core appliances could be very impressive.

When IBM announced plans to buy Telelogic, the deal made more sense to me once specialized Cell-fired appliances were made part of a possible future portfolio. When we had Jim Ricotta, IBM's general manager of appliances, on a recent BriefingsDirect SOA Insights Edition roundtable podcast, he indicated more specialized appliances are to come from IBM, though he did not finger Cell specifically.

Appliances from IBM should be expected in more componentized infrastructure roles in the coming months, for sure. They make a great deal of sense for data and content optimization and balancing, for SOA-support functions such as ESB, registry/repository, and as discrete services support stacks in a box (a business service appliance).

Those specifying services or functions will not need to consider the underlying platforms or inherent low-level integration issues, just focus on the largely standards-based interoperability characteristics for these functional units. Appliances allow greater exploitation of open source efficiencies by the vendor, with less complexity for the end user, and a better margin for the seller (more than just service and support).

Indeed, we may see some sort of a face-off in terms of total cost and performance between virtualization approaches and appliances approaches. Why not use both? I expect that appliances may very well be filling a larger list of new requirements for enterprise architects over the next two years.

The multi-core attributes of Cell, plus the proprietary 'Synergistic Processing Elements' (SPEs) of the chip, provide the means to exploit parallelism and finely tune each box for the specific functionality at hand. The fact that these specialized and closed functional components (hardware, software, integration, optimization) require much less set-up and lifetime support appeals to architects (if not integrators). They may help with energy use and heat-dispersion issues as well.

The ability to scale by virtue of adding (or subtracting) boxes, plus the ease of swapping and redundancy -- all bode well for more appliances-driven architecture (ADA [... sorry]) for SOA and high-performance yet specialized computing at the best total cost. These attributes will be of interest to hosters, service providers and telecommunications providers.

IBM's Ricotta told our analysts that appliances can cut costs by half, compared to traditional deployment approaches. When you take such economic common sense and toss in the technological secret sauce of optimized and specialized Cell chip-sets ... the balance of Power could well shift toward appliances in the most competitive datacenters.

Wednesday, August 29, 2007

SaaS now ready to succeed where ASPs failed -- especially for smaller businesses

Listen to the podcast. Read a full transcript.

There is an entire universe of suppliers and vendors that support the delivery of applications as on-demand services. Indeed, the Software as a Service (SaaS) model is attracting more than the end users who acquire their IT via per-user, per-month service subscriptions. Also attracted to the SaaS market are those vendors creating the means to produce and deliver such services well and efficiently.

That's because the timing is now right for small businesses and ISVs to reach each other through SaaS, with the Web as a platform, and with compelling economics. We're also seeing more Service-Oriented Architecture (SOA) support vendors focus their sales on SaaS providers and hosts, with the understanding that SOA may well emerge in the SaaS universe first, and then extend to enterprises more generally.

To help understand the SaaS market, what SaaS providers want and what those seeking to support those providers can deliver, I recently spoke with Colleen Smith, managing director of Software as a Service for Progress Software.

The resulting podcast offers some great insights and better appreciation of the swelling ecology of vendors and providers devoted to SaaS delivery.

Here are some excerpts:
Progress Software had started to look at the application service provider (ASP) model back in the early 2000-2001 time frame to figure out whether there was an opportunity for some of the small ISVs who were using the Progress technology to become more of an application service provider. ... I was basically asked to figure out how to build more of a SaaS partner program and look at ways in which we could work with our partners.

[We looked at] the technology enablement and how to build applications to go to market with SaaS. We also added a couple of other things, because we felt that one of the biggest challenges traditional software vendors had was around the business model, the go-to-market strategy, sales enablement, and figuring out ways in which we could actually help them to be more successful in this new business model. We were thinking of it more as a business model and not just as a technology.

Sure, there are the technical components of multi-tenancy, Web-based access, and being able to drive policy configuration and personalization. ... On the software side of it, there is much more of a focus on business-process automation, and the people who are building, deploying, and running those applications have a good, solid knowledge of the business itself. The second thing is that the applications are now architected specifically to run for multiple customers, and it’s not a separate implementation for each customer.

The economy of scale is what killed a lot of hosting providers back in the ASP days and ran them out of business. They were just doing an implementation for every customer, as opposed to a single implementation that can now be used by multiple customers -- personalized and managed. The people who use the application run and use it differently, but the implementation is pretty much the same for all customers.
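The single-implementation, multi-tenant model Smith describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual architecture: one code path serves every customer, and per-tenant differences live in configuration data rather than in separate deployments.

```python
# A minimal sketch of multi-tenancy: one implementation, many customers.
# All names here (MultiTenantApp, render_invoice) are invented for illustration.

class MultiTenantApp:
    def __init__(self):
        # Per-tenant personalization is kept as data, not as forked code
        # or a separate installation per customer.
        self.tenant_settings = {}

    def configure(self, tenant_id, **settings):
        """Record a tenant's personalization choices."""
        self.tenant_settings.setdefault(tenant_id, {}).update(settings)

    def render_invoice(self, tenant_id, amount):
        """The same code path runs for every tenant; only config differs."""
        cfg = self.tenant_settings.get(tenant_id, {})
        currency = cfg.get("currency", "USD")
        label = cfg.get("label", "Invoice")
        return f"{label}: {amount} {currency}"

app = MultiTenantApp()
app.configure("acme", currency="EUR", label="Rechnung")
print(app.render_invoice("acme", 100))   # personalized for this tenant
print(app.render_invoice("other", 100))  # default behavior, same code
```

The contrast with the ASP-era model is the point: adding a customer here is one `configure` call, not a fresh implementation.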

More importantly, we work a lot with our partners, these ISVs, to make sure they realize that this requires different marketing. It requires a different sales and business model, because clearly there are financial implications in terms of cash flow. There are also a lot of things they need to think about in terms of who is the target market.

We've helped them focus on looking at new markets and going down-market. Our partners have always focused very much on the mid-market, but SaaS has enabled them to target some very niche verticals and go down into the "S" of SMB (small and medium business).

I think the timing is right. There are a bunch of reasons why. Number one, the Web is finally viewed as a business platform. Seven or 10 years ago, the Web wasn't viewed as the way in which business applications were going to be run and managed. ... [Before, SMBs] couldn’t afford the dedicated IT staff to manage and maintain the applications. They didn’t necessarily have the infrastructure and the technology to run these business applications. A lot of business applications are much too complex and require too much manpower to manage and maintain.

ISVs [now] realize there’s a whole new market. There’s that long tail, if you will, of the software market that allows them to be able to go after new people. In the past, software just wasn’t accessible to them, and now there’s a whole new market opportunity.

We stress to our ISVs, "You can continue to be in the traditional software business for your core market and the market that you’ve been going after, but there’s a whole new opportunity for you to look at new markets, whether they be the low-end of your current market, adjacent markets, or even new geographic territories."

Throughout South America, Africa, and Asia-Pacific, what we’re finding is tremendous growth opportunity for ISVs to look at these as new markets and to go into those new markets with a new business model. That new business model is SaaS.

On the supply side of how these ISVs can deliver, there’s a new support ecology available to them. They don’t have to create their own data centers themselves. They can find partners. We’ve heard a lot about Amazon, for example, and there are others, of course. These ISVs can focus on what they do well, which is their software, their logic, and then also take advantage of some hosting.

Back in the ASP days, it was all about hosting. I’m not saying that in the SaaS world hosting isn’t important, because it absolutely is. What has changed over the last 7 to 10 years is that now you look at it more in terms of an ecosystem.

You’ve got your infrastructure providers, your application providers, and your hosting and managed-service providers. The biggest change that I have seen now is that each realizes they have a role to play, they have a core expertise, and that through building of this ecosystem and through partnerships you can be much more successful in being able to lower your deployment cost, but still being able to target and go after these new markets.

The SaaS market, in general, is really still in its nascence, and there are a lot of things that have yet to happen. But, the good news is this isn’t just a fad. We see a fundamental change in terms of the business model. ... The only way that the end customer is going to win in this is if we get into a business model where there is that shared risk and shared reward, but the customer pays for only what they need to use.

It's going to come down to pricing models. It still has to come down to some building of ecosystems out there, where everybody knows their role and plays that role, but doesn’t necessarily try to do the other person’s role. There are still a lot of things happening.

I believe it’s going to be vertically focused. I don’t think this is going to be a horizontal play. We’ve seen a lot of success in vertical business expertise. There's going to be content, business applications, data, and services. If all of those can be offered in a single environment through a single service provider, the customer will end up winning.
Listen to the podcast. Read a full transcript. Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production.

Ruling expressly denies Express Logic its copyrighted API logic

Express Logic cried foul in June 2006 when Green Hills Software appeared to have a competitor in its embedded microkernel RTOS micro-velOSity product that looked a little too much like what Express Logic had already been delivering to the market (and partnering with Green Hills on).

In seeking a remedy, Express Logic called for an injunction on the market delivery of Green Hills' micro-velOSity (which was denied), and also sought arbitration over its position that Green Hills copied the ThreadX API C source code contained in Express Logic’s tx_api.h header file. Express Logic said that Green Hills had tread on its copyrights when Green Hills created micro-velOSity as an alternative to Express Logic's ThreadX.

Well, now the arbitration panel has sided largely against Express Logic. Green Hills feels vindicated. Express Logic begs to differ, if not with the case's outcome, then with the hardships facing the industry.

“We’re shocked that copying of source code and using it to compete with our copyrighted work was not found to be infringement,” said William E. Lamie, author of ThreadX and president of Express Logic, in a release. “We believe that the basis upon which the arbitrators determined that this copying was not infringing would put all software code at risk of being copied without infringement. After all, what software is not made up of ‘words and short phrases?’ As for the ‘functional requirements for compatibility,’ why should anyone be able to copy source code under the guise of compatibility but not use it for that purpose? This ruling seems illogical to us, and would put all software at risk if this reasoning were to be applied in other cases.”

From Green Hills: “We are vindicated by this judgment,” said Dan O’Dowd, CEO of Green Hills Software, in a release. “Express Logic’s campaign to instill fear, uncertainty, and doubt about micro-velOSity has failed. We regret any inconvenience this litigation has caused our customers. This final arbitration award ensures that embedded developers can continue to use micro-velOSity with confidence.”

The issues around APIs, compatibility, and when source code can and cannot be copied are still a fuzz ball. The panel that ruled on this case does not set legal precedent, and a similar lawsuit may yet be filed some day (although not in this instance).

Indeed, spats between software companies are nothing new, but the concepts around copyright and code (and even patents!) remain treacherous for many companies. You stay in this business for more than a few months and you'll hear of weird lawsuits and claims around patents, copyrights, and licenses. It's a viper's pit out there, for many.

Unfortunately, that may not change much any time soon. Even largely sensible revisions to the patents process that could clarify the role patents play in software are bogged down in bureaucracy and, yes, Virginia ... politics.

Express Logic may not be able to do more than complain in sales meetings that Green Hills has violated its intellectual property. Express Logic also claimed that Green Hills engages in unfair business practices ... well, that too is now more for the court of public opinion to decide. "Unfair" is a tough term to qualify in the world of software. Has anyone claimed that Silicon Valley or Route 128 are bastions of fairness and truth? Not since Rogers' Rangers tangled with the Abenakis.

Did Express Logic bend, like a pretzel, the concept of APIs in seeking a legal remedy for its alleged victimization? Perhaps. Are the distinctions in code sharing for compatibility testing and for intellectual property protection murky? Probably. Does Green Hills care about its partners much when its own interests are involved? Probably not.

So there are some lessons in here. APIs are not a good way to assert intellectual property claims. Another is be careful who you partner with when murky software definitions are involved.

Saturday, August 18, 2007

Lotus Notes 8 brings unified collaboration to mashupable clients

IBM made two announcements Friday that should help smooth the way for bringing unified Web 2.0-style applications to the Notes/Domino-installed enterprise.

The new Lotus Notes 8 and Lotus Domino 8 releases merge collaboration, communication and productivity features into a single desktop environment, giving users integrated access to such things as RSS feeds and search, along with email, instant messaging, presence, word processing, spreadsheets, and presentation software.

At the same time, IBM announced Expeditor 6.1.1, an Eclipse-based mashups tool that forms the underpinnings of Notes/Domino 8 and thereby allows mashups via managed clients to reach desktops, laptops, and mobile devices.

Regular readers of BriefingsDirect know that search is an emerging enterprise strategy of growing importance and that RSS feeds will provide a powerful tool for distributing and managing content and data. Combining them onto a single desktop with other Web 2.0 technologies and productivity applications is certainly a step in the best-of-all-worlds direction.

Incidentally, the means of bringing mashups to the enterprise is evolving along varying lines. IBM likes managed clients, no surprise there. But Serena Software will soon be providing an on-demand platform for mashups that's also intended for enterprise use.

It's no secret that the Lotus Notes UI has been, shall we say ... cumbersome over the years (since Notes 5?). IBM hopes to have cleared that hurdle with a new interface, featuring a sidebar that summarizes all the user's tools in one place, including the RSS feeds.

In fact, it's the new interface that's gotten the most positive feedback from customer tests, according to Ed Brill, Business Unit Executive, Worldwide Lotus Notes/Domino Sales, IBM Software Group (nice title, Ed; go for brevity, I always say). The new release has been in development for more than two years.

The addition of productivity tools, according to Brill, comes from the observation that the principal reason many users in the past have left the Notes application was to use a spreadsheet or word processor (since, like ... 1989). With the new release, users will be able to do that without leaving Notes.

Imagine, just imagine, if SmartSuite had been natively integrated into Notes in, say, 1994. Things might have panned out a little differently. Oh, well.

One surprising outcome of the customer testing, Brill says, is the level of interest in customers wanting a Notes client for Linux. Nearly 20 percent of downloads during testing have been for Linux. Hint, hint! [How about a full virtualized desktop service based on Linux/Domino with mashups galore! Maybe some appliances along those lines. Works for me.]

Built on Eclipse, Lotus Expeditor 6.1.1 is designed to allow integrated mashups independent of the client technology. Among the key features of Expeditor are:

  • A server-managed composite platform to integrate and aggregate applications and information.
  • Integration with real-time collaboration.
  • Integration with IBM WebSphere Portlet Factory and IBM WebSphere Portal Express.
  • End-to-end government-grade mobile security.
  • The ability to transform Microsoft Visual Basic applications.
There will be those Web 2.0 purists who will smirk at the way IBM is bringing these functions to the market. But consider that enterprises do more integrated collaboration via Notes/Domino than just about any other system. And, importantly, it's a lot easier to bring Web 2.0 functionality into an existing enterprise IT icon than for a barely surviving greenfield start-up to bring that functionality into the enterprise on its own.

Friday, August 17, 2007

Serena fills cracks between SOA, ALM and SaaS with process-centric mashup-as-a-service platform

ALM vendor Serena Software is taking a walk on the wild services side in September when it announces the beta release of a platform for mashing up business process applications.

Unlike on-demand tools for content- and data-centric mashups, Serena with "Project Vail" has its sights on the visual tools-driven business-centric process and logic functions that IT departments just can't ever get to. We're talking HR, CRM, and supply chain applications -- the nuts and bolts of large enterprise logical functions. Because who said mashups were just for kids?

The notion is that business users have needs for small and often one-off web applications or widgets that support a small or modest process. They could go to their IT departments and ask for it, and they probably will be told "no," or that it will take months, and at staggering cost. IT is putting out fires, not adding small incremental efficiencies to department-level business stuff, right?

But like the long tail for media, there's a long tail for applications functionality, too. The trick is how to allow non-developers to mashup business services and processes, but also make such activities ultimately okay with IT. Can there be a rogue services development and deployment ecology inside enterprises that IT can live with? How can we ignite "innovation without permission" but not burn the house down?

Serena believes it can define and maintain such balances, and offer business process mashups via purely visual tools, either on-premises or in the cloud. When it unveils Project Vail on Sept. 10, Serena expects to produce a compelling and free mashup tool that MS Office power users can quickly relate to. An Excel developer should actually be able to produce business-centric mashups (no pivot tables required), or so they say.

By making the tool free and virally distributed, Serena expects to seed the need for the on-premises servers it will license, or to become the in-cloud host to the mashups. Serena will provide an all-online platform-as-a-service approach that it will charge for on a subscription basis, probably per-user, per-month, and perhaps also mashup-by-mashup. Imagine app dev from your HR department's petty cash account!

Now, we're already seeing a lot of data mashups. And we've seen development as a service. But the notion of business processes as a service has some interesting implications. And with a configuration management and ALM company -- which knows enterprise IT, as well as development efficiencies and applications/services lifecycles (not to mention how to leverage open source) -- well ... Serena might just be able to pull this off as well as anyone.

They will need to bridge the challenging uncharted waters between users, developers, IT as well as among and between SaaS, SOA, and packaged business applications. No easy task. But new Serena CEO Jeremy Burton has always struck me as a thought leader, never one to let his environment limit his gait.

If Serena can carve out leadership on this, there's a huge opportunity. IT departments -- as long as they are not held responsible for that over which they have little authority -- might actually like the notion of avoiding small custom applications chores. Having more on-demand or SaaS services might also grease the skids toward more use (and reuse) of SOA virtues. Allowing creative flexibility for line-of-business personnel to experiment with new approaches without involving developers could bear powerful fruit.

For example, if the mashups are solid and productive, well just use them as blueprints and requirements for more "sanctioned" and official application projects. These mashups could actually become incubators and laboratories for the rest of the developers to pick and choose from. Cool.

It will take a progressive CIO to see the value in this. This will have to be done with low risk to the enterprise. But being able to associate IT ALM with on-demand ALM will go a long way to making the Wild West of mashups seem more like the Midwest of IT creativity. Can be done.

Serena is leveraging its TeamTrack products to build out its new mashup tools, platform, and workflows. The way to create mashups for dummies (sorry) is to exploit workflow and process tools. Serena is working with OpSource for the hosting and multi-tenancy infrastructure.

If all of this works, the Serena platform could also evolve into a business mashup exchange, a solutions ecology for marketing services that could play very well with SOAs across small businesses and up to the largest enterprises and governments. There may even be federation between such an exchange and SOA registries and repositories. Shop till you drop.

The notion of business mashups as a service could become a sleeping tiger. Microsoft should like this a lot, even if they are only experimenting with consumery stuff now like Popfly. I actually think this also aligns with IBM's SOA strategy of vertically focused services and reuse. No reason why a QEDWiki mentality could not grow some Enterprise 2.0 wings.

What's more, Google and Amazon may have good reasons to get into the custom business services mashup arena. If there's a will, there's a way. Build, buy, or partner?

Hey, if Facebook can become the virtual home for scads of widgets and applications for the Web 2.0 crowd, who is going to achieve critical mass for such an environment for business services? Could be Serena.

They just have to do better than "Project Vail" for a name. General availability of the platform will come later, but expect to see Serena's free tool on a creative business user's desktop near you this fall.

RSS feeds begin to bleed into enterprise applications

You've probably just gotten used to the idea of "mashups" for quickly bringing web services into applications and portals. Well, now get ready for making novel and powerful use of content via RSS feeds in a similar way.

I don't call them mashups, though; I call them "Feed Bleeds." That's because syndicated feeds can be easily bled into one another to form aggregated streams of content. Not only that, users and/or developers can increasingly control how much of one type of content should be bled in with another.

The use of RSS feeds as conduits for distributing and managing content, data, and media acts as a complement to more programmatic displays of appropriate relational data in applications via such means as ODBC, JDBC and SQL. Whereas these data access protocols target structured content, the RSS and/or Atom feeds open up the spigot to much more information.

What's newly powerful is that nearly any kind of content can be driven through these feeds -- from documents, spreadsheets, and data to video, blogs, podcasts and online HTML instruction manuals. Feed Bleeds allow for human knowledge in natural language to mingle and complement IT-based assets such as data, application services and automated event-driven processes. Think of it as broad integration on the cheap -- and fast.

Unlike programmatic approaches, the developer can hand off to the end users the subscription to and fine-tuning of the content feeds. Users can adjust how much or how little on a subject they want. Businesses can control which feeds are allowed into the network. Get general information on one business subject and highly specific content on another. The needs of the work process determine the right mix of procured feeds.

Indeed, users can begin to use in-house search or online directories to find the syndicated content they wish to add to their applications and process views. We're now seeing a lot more custom enterprise applications that contain and exploit RSS-based content. We're seeing enterprises identify more in-house content that they should expose as feeds.

I think the new Feed Bleed benefits are too powerful to ignore. By quickly finding information on almost any topic that's delivered through lightweight syndication, subscriber/aggregators can shape that information flow to help them in their work, then adjust the content qualitatively and quantitatively as needed to best contour the information to the tasks at hand.
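The core of a "Feed Bleed" is mechanically simple, which is much of its appeal. Here is a minimal, hypothetical sketch in Python (the feed contents, the `bleed` function, and its `limit_per_feed` knob are all invented for illustration): several RSS feeds are parsed and bled into one aggregated stream, with a per-feed limit standing in for the user's ability to tune how much of each source flows in.

```python
import xml.etree.ElementTree as ET

# Two tiny sample RSS documents standing in for real subscribed feeds.
FEED_A = """<rss><channel><title>Ops</title>
<item><title>Server patched</title></item>
<item><title>Outage resolved</title></item>
</channel></rss>"""

FEED_B = """<rss><channel><title>Sales</title>
<item><title>Q3 orders up</title></item>
</channel></rss>"""

def bleed(feeds, limit_per_feed=None):
    """Merge several RSS feeds into one aggregated stream of
    (source, item title) pairs. limit_per_feed lets the subscriber
    tune how much of each feed bleeds into the combined view."""
    merged = []
    for xml_text in feeds:
        channel = ET.fromstring(xml_text).find("channel")
        source = channel.findtext("title")
        items = channel.findall("item")
        if limit_per_feed is not None:
            items = items[:limit_per_feed]
        for item in items:
            merged.append((source, item.findtext("title")))
    return merged

# One item from each feed: general on one subject, less on another.
stream = bleed([FEED_A, FEED_B], limit_per_feed=1)
print(stream)
```

A real deployment would fetch feeds over HTTP and handle Atom as well, but the aggregation step itself stays this lightweight, which is why the approach is "broad integration on the cheap."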

Developers can give the users the tools to make content appear in context to business processes. RSS-enabled Windows Vista and many freely available, standalone news readers help, but a cadre of back-end servers with associated APIs also now allow the productive exploitation of feeds, content and services within applications. We're even seeing a conceptual page borrowed from service oriented architecture (SOA) in the form of RSS feed "buses." This really is a case of Web 2.0 leading to Enterprise 2.0, leading to mainstream enterprise IT.

On-premises servers provide the management and integration of feeds. There are also on-demand feed tools. Onsite feed bleed providers include Apatar, Inc., JackBe, Kapow Technologies, RSSBus, and Strikeiron, Inc. These suppliers allow all kinds of content (HTML, XML, PDFs, spreadsheets, CMS, RDBs, SOAP, REST, as well as RSS/Atom) to be bled together, organized, managed and presented. Online mashup tools come from Dapper, OpenKapow, Teqlo, and Yahoo Pipes, among others. Apatar also has a hosted online offering in the works.

What's more, software as a service (SaaS) business applications providers like Salesforce.com (maps and data merge) and Workday are providing more mashups and feeds-based and enhanced services. If it's good for a SaaS provider, it should be good for an enterprise (as it acts as the service provider to its internal and partner users).

The open source world is also a fan of feed bleeds. An increasingly effective lightweight database aggregation approach involves creating specific feeds of data from, say, MySQL data and SugarCRM applications, that are then aggregated into common feeds that provide a single view of a customer, or an order, or a business process. This allows for whole new kinds of workflows, applications, and processes -- but on an agile time-frame. Interfaces are quickly evolving to allow for drag and drop means to create and adapt feeds. Even a business manager could do it!

IBM's vice president of emerging technologies, Rod Smith, is a fan of giving users the ability to fine-tune the content they need for their jobs. IBM itself has produced what it calls a "situational application" tool, a mashup enabler built on Zend Framework called QEDWiki (Quick and Easily Done). Smith recently told me he likes the idea of bringing together the Web 2.0 and enterprise IT communities so they can begin to work together, even if they don't necessarily speak the same language.

As Web 2.0 empowers younger workers to manage content online in new ways, they will want to use similar approaches on the job. Should this be done via an end-run around IT? Or should IT embrace and extend mashups and feed bleeds?

I think it's clear that this one is too good to ignore.

Thursday, August 16, 2007

Apache Camel addresses need for discrete infrastructure for services mediation and routing

Read a full transcript of the discussion. Listen to the podcast. Sponsor: IONA Technologies.

One size fits all has its limits. Most developers prefer to cherry-pick their infrastructure resources, keep them lightweight, and remain as agile as they can.

Taking a clue from this philosophy, the Apache Software Foundation has dozens of open source projects under way to build out the discrete infrastructure assets through community input and involvement, while also providing a needed balance between choice and automation.

One such project, Apache Camel, a sub-set of Apache ActiveMQ, is nearing maturity milestones that will make it a unique approach to middleware services mediation and routing. It's also becoming an essential ingredient to IONA's FUSE offerings.

To learn more about Apache Camel and its value to developers and operators, I recently sat down with James Strachan, technical director of engineering at IONA Technologies, and longtime Apache developer and committer.

Here are some excerpts from the discussion:

The problem we’re trying to address is the routing and mediation problem, which lots of people have. They're taking data from various components and sources -- whether it’s files, databases, message queues, Web services, instant messaging, or other data systems -- integrating them, formatting them, and connecting them to other systems. From a higher level perspective, this could be for legacy integration of systems, for smart routing, for performance monitoring, or for testing and monitoring transaction flows.

Apache Camel grew organically from code and ideas from a bunch of other Apache projects, particularly Apache ActiveMQ and Apache ServiceMix. We found that people wanted to create and use patterns from the "Enterprise Integration Patterns" book in many different scenarios.

I definitely recommend people read Gregor Hohpe's book "Enterprise Integration Patterns." He offers a really good patterns catalog of how people should do mediation and integration. Rather like the original Gang of Four "Design Patterns" book, which describes low-level programming things, Gregor’s book describes very well how enterprise integration patterns (EIPs) can work and gives us a language for describing them.

Some people wanted to use these patterns inside an Enterprise Service Bus (ESB), some people wanted to use these patterns inside a message broker, and other people wanted to use these patterns inside an application itself or to talk between messaging providers. Still other people wanted to use them inside a Web services framework or some other communication platform.

What we tried to do with Camel was give it the smallest footprint possible, so that it can be reused anywhere, whether in a servlet, in the Web services stack, inside a full ESB, or a messaging application.

We work very closely with the ServiceMix community at Apache, which has created a complete Java Business Integration (JBI)-compliant container and a set of JBI components. Camel can be deployed within the ServiceMix ESB among JBI components, but some people don’t use JBI; they may just use Java Message Service (JMS), or Web services, or JAX-WS clients, or whatever. So, we try to make Camel agnostic to technologies. You can use it within frameworks like Spring, JBI, or OSGi, or you can use it within any application.

Across all of IT we’re seeing increased specialization in many different areas, where the specialization helps us solve a problem at a higher level. ... And because we’re doing it in an open-source environment where there’s a community involved, there's more likelihood that this will be applied across many other different types of platforms and technology.

Camel is unique in a number of ways. What we’re doing with Camel is defining a high-level language to describe EIPs, which I don’t think anybody else has done before. The other thing that’s unique is that this language very closely maps to components that work inside Spring 2. ... What we’re doing is raising the abstraction level to make things very simple, reducing the amount of XML we have to write, but still exposing the wire-level access if you need to do the really hard stuff and roll your sleeves up and get down and dirty.

People don’t have to worry about the low-level details of how to use JMS, how to use JBI, or how to wire together the Spring components correctly, and so forth. We're giving people a nice, simple, high-level abstraction, but yet we are exposing all the power of frameworks like Spring and still exposing the low-level details if you need them.
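The "high-level language for EIPs" Strachan describes is a fluent builder: a readable chain of steps that hides the wiring underneath. The toy builder below sketches that idea in Python; it is an illustration of the style only, with hypothetical method names, and is not Camel's real Java DSL.

```python
class RouteBuilder:
    """A toy fluent routing DSL: from_() a source, filter and transform
    messages, deliver to() a sink. Each method returns self so the steps
    read as one chained route definition."""

    def __init__(self):
        self.steps = []

    def from_(self, source):
        self.source = source
        return self

    def filter(self, predicate):
        self.steps.append(("filter", predicate))
        return self

    def transform(self, fn):
        self.steps.append(("transform", fn))
        return self

    def to(self, sink):
        self.sink = sink
        return self

    def run(self):
        # Push each message through the steps in order, dropping any
        # message that fails a filter.
        for message in self.source:
            keep = True
            for kind, fn in self.steps:
                if kind == "filter" and not fn(message):
                    keep = False
                    break
                if kind == "transform":
                    message = fn(message)
            if keep:
                self.sink.append(message)

# Hypothetical usage: keep only order messages, extract their IDs.
inbox = ["ORDER:42", "PING", "ORDER:43"]
orders = []
(RouteBuilder()
    .from_(inbox)
    .filter(lambda m: m.startswith("ORDER"))
    .transform(lambda m: m.split(":")[1])
    .to(orders)
    .run())
# orders is now ["42", "43"]
```

The point of the style is the one Strachan makes: the route reads at the level of the integration pattern, while the loop inside run() -- the part nobody wants to write repeatedly -- stays hidden unless you need it.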

People really want small and simple-to-use components that solve the problems they have. I've seen that throughout all of our customers. People ask for very specific solutions. They don’t say, "Give me a SOA." They say, "I need a message router," "I need a message bus," "I need an ESB," or "I need a services framework." Often, people have very specific requirements and are very much cherry-picking the best-of-breed components from the open-source tool set.

If you go to http://open.iona.com, there is a whole raft of documentation, online forums, a wiki, and so forth.

Read a full transcript of the discussion. Listen to the podcast. Sponsor: IONA Technologies. Produced by Interarbor Solutions: analysis, consulting and rich new-media content production.

Wednesday, August 15, 2007

Citrix makes bold virtualization move with XenSource acquisition

Citrix Systems Inc. today roared full throttle into the ever-expanding virtualization arena, when it announced its intention to acquire XenSource, Inc. of Palo Alto, Calif. The news comes right on the heels of VMware's huge IPO pop.

The $500-million acquisition will provide the Fort Lauderdale, Fla.-based Citrix with a major piece of the virtualization puzzle, adding XenSource's infrastructure solutions, based on the open-source Xen hypervisor, to Citrix's existing application and presentation technologies. This will add the vital OS component to its virtualization engine.

Citrix has said it expects the virtualization market to grow by $5 billion over the next four years. Today's move will put the company right in line for a piece of that pie. It had better, given the rich price Citrix is paying for XenSource.

The acquisition also sets the stage for Citrix to move boldly into the desktop as a service business, from the applications serving side of things. We've already seen the provider space for desktops as a service heat up with the recent arrival of venture-backed Desktone. One has to wonder whether Citrix will protect Windows by virtualizing the desktop competition, or threaten Windows by the reverse.

The acquisition also piggybacks on XenSource's release of XenEnterprise V4, which added new management, availability, and ease-of-use features to the company's flagship product. That release earned high marks in a head-to-head product comparison published this week by Computer Reseller News (CRN). XenSource also said its installed base has doubled to over 650 customers in the last 90 days.

Fellow ZDNet blogger Dan Kusnetzky has some good thoughts on the news.
Historically, the attempts to meld technology companies having dissimilar management styles and cultures have not turned out very well. It doesn’t take long to find disasters: IBM’s acquisition of Rolm and CA’s acquisition of Ingres are really good reference points. In both cases, the key talent that made the companies what they were left quickly after their home was acquired. Both IBM and CA were left holding very expensive shells of what had been thriving, innovative companies. It’s hard to imagine how Citrix will be able to meld an open source company into their heavily Microsoft-focused environment.

The move further cements an already strong relationship with Microsoft on the part of Citrix, but complicates the picture when it comes to open source. XenSource has worked with Microsoft to ensure interoperability between XenSource products and the upcoming Windows hypervisor, code-named "Viridian." But Citrix has worked with Microsoft much longer and more deeply in the Windows application delivery, application networking, and branch office infrastructure markets.

Indeed, few companies have straddled the Microsoft co-opetition divide as well as Citrix. Interestingly, both companies have thrived alongside each other, even while on a strategic level one could easily project potential discord ... some day.

While Citrix has had a strong presence in user-tier virtualization, the XenSource acquisition will extend the company's reach into the logic and data tiers, extending virtualization to the servers that run the business logic of applications and the storage systems that manage application data.

Citrix said today it intends to distribute the XenEnterprise product line through more than 5,000 channel partners with expertise in datacenter solutions, and to work with server and datacenter infrastructure partners to create additional routes to market through OEM channels.

When it comes to the desktop, Citrix says the combination of its Desktop Server with XenEnterprise v4 will create comprehensive desktop solutions, and Citrix intends to incorporate such other Citrix technologies as:

  • EdgeSight -- for end-user experience monitoring
  • Access Gateway -- for secure application access
  • WANScaler -- for accelerated delivery to branch office users
  • GoToAssist -- for remote desktop support

The deal, which includes the assumption by Citrix of approximately $107 million in unvested stock options, has already been approved by the boards of directors of both firms, and now requires regulatory approval and the approval of XenSource stockholders.

The deal wasn't a total surprise, and was predicted by Dennis Simson and Philip Winslow writing at DABCC last week. Their take on the acquisition was generally upbeat:

While these companies’ virtual infrastructure management tools are more immature versus more-established vendors, if Citrix can develop robust management software through increased R&D while leveraging the open source Xen hypervisor, Citrix could establish itself as a strong competitor in both desktop and server virtualization within two to three years.

ZDNet bloggers Dan Farber and Larry Dignan see the move as an opening gambit in a virtualization land grab that got underway with VMware's IPO.

Not everyone is jumping for joy over the acquisition. CRN found some customers who expressed dismay that joining with Citrix would diminish XenSource's agility and turn its offering into just another commodity product. And perception matters greatly among the community development crowd, a place where Citrix has not had much experience to date.