Thursday, August 9, 2007

SOA Insights analysts explore SOA appliances, BPEL4People and GPL v3

Read a full transcript of the discussion. Listen to the podcast.

Appliances for IT infrastructure have evolved to include everything from email servers to network routers to XML accelerators. Some would argue that "appliances" can be hardware, software, or both. Purists hold to a stricter definition, one they say will make these locked-down, all-inclusive devices greatly appealing to growing legions of IT operators and SOA architects.

For the latest BriefingsDirect SOA Insights Edition podcast, our panel of IT analysts and experts is joined by someone who knows appliances inside and out, Jim Ricotta, vice president and general manager of appliances within IBM's software group. Jim offers some hints that IBM is betting big on appliances across more aspects of IT solutions.

Our expert panel digs into this and other recent trends in SOA and enterprise IT architecture in the latest BriefingsDirect SOA Insights Edition, volume 21. Our group also examines the emerging BPEL4People specification for making humans and SOA better, if only loosely, coupled. The discussion ends with a look at the GPL v3 and the importance, or not, of the Apple iPhone.

So join noted IT industry analysts and experts Tony Baer, Jim Kobielus, Brad Shimmin, and Todd Biske for our latest SOA podcast discussion, hosted and moderated by yours truly.

Here are some excerpts:
... On IT infrastructure appliances and SOA

The basic concept of an appliance is to allow customers to get their projects going more quickly, experience lower total cost of ownership (TCO), etc. ... We have a broader remit and we are looking at a number of different appliance efforts for different parts of the IBM product set.

The idea with an appliance is that the clients don’t care what’s inside. They care about the functions that the device does. The way we have architected our product, we do have lots of choices. We can pick the right processors.

[But] it’s really much more ... than performance. ... it’s about TCO and then "time to solution" and "time to deployment."

I’ve heard big global IT organizations, when they do their TCO calculation, say a router is $100 a month to support, a server is $500, and a DataPower SOA appliance is maybe $200 to $250. Those are the kind of ranges I hear.

So, we are talking a potential 50 percent reduction in total cost? Yes.

Our customers say, “Geez. We could do what your box does with software running on a server, but the operations folks tell us it would be two times or four times more expensive to maintain, because we have to patch all the different things that are on there. It’s not the same everywhere in the world in our infrastructure. Whereas with your box, we configure it; we load a firmware image, and it’s always the same wherever it exists.”

So, our view is an appliance is three things that the customer buys at the same time: They buy hardware, software, and support, and it’s all together. That’s really what we think is the core value proposition.

A manager I worked for had the term "Dial-tone Infrastructure." You want to plug it in, pick it up, and it works. That’s the model that everybody is trying to get to with their solutions. But, when you're dealing with an appliance, you have to have that level of integration between the hardware and the software, so that you're getting the absolute best you can out of the underlying physical infrastructure that you have it on.

Any software-based approach that runs on commodity hardware is not going to be optimized to the extent that it can be. You look at where you can leverage hardware appropriately and tune this thing to get every last ounce of performance out of it that you can.

[SOA and appliances] dovetail because the very concept of an appliance is something that’s loosely coupled. It’s a basic, discrete component of functionality that is loosely coupled from other components. You can swap it out independently from other components in your architecture, and independently scale it up or scale it out, as your traffic volume grows, and as your needs grow. So, once again, an appliance is a tangible service.

SOA has its own version of an ISO stack with the WS-Standards and the layers from things like BPEL, all the way down to XML and the basics. That’s what enabled this approach of putting together a device that supports a bunch of these standards and can fit right into anybody’s SOA architecture, no matter what they are doing with SOA.
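
To make that layering concrete, here's a minimal sketch in Python of the kind of SOAP envelope that sits near the bottom of the WS-* stack -- the sort of XML payload an appliance parses, validates, and accelerates. The service namespace and the getOrders operation are hypothetical, for illustration only.

```python
# A minimal sketch of the XML/SOAP layer at the bottom of the WS-* stack.
# The service namespace and the getOrders operation are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/orders"  # hypothetical service namespace

def build_envelope(customer_id):
    """Build a SOAP 1.1 request envelope for a hypothetical getOrders call."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
    op = ET.SubElement(body, "{%s}getOrders" % SVC_NS)
    ET.SubElement(op, "{%s}customerId" % SVC_NS).text = customer_id
    return ET.tostring(envelope, encoding="unicode")

print(build_envelope("C-1001"))
```

An appliance's job, in effect, is to handle millions of documents like this one -- schema validation, transformation, routing -- in purpose-built hardware rather than general-purpose software.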

We see ESB as a key part of any SOA architecture and deployment. If you do it properly ... you tend to get a high-performance solution. You've done optimization. You've done a pruning back of all the potential functions. ... You tend to get good performance from it, as well as the other benefits I pointed to, easy deployment and low TCO. So, given that ESB is the core of SOA, in many ways having an appliance alternative is important.

A lot of this space in the middle in SOA is all about what I would call a "configure-not-code" approach. Appliances, by definition, are something you configure, not something that you are going to be developing code for. So, it’s really tuned for an operational model, and not for a developer having to go in and tinker around with it.

And that's really where a lot of the savings can come in the total cost of ownership. It's in how much work you have to go through to actually make a change to the policies that are being enforced by this software appliance or device, and there are big differences between the products out there.
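
To illustrate the configure-not-code idea, here is a small sketch, in Python, where the enforcement engine is fixed and only a declarative policy table changes between deployments. The policy fields are invented for illustration; a real appliance would expose something similar through its firmware configuration.

```python
# Configure-not-code: the engine below never changes; operators only edit
# the declarative POLICY table. All field names are hypothetical.
POLICY = {
    "max_message_kb": 512,                       # reject oversized payloads
    "require_tls": True,                         # encrypted transport only
    "allowed_services": {"orders", "inventory"}  # the services exposed
}

def admit(message):
    """Apply the configured policy to an inbound service request."""
    if message["size_kb"] > POLICY["max_message_kb"]:
        return False
    if POLICY["require_tls"] and not message["tls"]:
        return False
    return message["service"] in POLICY["allowed_services"]

print(admit({"size_kb": 40, "tls": True, "service": "orders"}))   # True
print(admit({"size_kb": 900, "tls": True, "service": "orders"}))  # False
```

Changing the policy means editing the table and reloading, not writing or patching code -- which is where the operational savings come from.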

An appliance can act as an enabler for other pieces of software by providing the level of performance and scalability that those pieces can't deliver on their own, as we're seeing with ESBs and other areas. Those pieces of software desperately need some piece of hardware somewhere that can get them the information they need in a timely manner.

We [at IBM] have some discussions underway with network providers that have big corporate clients who are now launching their first B2B Web services, and they are basically utilizing SOA-type functions between organizations across Wide Area Networks. These carriers are looking at how to provide a value-added service, a value-added network to this growing volume of XML, SOA-type traffic. We see that as a trend in the next couple of years.

On BPEL4People for SOA ...


The BPEL4People specification came to fruition this week. ... It’s interesting that they made both spec proposals separate. But, it’s not any type of surprise. IBM and SAP have been talking about this for about 18 months to two years, if I recall. What was a little interesting was that Oracle originally dissented from this, and now Oracle is part of that team.

The SOA folks have looked at BPEL and found something interesting. It does well with machine-to-machine interactions, or at least with processes designed for automation that trigger other automated processes based on various conditions and scenarios, and do it dynamically. But the one piece that was missing is that most processes are not 100 percent automated. There's going to be some human input somewhere. It was pointed out that this is a major shortcoming of the BPEL spec.

So, IBM, SAP, Oracle, BEA, Adobe and Active Endpoints have put together a proposal to patch this gap, and they’re going to submit it to OASIS ... BPEL4People. We’re going to add a stopping point to say, "Put a human task here." That’s essentially BPEL4People. It’s a little more than that, but essentially boils down to that.
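
A rough way to picture that "stopping point": the sketch below, in Python rather than BPEL's XML, models a process that runs automated activities until it reaches a human task, then blocks on a person's decision before resuming. The activity names and the input() prompt are invented for illustration.

```python
# A toy model of a BPEL4People-style process: automated activities run
# until a human task is reached, then the process waits on a person.
# Activity names are illustrative only, not part of any spec.
def validate_order(order):
    order["valid"] = order["amount"] > 0
    return order

def human_approval(order):
    # The "people activity": a real engine would park the process
    # instance here until someone in the assigned role completes it.
    answer = input("Approve order for $%s? [y/n] " % order["amount"])
    order["approved"] = answer.strip().lower() == "y"
    return order

def ship_order(order):
    if order.get("approved"):
        print("Shipping order", order["id"])
    return order

PROCESS = [validate_order, human_approval, ship_order]
order = {"id": "A-42", "amount": 1250}
for activity in PROCESS:
    order = activity(order)
```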

Where I tend to see the value in this is that invoking a human task as a service doesn't necessarily call for orchestration. You don't necessarily have to orchestrate in order to invoke a human task.

I think that we definitely need this. There's a constant tension with trying to take a business-process approach within IT when developing solutions. If you look at the products that are out there, you have one class of products that are typically called "workflow products" that deal with the human task management, and then you have these BPM products or ESBs with orchestration in them that deal with the automated processes. Neither one, on their own, gives you the full view of the business process.

As a result, there’s always this awkward hand-off that has to occur between what the business user is defining as the business process and what IT has to turn around and actually cobble together as a solution around that. Finally getting to a point where we’re saying, "Okay, let’s come up with something that actually describes the true business process in the business definition of it," is really important.
Read the full transcript for more IT analysis and SOA insights. Listen to the podcast. Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production.

Tuesday, August 7, 2007

Sybase demos swift use of iPhone as mobile client to corporate email, calendar, PIM

For those who think the Apple iPhone will not be a corporate mobile client any time soon, think again.

Sybase demonstrated today a straightforward way to use an Apple iPhone to access such enterprise email stalwarts as Microsoft Exchange and IBM Lotus Domino servers. Not only were the email and associated attachments available via the iPhone's native email client, but real-time access to the business user's corporate calendar and address book was there too.

Today's demonstration, before a group of industry and financial analysts at an annual Sybase user conference in Las Vegas, also showed unified communications functions, including click-to-call on the iPhone from the online corporate directory. Sybase says its capability to provide such integration is unique among mobile infrastructure vendors.

For the demo, Sybase used its Information Anywhere Suite infrastructure from its iAnywhere product line to deliver the corporate messaging goods to the iPhone client. The messaging integration via Information Anywhere is secure by using SSL, does not require IMAP, and connects through existing ports.

That means that corporate IT personnel can accommodate business users who want to use iPhones to access their core corporate communications without a lot of IT overhead. It takes five minutes to set up a user, following the same basic steps as setting up a Windows Mobile connection, said Sybase.

The enterprise email-to-iPhone support service is not yet publicly available, but it soon could be. Already many enterprises in the U.S. are asking Sybase and its partners for ways to use the iPhone for corporate messaging. Such inquiries are also coming from Europe, where the iPhone is not yet even available.

No details were forthcoming on availability of the iPhone connectivity services, though Sybase certainly seems to like the idea of working closely with Apple to make the capability a commercial reality.

"We will do some work with Apple to make this a very powerful experience," said Terry Stepien, president of Sybase's iAnywhere division.

While many observers have pegged the iPhone as a consumer device, Sybase, in Dublin, CA, examined the device and found that the existing Apple OS X-based APIs are strong enough for enterprise messaging use. Recognizing that IT messaging administrators resist IMAP standards due to security concerns, Sybase made the iPhone a corporate client without using IMAP.

Quite a bit more could be done, however. Stepien said that the native calendar client on iPhone could be exploited if APIs for that were available.

From where I sat watching the demo, an Apple-Sybase solution to satisfy those who want to add the iPhone to the other sanctioned corporate mobile clients is a no-brainer. This is a development to keep an eye on, one that may bring the iPhone to an influential class of business users sooner than most thought possible.

Monday, August 6, 2007

Look for more Linux-based mobile devices in a Palm near you

Linux on mobile devices got a boost this week with a slew of announcements out of LinuxWorld in San Francisco.

One telling announcement came with Palm's decision to go with Wind River Systems' Linux as the platform for the upcoming Palm Foleo, the sub-compact companion for smartphones. While the Foleo was designed to be Linux-based from the get-go, Palm has now settled on Wind River's device-optimized version of Linux.

Wind River will also provide its development suite, professional services, and customer support in bringing the Foleo to market. The Foleo is billed as a smartphone adjunct that allows users to view and work on phone-based email with a larger screen and full-size keyboard. It also allows Web surfing, document editing, and PowerPoint presentations.

Without having made a formal debut, the Foleo is receiving mixed notices from the reviewer community. ZDNet's George Ou thinks that, while it needs some tweaking, it poses a threat to the laptop. On the other side of that fence, Alice Hill from Real Tech News thinks it's going to bomb, and gives five reasons why it will fail.

As with most new devices, only time will tell. A key to success will be bulk purchases by enterprises for their edge and remote workers. Not sure the pricey iPhone makes sense there (yet).

Meanwhile, the LiMo Foundation, which is dedicated to the adoption of Linux in the mobile device community, announced what it called "a significant membership surge," with the addition of five new core members and eight associate members.

The core members, who will participate on the board, include Aplix, Celunite, LG Electronics, McAfee, and Wind River. Associate members include ARM, Broadcom, Ericsson, Innopath, KTF, MontaVista Software, and NXP B.V.

LiMo's goal is to create the world's first globally competitive, Linux-based software platform for mobile devices, and organizers expect to see the first handsets supporting the LiMo platform on the market in the first half of 2008.

In other Linux-mobile news, Motorola and Wind River have formed a strategic alliance designed to provide integrated AdvancedTCA® and MicroTCA™ communication platforms with Carrier Grade Linux and VxWorks runtimes. The alliance is aimed at providing bundled hardware and software solutions for telecom, military, aerospace, medical, and industrial automation.

Disclosure: Wind River has been a sponsor of BriefingsDirect podcasts, which I produce and moderate.

Friday, August 3, 2007

IBM adds to 'information on demand' drive with Princeton Softech acquisition

IBM is beefing up its "information on demand" initiative with the announcement today of its intention to acquire Princeton Softech, Inc.

Princeton's Optim cross-platform data management software will provide a big boost in meeting the needs of data governance, as well as controlling costs from an increase in data volumes. This becomes a primary corporate concern in light of estimates that storage management may soon represent nearly 50 percent of an annual IT budget.

As organizations are required to retain data longer for auditing, cost becomes an issue if archival data remains on operational systems, eating up storage capacity and degrading performance. Princeton's archiving offerings help remove that data from those systems, while keeping it accessible and usable.

The other prong of regulatory requirements comes with security of data, especially customer information that is deemed private. In addition to the costs of maintaining huge amounts of historical data on operational systems, the potential penalties from exposing private customer data can be daunting. Princeton's data-masking capability is designed to preserve data integrity while enabling efficient archiving.

Princeton also provides test data management software that creates test databases, in which sensitive customer data can be masked and the underlying data protected from corruption during the tests.

The acquisition is the latest in a long string of smaller, often private companies that IBM has been buying to fill out its data-lifecycle offerings. As we've said before, getting your data act together is an essential aspect of being able to move to SOA. This purchase seems to buttress that approach.

Princeton Softech, with 240 employees, is privately held, and has been in operation since 1989. No financial details were disclosed for the deal, which needs regulatory approval. Both companies hope the acquisition will be complete within the next two months.

OpenSpan report card: Plays well with others

Quick question: What is the most-used technology for integrating apps on the desktop? If you said "copy-and-paste," then you'd be right, and it probably means you've been listening to Francis Carden, CEO of OpenSpan Inc.


Carden uses the copy-and-paste statistic to emphasize how little integration has advanced in the industry, despite all the effort of the last two decades.

OpenSpan of Alpharetta, Ga., offers what it claims is a new and unique way to integrate the multitude of currently siloed apps on which many operations rely today. OpenSpan works by identifying the objects that interact with the operating system in any program -- whether a Windows app, a Web page, a Java application, or a legacy green-screen program -- then exposing those objects and normalizing them, effectively breaking down the walls between applications.

The OpenSpan Studio provides a graphical interface in which users can view programs, interrogate applications and expose the underlying objects. Once the objects are exposed, users can build automations between and among the various programs and apply logic to control the results.

For example, the integration can be designed to take information from legacy applications, pass it to an Internet search engine, perform the search, and pass the result on to a third application. All this can be done transparently to the end user with a single mouse click. The operation can run with the applications involved open on the desktop, or they can run in the background while the integration runs in a dashboard.
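
OpenSpan's actual programming model isn't spelled out here, so the sketch below only illustrates the general expose-and-wire pattern Carden describes: surface objects from two siloed apps, normalize them, and connect them with a bit of logic. Every class and method in it is hypothetical; none of this models OpenSpan's real API.

```python
# A hypothetical sketch of the expose-and-wire pattern described above.
# These classes stand in for objects interrogated from two host apps.
class ExposedField:
    """A normalized UI object (text box, grid cell) from a host app."""
    def __init__(self, name, value=""):
        self.name, self.value = name, value

class LegacyApp:
    def __init__(self):
        self.account_field = ExposedField("account", "ACCT-7788")

class SearchApp:
    def __init__(self):
        self.query_field = ExposedField("query")
    def run_search(self):
        return "results for " + self.query_field.value

def automation(legacy, search):
    # The wiring: copy the legacy value into the search app and invoke
    # it -- the "single mouse click" flow, with no source-code changes.
    search.query_field.value = legacy.account_field.value
    return search.run_search()

print(automation(LegacyApp(), SearchApp()))
```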

Setting up such an integration in OpenSpan can take anywhere from a few minutes to a few days, depending on the complexity of the operation and the number of programs involved.

What happens if the objects in one of the source programs change, something that could happen frequently if third-party Web pages are involved? According to Carden, it's a simple matter of re-interrogating the affected objects on the revised page and replacing them in the original workflow, using the OpenSpan Studio.

While others are trying to do what OpenSpan does, Carden says that the others do it in different ways and that his company's approach is unique. It does not require specific programming knowledge, nor does it require access to the source code of the underlying programs or recompiling those codes.

"I like to say were the 'last mile to SOA,'" Carden said. "We can take a 25-year-old application and make it consume a Web service."

According to Carden, digging into applications and objects at the operating-system level was always possible for what he calls "rocket scientists." The only problem, he says, is that it was a time-consuming and basically one-off effort. OpenSpan productizes the process and makes it available to non-rocket scientists.

For some companies, integrating applications is critical to performance and agility. Carden tells of one client, a large bank, where workers had to deal with 1,600 applications -- although that's a little extreme. The average is about eight.

OpenSpan is currently riding high with an infusion of venture capital from Sigma Partners and Matrix Partners and an infusion of talent with the addition of four key players who were formerly executives with JBoss.

Microsoft's 'service-enabled' approach is really only sorta-SOA

A new article on Redmondmag.com examines the fascinating topic of Microsoft and SOA. Microsoft's strategy -- pivoting on the strengths of Windows Communications Foundation (WCF) as well as (huh?) applications virtualization -- is different from earlier "embrace and extend" market assault campaigns.

Microsoft's SOA strategy amounts to embrace and contract. They want to allow services-enablement using the newest (read: massive upgrades) Microsoft infrastructure to offer a modest embrace of heterogeneity. However, the high-level rationale for SOA is to make heterogeneity a long-term asset, rather than a liability. You should not have to massively upgrade to .NET Framework 3.0 (and tools, and server platforms, and runtimes) to leverage SOA. As we know, SOA is a style and conceptual computing framework, not a reason to upgrade.

I'm liberally quoted in this article, so you can gather the gist of my observations in it. What is curious is the response Microsoft proffers to any suggestion that Windows Everywhere is not somehow synonymous with SOA.

I'm especially betwixt by the assertion that virtualizing a monolithic Windows application, so that it can be hosted (on Windows servers) and accessed via a browser as a service, allows it to "... be part of an SOA-like strategy."

Sorry, dudes, that doesn't even come close to a SOA-enablement of that application. Now, being able to deconstruct said application into a bevy of independent services that can be easily mixed and matched with others (regardless of origin, runtime, framework or operational standards) gets a wee bit closer to the notion.

How about delivering those legacy Windows monolithic server stack applications as deconstructed loosely coupled services on virtualized Windows runtime instances on a hot-swappable Linux on multicore x86 blade farms? Yeee-ha!

It's also curious to see the rationale for Microsoft's sorta-SOA through the lens of needing to gain "... the ability to rapidly evolve an application because you need to change things in near-real time," said Steve Martin, director of product management for Microsoft's Connected Systems Division, as quoted in the article.

This may be RAD or Agile or even Lean -- but it really isn't quite SOA. What we're thinking on the SOA business value plane is the ability to "rapidly evolve" business processes, to actually not have to muck around with the underlying applications and data (nor necessarily upgrade their environments) but instead extract and extend their value into a higher abstraction where transactions, services, data, metadata, and logic can be agilely related and universally governed.

Not sure if BizTalk Server has gained that feature set yet. Visio? Well, obviously you have not yet upgraded sufficiently, dear reader.

It's plain from the article that Microsoft will define SOA to suit its strengths and sidestep its weaknesses. At least on that count Microsoft is adhering to the standard industry approach. But for Microsoft, as it seeks a higher profile in the core datacenters of the global 2000, the risk of not applying SOA at its higher business value is significant. Better stated, Microsoft may win some battles but lose the war.

Already, competitors such as IBM, Oracle and BEA are moving toward offerings that reduce the emphasis on applications (virtualized or not) and instead place the emphasis on business-process maneuverability. And just as with SaaS, it's not about the technology -- it's everything about the business paybacks (top-line and bottom-line) and overall IT risk reduction. The business models of both the vendor and the customer need to match up.

IBM's strategies around reuse, and its focus on creating pools of services that can be applied not generally to IT but specifically to business verticals, industries, and even individual enterprises (think of it as the long tail wagging SOA), are ultimately an aligning influence between IBM and its partners and customers. What runs IT beneath is more optional, so you might as well seek the best cost-benefit approach, which will include open source infrastructure, SaaS, and all the other options in the market.

If you save big on business agility, the cost differences of the underlying infrastructure (say, between IBM or BEA or Microsoft) are a rounding error -- but you have to deliver business agility.

IT needs to move holistically at the speed of businesses. Microsoft seems to be saying with sorta-SOA that, "Hey, it will cost you less to virtualize your Windows applications so use us, even though you gain less total agility." Does not compute, Will Robinson.

Microsoft's sorta-SOA approach may have short-term benefits for Redmond, but medium- to long-term it divides Microsoft from its customers and prospects and their needed goals for IT. I've said it before, what's good for Microsoft is not necessarily good for its customers, their balance sheets, or their business agility. And only Microsoft can change that.

An enterprise that embraces sorta-SOA from Microsoft alone will compete with an enterprise that embraces and extends SOA liberally and openly, leveraging all the options in the global market of labor, information, infrastructure, services, and flexibility in both business and hosting models. As inclusive IT agility becomes the primary business enabler, globally and specifically, the choices will be fairly stark fairly quickly. They will be starker still, and sooner, for ISVs, service providers, and telcos.

At some point savvy IT leaders may ask whether dragging Windows Everywhere along for the ride makes sense. Why not just use sorta-SOA to expose all the existing Microsoft stuff, cut bait, and move on to the real differentiating IT functionality and productivity?

Will Microsoft's vision of Windows Everywhere "plus services" do better for their clients than enterprises that just move to services everywhere? This is the ultimate question.

What's most likely about Microsoft's current SOA strategy is that it keeps them on the nuts and bolts level of good tools to make good applications and good services -- but at a price. Nowadays, making the services themselves, however, does not hijack the decisions on infrastructure, as it did in the past.

In a loosely coupled, virtualized world -- where hosting and deployment options abound -- there is nothing that locks in the applications as services to any platform. This is true on both the client and server, and more true at the higher abstractions of metadata and business processes.

And, sure, there will be new choke points, such as governance, policy, and semantic symmetry -- but none of them for SOAs seem to carry the added tax of necessary, routine, massive upgrades of specific essential components up and down a pre-configured stack just to work. Sorta.

The playing field is a bit more level. May the best IT approaches to total business agility win.

Disclosure: I am a paid columnist for Redmond Developer News, a sister publication to Redmond Magazine.

Tuesday, July 31, 2007

Red Hat ramps up virtualization drive for RHEL 5

The good news for virtualization just keeps on coming. Monday, we reported that start-up Desktone had gotten an infusion of venture capital and that Cisco was buying a chunk of VMware.


Now, Red Hat's Emerging Technologies Team has a blog posting that shows how customers are using virtualization for fun and productivity in Red Hat's Enterprise Linux 5.

The team's blog pushes the idea of paravirtualization, a technology that offers high performance but doesn't require special processor capabilities. The team believes this will help drive the adoption of virtualization more pervasively, so that paravirtualization becomes the default deployment for Red Hat Enterprise Linux 4 or 5 applications.

The blog includes a quick-hit bullet list of the main points and, for those with more time and interest, a longer discussion, as well as a case study.

Monday, July 30, 2007

SOA Insights analysts on Web 3.0, Google's role in semantics, and the future of UDDI

Listen to the entire podcast, or read a full transcript of the discussion.

The notion of a World Wide Web that anticipates a user's needs, and adds a more human touch to mere surfing and searching, has long been a desire and goal. Yet how close are we to a more "semantic" Web? Will such improvements cross over into how enterprises manage semantic data and content?

Our expert panel digs into this and other recent trends in SOA and enterprise IT architecture in the latest BriefingsDirect SOA Insights Edition, volume 17. Our group also examines Adobe's open source moves around Flex, and how UDDI is becoming more about politics than policy.

So join noted IT industry analysts Joe McKendrick, Jim Kobielus, Dave Linthicum and Todd Biske for our latest SOA podcast discussion, hosted and moderated by yours truly.

Here are some excerpts:
I saw one recent article where [the semantic web] was called Web 3.0, and I thought, “Oh, my Lord, we haven’t even decided that we are all in agreement on the notion of Web 2.0.”

[But] there is activity at the World Wide Web Consortium that's been going on for a few years now to define various underlying standards and specifications, things like OWL and SPARQL and the whole RDF and ontologies effort, and so forth.

So, what is the Semantic Web? Well, to a great degree, it refers to some super-magical metadata description and policy layer that can somehow enable universal interoperability on a machine-to-machine basis, etc. It more or less makes the meanings manifest throughout the Web through some self-description capability.

You can look at semantic interoperability as the global, oceanic concern. Wouldn't it be great if every single application, database, or file that was ever posted by anybody anywhere on the Internet somehow, magically, were able to declare its full structure, behavior, and expectations?

Then you can look at semantic interoperability in a well-contained way as being specific to a particular application environment within an intranet or within a B2B environment. ... The whole notion of a "semantic Web," to the extent that we can all agree on a definition, won’t really come to the fore until there is substantial deployment inside of enterprises.

Conceivably, the enterprise information integration (EII) vendors are providing a core piece of infrastructure that could be used to realize this notion of a Semantic Web, a way of harmonizing and providing a logical unified view of heterogeneous data sources.
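
For a concrete taste of the standards underneath that idea, here's a minimal sketch using Python's rdflib library (assuming it is installed) to merge two small RDF sources and answer one SPARQL query across both -- a toy version of the "logical unified view." The data is invented.

```python
# A toy "unified view": merge two RDF sources into one graph, then
# query both with a single SPARQL statement. Requires rdflib.
from rdflib import Graph

CRM = """
@prefix ex: <http://example.org/> .
ex:acme ex:name "Acme Corp" .
"""
BILLING = """
@prefix ex: <http://example.org/> .
ex:acme ex:balance "1200" .
"""

g = Graph()
g.parse(data=CRM, format="turtle")
g.parse(data=BILLING, format="turtle")  # merging is just parsing into one graph

rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?name ?balance WHERE { ?c ex:name ?name ; ex:balance ?balance . }
""")
for name, balance in rows:
    print(name, balance)
```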

Red Hat, one of the leading open source players, is very geared to SOA and building an SOA suite. Now, they are acquiring an EII vendor, which itself is very SOA focused. So, you’ve got SOA; you’ve got open source; you’ve got this notion of a semantic layer, and so forth. To me, it’s like, you’ve stirred it all together in the broth here.

That sounds like the beginnings of a Semantic Web that conceivably could be universal or "universalizable," because, as I said, it's open source first and foremost.

If we build on this, it does solve a lot of key problems. You end up dealing with universal semantics, how that relates to B2B domains, and how that relates to the enterprise domains.

As I'm deploying and building SOAs out there in my client base, semantic mediation ultimately is a key problem we’re looking to solve.

The average developer is still focused on the functionality of the business solution that they're providing. They know that they may have data in two different formats and they view it in a point-to-point fashion. They do what they have to do to make it work, and then go back to focusing on the functionality, not really seeing the broader semantic issues that come up when you take that approach.

One thing that's going to happen with the influence of something like Google, which has a ton of push in the business right now, is that ultimately these guys are exposing APIs as services. ... They're coming to the realization that the developers who leverage these APIs need to have a shared semantic understanding out on the Web. Once that starts to emerge, you're going to see a push down on the enterprise, if that becomes the de-facto standard that Google is driving.

In fact, they may be in a unique position to create the first semantic clearing house for all these APIs and applications that are out there, and they are certainly willing to participate in that, as long as they can get the hits, and, therefore, get the advertising revenue that’s driving the model.

[Google] is in the API business and they are in the services business. When you're in for a penny, you're in for a pound. ... You start providing access to services, and rudimentary on-demand governance systems to account for the services and test for rogue services, and all those sorts of things. Then you ultimately get into semantics, security, and lots of other different areas they probably didn’t anticipate that they'd get into, but will be pushed into, based on the model they are moving into.

... Perhaps Google or others need to come into the market with a gateway appliance that would allow for policy, privilege, and governance. This would allow certain information from inside the organization that has been indexed in an appliance, say from Google, to then be accessed outside. Who is going to be in the best position to manage that gateway of content on a fine-grained basis? Google.
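
One way to picture such a gateway, assuming nothing about Google's actual products: an internal index sits behind a policy check that filters results per caller before anything leaves the organization. The roles, documents, and policy table below are all invented.

```python
# A sketch of a policy-enforcing content gateway: the index stays
# internal; the gateway applies per-caller privileges to every query.
INDEX = {
    "q3-forecast": {"label": "confidential"},
    "press-kit": {"label": "public"},
}
PRIVILEGES = {"partner": {"public"}, "employee": {"public", "confidential"}}

def gateway_search(term, role):
    """Return only the indexed documents this caller may see."""
    allowed = PRIVILEGES.get(role, set())
    return [doc for doc, meta in INDEX.items()
            if term in doc and meta["label"] in allowed]

print(gateway_search("q3", "partner"))   # [] -- filtered by policy
print(gateway_search("q3", "employee"))  # ['q3-forecast']
```
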
Listen to the entire podcast, or read the full transcript for more IT analysis and SOA insights. Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production.