Wednesday, October 24, 2007

Oracle users enjoy open source benefits but shy away from databases -- for now

An early Texas settler claimed that the Rio Grande was "a mile wide and a foot deep." A recent survey among Oracle database users seems to offer the same sentiment about the prevalence of open source in the enterprise.

Sponsored by MySQL and conducted by Unisphere Research, the study of members of the Independent Oracle Users Group (IOUG) found that, despite heavy use of open source in some areas of operation, the share of organizations running more than half their applications on open-source software rose only from 9 percent in 2006 to 13 percent in 2007. However, only one-third of the 226 respondents said they had an open-source database in production.

While slightly more than half of the respondents said they plan to increase their adoption of open source in the next year, fewer than 10 percent reported application portfolios that are supported by or interact with open-source systems.

Respondents also indicated that cost indeed seems to be a driver, with two-thirds reporting that the adoption of open source was spurred by cost savings. This compares with 57 percent citing cost as a factor in a similar study last year.

However, cost savings come at a price. Support and security continue to be concerns and seem to act as roadblocks to wider adoption. Just over half of the respondents felt that support for open source wasn't as robust as it is for commercial applications, and more than one-third saw security as a major stumbling block.

Tony Baer at OnStrategies has an interesting take -- and, in the process, paints a picture that will have me looking over my shoulder at Thanksgiving dinner:

By the way, did we neglect to mention that this open source survey of Oracle database customers was sponsored by MySQL? It conjures up an image of a mouse sneaking into a kitchen during Thanksgiving dinner and feasting on the scraps. In fact, that’s exactly the picture that was painted by the survey.

Open source use is wide but not terribly deep. Roughly 90% of respondents said they used open source software or were planning to, but it’s mostly for the commodity stuff sitting below the application layer where most organizations imbed their real value-add. Only 4% said they used open-source-based enterprise apps, like SugarCRM. Not surprisingly, the most popular open source offerings were the Apache web server, which happens to underlie most J2EE middle tier products like IBM WebSphere; and of course, Linux. In essence, customers look to open source for cheap plumbing that simply works.

And that certainly applies to databases. This being a survey of Oracle database users, it’s obvious that nobody’s replacing Oracle with MySQL or any of its open source cousins. But if you’ve got a satellite web app, there’s little risk – or cost – in using MySQL. Significantly, 20% of Oracle users surveyed reported having open source databases larger than 50 GBytes. That 20% is kind of a funny figure. If you’re an optimist, you’ll point to it as proof positive that open source databases are getting ready for prime time; if you’re a cynic, you’ll claim that the figure proves that they will never rise higher than supporting roles.

... Obviously, nobody dismisses the viability of open source for basic commodity tasks, but when it comes to mission critical systems, Oracle users still know whose throat they really want to choke.

Like all surveys, this one represents a limited set of answers -- 226, to be exact -- from a small niche of the IT market. That would seem to counsel caution in extrapolating the results to the entire industry.

Incidentally, the same settler who made the remark about the Rio Grande also said it was "too thin to plow and too thick to drink." How much of these results you want to swallow is up to you. The 21-page executive summary of the recent deep-and-wide study is available to members on the IOUG Web site.

In any event, I agree with Tony that open source databases are ripe for rapid growth and expanded use-case scenarios. As more applications are served up as services, those service providers will be doing a lot of custom distributed infrastructure development, leveraging open source, and rolling their own functionally targeted stacks. Think of Google, Amazon, eBay and Yahoo as examples. Are they running Oracle or DB2 or Microsoft SQL Server, or are they taking a more commoditized view of databases?

We'll see more of these mega service providers using more open source databases, I suspect, though they won't talk about it. There will be more instances of database caches and subsets hither and yon, and these too will increasingly be open source. They will be tuned for their providers' purposes, and will be neither general-purpose nor enterprise-oriented. Scale, speed and cost rule.

So the actual number of open source database licenses might be small and hard to measure, but the impact will be felt as more applications and services move to service-provider models, and as more infrastructure customization, as differentiation, is layered on top of the database itself. Some folks swear by Ingres as a fine data environment, albeit an open source one.

And because databases are such mission-critical, bedrock infrastructure components, once comfort with open source varieties of them is reached, a tipping point may be at hand. This may also be accelerated by moves toward Web-Oriented Architecture (WOA) and so-called Guerrilla SOA, where instances of services are virtualized on discrete runtime stacks.

Virtualization that combines open source hypervisors and open source databases into dynamic data-serving stacks (create data capacity as you need it) also makes a lot of sense. It would mean more than what Oracle does with RAC and striping.
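
To make the "create data capacity as you need it" notion a bit more concrete, here is a minimal sketch -- purely illustrative, with made-up thresholds and class names rather than any vendor's actual provisioning API -- of a controller that grows or shrinks a pool of virtualized open source database instances as connection load changes:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: scale a pool of virtualized database instances up or
// down against a simple load threshold. Names and thresholds are illustrative
// assumptions, not any vendor's actual API.
public class DataCapacityManager {
    private final Deque<String> instances = new ArrayDeque<>();
    private int nextId = 1;

    // Target: keep average connections per instance under this ceiling.
    private static final int MAX_CONNECTIONS_PER_INSTANCE = 200;

    public void rebalance(int totalConnections) {
        int needed = Math.max(1,
            (int) Math.ceil((double) totalConnections / MAX_CONNECTIONS_PER_INSTANCE));
        while (instances.size() < needed) {
            String name = "mysql-vm-" + nextId++;
            instances.push(name);               // stand-in for booting a VM image
            System.out.println("Provisioned " + name);
        }
        while (instances.size() > needed) {
            System.out.println("Retired " + instances.pop());
        }
    }

    public static void main(String[] args) {
        DataCapacityManager mgr = new DataCapacityManager();
        mgr.rebalance(150);   // one instance is enough
        mgr.rebalance(900);   // scale out to five
        mgr.rebalance(300);   // scale back in to two
    }
}
```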

I made the pitch a while back on why IBM ought to buy into open source databases to spur on sales of other IBM infrastructure. It may have been premature, but the logic still has a nice ring to it. When a big provider like IBM makes open source databases strategic, as a bludgeon to its competitors (Oracle and Microsoft) and a loss-leader to other revenues, then all bets are off.

Tuesday, October 23, 2007

Sybase ushers in iPhone as secure client for mainstream corporate email

When I saw the demo last summer it was impressive. Sybase used its Information Anywhere suite as a go-between to allow such corporate email stalwarts as IBM Lotus Domino and Microsoft Exchange to integrate with a mobile Apple iPhone for email and PIM.

Now the demo is set to become a commercial reality. Sybase today at Mobile Business Expo in New York announced that it will begin supporting the iPhone as a wireless client for Domino and Exchange email and PIM/address book (including corporate directory look-up) early next year.

The iAnywhere approach comes with full connectivity to the native iPhone email application, not via webmail in the Safari browser. The email is therefore also available for offline use.

While the Sybase announcement comes soon after Apple's publicly declared intention to allow third-party developers and ISVs to write native apps for the iPhone, Sybase said the announcements are unrelated to the forthcoming SDK.

"We took guidance from Apple" on the project to include iPhone as a client among some 200 others that Information Anywhere suite connects (such as Windows Mobile, Symbian and Palm-based devices), but there is no formal relationship between Sybase and Apple, said Senthil Krishnapillai, Sybase product manager for iAnywhere.

The Information Anywhere suite connects mobile clients to email systems using standards, but not IMAP, which many email administrators shun due to its potential for unfettered exposure of email traffic to the Internet. Those using the Sybase solution for making the iPhone a corporate email client will be able to use their mobile networks to securely synchronize and replicate their emails, said Krishnapillai.
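
To give a rough feel for what "securely synchronize and replicate" implies, here is a minimal, generic sketch of delta-style mail sync to a handheld; the class names and sequence-number scheme are my own illustration, not Sybase's actual protocol or APIs:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of mail synchronization to a handheld: the client
// remembers the last change it saw and pulls only newer items, keeping a
// local copy for offline reading. A generic illustration, not Sybase's
// protocol.
public class MailSyncClient {
    static class Message {
        final long sequence;
        final String subject;
        Message(long sequence, String subject) {
            this.sequence = sequence;
            this.subject = subject;
        }
    }

    private final List<Message> offlineStore = new ArrayList<>();
    private long lastSeenSequence = 0;

    // In practice this exchange would run over an authenticated, encrypted
    // mobile connection to the middleware tier, never raw IMAP to the Internet.
    public void sync(List<Message> serverMailbox) {
        for (Message m : serverMailbox) {
            if (m.sequence > lastSeenSequence) {
                offlineStore.add(m);
                lastSeenSequence = m.sequence;
            }
        }
    }

    public static void main(String[] args) {
        MailSyncClient phone = new MailSyncClient();
        List<Message> server = new ArrayList<>();
        server.add(new Message(1, "Quarterly numbers"));
        server.add(new Message(2, "Team meeting moved"));
        phone.sync(server);                       // both messages cached locally
        server.add(new Message(3, "Travel policy update"));
        phone.sync(server);                       // only the new message is pulled
        System.out.println(phone.offlineStore.size() + " messages available offline");
    }
}
```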

The Sybase approach will work with any iPhone and supports all Domino versions from R6 through the new version 8, as well as Exchange 2000 through Exchange 2007. The solution will require the iAnywhere suite 5.5, however. The iPhone-iAnywhere solution is expected in Q1 2008.

We should also expect that Sybase will enable unified communications functions, including click-to-call, on the iPhone from the online corporate directory. Sybase says its capability to provide such integration is unique among mobile infrastructure vendors.

What's more, it should take about five minutes to set up a user, following the same basic steps as setting up a Windows Mobile connection, said Sybase. This should make email administrators breathe easier as iPhone users request connectivity privileges.

Sybase said that many enterprises in the U.S. are asking Sybase and its partners for ways to use the iPhone for corporate messaging. Such inquiries are also coming from Europe, where the iPhone will soon be available in several markets.

Quite a bit more integration could be done between iPhone and corporate email. Microsoft might not be too keen on it, but IBM should be.

If you're a Domino shop, send an email to your IBM support staff and ask if Big Blue will use the forthcoming iPhone SDK to provide more native integration, perhaps between the Domino/Notes calendar and the native calendar client on iPhone. Web access could work in the meantime, I suppose.

But wait ... how about running a Notes client directly on the iPhone? Hey, how about running Outlook on the iPhone? These would be some killer apps, should users clamor enough for them (and/or hackers make up the difference). I won't hold my breath on Outlook, but maybe one of the open source Outlook knock-offs, eh?

If I were IBM, however, I'd think very seriously about a native Notes client for iPhone (and for all the other iPhone wannabe converged devices that are making their way to the market). A Notes client, of course, would allow the mobile iPhone users to get a lot more to their fingertips than email and calendar -- there are many thousands of Domino applications and data views that would make the iPhone a very handsome corporate endpoint.

Come on, IBM and Apple, how about it? Sybase has shown the way; now take the ball and run with it. Notes on the iPhone is a match made in heaven.

Should IBM and Apple work together to bring a Notes client to the iPhone?

Monday, October 22, 2007

Citrix's end-to-end virtualization powerhouse hastens the massive disruption of PC applications as we know them

Citrix Systems is moving aggressively to desktop virtualization with today's announcement of the new Citrix XenDesktop 2.0 products. Combined with other recent Citrix strategic moves, the world of PCs and applications delivered as services is soon to be flipped.

Heads or tails, both end users and those seeking to make a good living delivering business and consumer applications as services should win.

The slew of announcements comes at Citrix's iForum user conference in Las Vegas, and builds quickly on the now-final acquisition of open source virtualization vendor XenSource, which Citrix picked up for $500 million in August.

Citrix XenDesktop combines Citrix Desktop Server, which uses the Citrix ICA (Independent Computing Architecture) protocol, with a virtual infrastructure for hosting virtual desktops in the data center based on Citrix XenServer.

The combo exploits dynamic provisioning to stream desktop images on demand from network storage based on the Citrix Provisioning Server (acquired with Ardence early in 2007). Citrix XenDesktop is due in the first half of 2008.
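
For readers who want a picture of what streaming desktop images on demand amounts to, here is a minimal hypothetical sketch of the broker pattern involved -- shared golden images on network storage handed out as sessions -- with invented class and method names rather than Citrix's actual interfaces:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of on-demand desktop provisioning: a broker maps users
// to shared "golden" images held on network storage and hands out a session
// reference, rather than installing a desktop on each machine. Names are
// illustrative, not Citrix APIs.
public class DesktopBroker {
    private final Map<String, String> goldenImages = new HashMap<>();
    private final Map<String, String> activeSessions = new HashMap<>();

    public void registerImage(String pool, String imagePath) {
        goldenImages.put(pool, imagePath);
    }

    // Assign a streamed desktop to a user the moment they connect.
    public String connect(String user, String pool) {
        String image = goldenImages.get(pool);
        if (image == null) {
            throw new IllegalArgumentException("No image registered for pool " + pool);
        }
        String session = user + "@" + image;
        activeSessions.put(user, session);
        return session;
    }

    public static void main(String[] args) {
        DesktopBroker broker = new DesktopBroker();
        broker.registerImage("finance", "nas://images/winxp-finance.vhd");
        System.out.println(broker.connect("alice", "finance"));
    }
}
```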

Using these technologies and approaches, entire PC desktops can reside en masse in efficient datacenters. And these are datacenters that can leverage and exploit open source; virtualized instances of server runtimes and discretely supported applications; low-cost blades on standard hardware; automated provisioning and fail-over; and tightly managed, centralized operations. You'll get nice BI on how the apps and data are used, too.

In short, you get datacenters as the means to lifecycle delivery of apps, media, and web services at dramatically lower TCO. It means a virtualized back-end utility-grid of delivery resources supports more of what has been a massive client-server money pit for going on 20 years. It means an applications delivery infrastructure that's actually under control, with declining total costs for energy and labor, and that is flexible enough to deliver just about any environment, desktop, application, media or service.

When you combine virtualization benefits up and down the applications lifecycle -- with such functionality as back-end automated server instance provisioning -- you get excellent cost controls. You get excellent management, security and code controls. And you marry two of the hottest trends going -- powerfully low TCO for serving applications at scale with radically simpler and managed delivery via optimized WANs (NetScaler Web application accelerator) of those applications to the edge device.

A new type of ROI is now up for grabs, when you factor in datacenter consolidation, applications and middleware modernization, savings on labor, energy and real estate. And, golly, you'll be virtualizing Linux and Windows instances and serving up those platforms' applications as services right beside each other, running efficiently on the same highly utilized metal. See more on the cost and management benefits of virtualization runtime instances in a recent BriefingsDirect SOA Insights Edition podcast.

Incidentally, all of this augurs really well for SOA -- discrete services can be supported and delivered this way too. And so they should be straightforwardly composed and reused to build out flexible business processes. More on that another day.

For now, the end-to-end virtualization value that Citrix is quite close to assembling disrupts more than the support cost-benefit analysis. It also opens up the adoption and exploitation of new business models, such as subscriptions and targeted advertising, that can make such desktop and application services very inexpensive -- or even free -- to those accessing them via a provider.

And the traditional channel is going to be shaken up, too. XenSource will reportedly soon announce an OEM deal with Dell and a resale support agreement with HP, says Internetnews.com.

Indeed, I expect the Citrix solution set to begin to sell more among providers -- either outside of enterprises or internally with shared services and charge-back-based managed services bureaus -- than among traditional IT departments. Citrix's value used to amount to wrappering traditional apps with presentation services delivery to ease complexity and deal with "problem" applications. Now, the value is about rethinking applications and their deployment lifecycles entirely -- and working toward the dual necessity of improving the applications experience for users while dramatically cutting costs via simplified runtime environments and innovative economics.

Also on the disruption front, Citrix is now offering serious alternatives to virtualization market leader VMware, while also remaining close (for the time being) to Microsoft and its forthcoming Viridian hypervisor. See more on the increasingly complex relationship in a recent BriefingsDirect SOA Insights Edition podcast.

So let's take a step back and consider what Citrix is providing this week (and what others will no doubt need to step up to the plate with, too). We have the back-end and delivery benefits, catalyzed by virtualization. We have the managed delivery via Citrix's presentation, WAN optimization, and security services.

Yet because we're centralizing the delivery, we can also see how those services can be metered out on a per-user and per-service basis. That enables the ecology of providers to offer comprehensive desktops and apps, while -- at the same time -- giving these service providers, internal or external, an economic means for charge-backs, managed services P&Ls, and subscriptions. You can make money. You can save money. You can do both.
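
As a rough illustration of how such charge-backs might be computed (all names and rates here are assumptions of mine, not any provider's price list), a centralized delivery tier could meter usage per user and per service and roll it up monthly along these lines:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical charge-back sketch: meter per-user, per-service usage in a
// centralized delivery tier and total it into a monthly figure. Rates and
// service names are illustrative assumptions.
public class ChargeBackMeter {
    private static class UsageRecord {
        final String user;
        final String service;
        final double hours;
        final double ratePerHour;
        UsageRecord(String user, String service, double hours, double ratePerHour) {
            this.user = user; this.service = service;
            this.hours = hours; this.ratePerHour = ratePerHour;
        }
    }

    private final List<UsageRecord> records = new ArrayList<>();

    public void meter(String user, String service, double hours, double ratePerHour) {
        records.add(new UsageRecord(user, service, hours, ratePerHour));
    }

    public double monthlyTotalFor(String user) {
        double total = 0;
        for (UsageRecord r : records) {
            if (r.user.equals(user)) {
                total += r.hours * r.ratePerHour;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        ChargeBackMeter meter = new ChargeBackMeter();
        meter.meter("alice", "virtual-desktop", 160, 0.25);  // 160 hours of hosted desktop
        meter.meter("alice", "crm-app", 40, 0.50);           // 40 hours of a hosted CRM app
        System.out.printf("alice owes $%.2f this month%n", meter.monthlyTotalFor("alice"));
    }
}
```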

Now, let's take it a step further. We can also inject and manage advertisements, training, knowledge-sharing, targeted links and content -- just about any relevant information in any media -- right into the actual presentation UI of the apps, media, services, content, etc.

Remember the BI benefits? By being centralized, the metadata on each user's and each application's use is there for analysis and algorithmic associations. As users -- and their employers -- see the benefits of targeted content associated within applications and processes -- via display ads, links, RSS feeds, etc. -- they can empower the users while also subsidizing the entire cost structure of providing the applications and services lifecycles.
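
And here is an equally hypothetical sketch of the targeting side: match the usage metadata of the applications a user runs against tagged sponsored content. Every tag, app name and content item below is invented for illustration:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of usage-driven content targeting: tag each hosted
// application with keywords, then surface sponsored content whose tags
// overlap with what a user is actually running.
public class ContentTargeter {
    private final Map<String, List<String>> appTags = new HashMap<>();
    private final Map<String, List<String>> contentTags = new HashMap<>();

    public void tagApp(String app, String... tags) {
        appTags.put(app, Arrays.asList(tags));
    }

    public void tagContent(String content, String... tags) {
        contentTags.put(content, Arrays.asList(tags));
    }

    // Return the first piece of content that shares a tag with the app in use,
    // falling back to a generic house ad when nothing matches.
    public String pickFor(String appInUse) {
        List<String> tags = appTags.getOrDefault(appInUse, Collections.emptyList());
        for (Map.Entry<String, List<String>> entry : contentTags.entrySet()) {
            for (String tag : entry.getValue()) {
                if (tags.contains(tag)) {
                    return entry.getKey();
                }
            }
        }
        return "house-ad";
    }

    public static void main(String[] args) {
        ContentTargeter targeter = new ContentTargeter();
        targeter.tagApp("expense-report", "travel", "finance");
        targeter.tagContent("airline-promo", "travel");
        System.out.println(targeter.pickFor("expense-report"));  // prints airline-promo
    }
}
```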

This new monetization scheme would no doubt work differently for enterprises, small business, and consumers (and mobile users), but it could work very well through a variety of models. Again, you can make money. You can save money. You can do both. When you have centralized and managed serving of all the elements of work and play PC activities, the world is your oyster. You can innovate wildly.

All of this raises Citrix's profile dramatically, and makes for some interesting blue-sky "what ifs."

Think about "what if" Google had entree to this entire end-to-end apps delivery portfolio (and desktop virtualization jazz) and added it to its already heady SaaS offerings and massively effective targeted advertising arsenal. You could do Google web services, or Windows apps, or Linux apps (or green-screen mainframe apps!), all on a rich client or off the wire -- or whatever combo works best in the immediate circumstances. All of it (or part of it) could be ad- or subscription-supported; all of it complementary with what's inside enterprises and what's best acquired as services from outside.

So think about what happens if Google were to partner closely with (or even acquire) Citrix. Think about whether Microsoft could have the stomach for that. Imagine a bidding war between Google and Microsoft for Citrix solutions OEM deals (or the company itself). Or imagine Citrix remaining independent and playing the two nicely off of one another. Imagine that IBM might cotton to this as a way of getting in on the SaaS and ad-based models, while remaining applicable and amenable to the large enterprises.

Sure, let Microsoft continue to dominate the applications development and deployment environment. And then use Citrix to provide for those applications and services simple, low-cost hosting and delivery alternatives (and multiple business models). All those VB developers are beavering away to create apps that you can better and more cheaply deliver via your virtualized and centrally provisioned environments. Microsoft subsidizes the applications creation, in effect, but only gets a portion of the pay-off on the deployment side in the form of Windows licenses for the virtualized server runtime instances. (But Microsoft begins to lose on the PC OS license and the Office license, and the cash cows go on a diet. Ouch!)

Meanwhile, let Google broker ads that can be injected (with permission) into the Citrix-powered services streams for all those applications. The cost of providing the apps then moves closer to free for the business, subsidized by the cash from the targeted ads and hopefully useful content.

The myriad service providers and Internet providers then adopt all of this as the way to provide applications, desktops, content, media, and services off of the wire to small business, enterprise and home users at a compelling per-user, per-month subscription rate. Those subscriptions can be baked into the triple or quad play of Internet, telephone, cable TV, mobile -- and all of the needed or desired PC functions and applications. Talk about share of wallet!

I suppose I'd call that "Everyplay." Users can check off what they want, and it's provisioned on the back end and readily delivered to the device (a low-cost converged device like the iPhone, perhaps). And a home or small business could probably get all of what they need for less than $150 per month, with add-ons for beyond-basic services (ringtones!), of course. Sounds like a business to me.

Yes, the Citrix strategy bears careful monitoring. The implications are really quite staggering. And this could happen sooner than you think.

Saturday, October 20, 2007

Sun to swap out ME for SE on mobile devices -- high risk alert!

Looks like Sun will slowly dump -- i.e., let wither on the vine -- Java ME and try to drive Java SE down into the class of mobile converged devices like the iPhone. As I said earlier, the iPhone has ushered in the need to do something about mobile Java fast -- but this move with SE is fraught with risk.

From the CNET News.com story:

Java Standard Edition (SE), geared for desktop computers, will gradually supplant Java Micro Edition (ME) as technology improvements let more computing power be packed into smaller devices, said James Gosling, the Sun vice president often called the father of Java.

... Sun's Java expectation dovetails with recent trends, most notably Apple's iPhone, which architecturally is much more an Apple computer writ small than a mobile phone writ large. In particular, Apple uses a version of its regular Safari Web browser so users will have as much of the desktop Internet experience as possible. ... But much of the rich Internet application action is happening with software such as Ajax, the Adobe Integrated Runtime (nee Apollo) and Microsoft's Silverlight and Google Gears.

My questions are: Does an iPhone-like device need Java (mobile, standard, bastard [Java BE]) at all, and for what? Are there better ways to cross the mobile cloud "wire" than applets?

Are not RIAs supplanting some of Java's UI virtues? Haven't Adobe Flash and AIR made write-freely, run-freely a better way for rich media Web delivery writ large? Why not write once for the Web via the RIA approach and have all of that easily driven down to converged mobile devices like the iPhone? Oh, and use it on PCs too. Televisions any day now.

Okay, and then there are the JVM values. You may want to virtualize on these devices to bring even more apps down to them. But why not use a tight little hypervisor instead of an SE JVM? How about Boot Camp for mobile on the iPhone? Not so far-fetched, especially if the apps are written with mobile Web distribution in mind. Nope, don't need Java for that.

And if the device is not an iPhone, how about using good old embedded Linux to natively accomplish, yet again, much of what should have been a Java ME value -- fewer customizations on the mobile platform for running more applications better. Makes sense to write the apps for Linux and have them run just about everywhere, eh?

These are key questions, and there are few assurances -- and certainly high risk -- in Sun trying to swap out ME for SE on the mobile converged device class, and across the global markets for mobile services, content and applications. This needs to be executed flawlessly. There's embedded Linux squeezing on one side, RIAs on the other, WOA in general, and always the Windows Mobile thing trucking along the rosy VB developer path.

The poor performance Sun has had with Java on the full PC client is now coming back to haunt them on the mobile client. If there had been a fuller Java applications community for the PC, perhaps that would have ushered in all those apps (and ISVs) to the converged classes of devices now. But, alas, Java on the client did not storm the world. And ME is too fragmented. And bringing an SE version down to the mobile class is the answer, huh?

Now the notion of end-to-end Java has always held a certain fascination for me. However, didn't they design SE to work as a lightweight server stack, to cross the chasm between a PC and a server? Use the same stack, run the same apps, be lightweight (or at least the right weight)? That was good. Many folks prefer to use SE over EE.

And now we're to expect the SE class of Java-this and Java-that to run everywhere. It might work ... if the applications are there to bring this chicken home to sit on the eggs and they quickly hatch. It will be an interesting, albeit short, period of time to see if this all flies.

On the other hand, once again, Sun may have fumbled the Java ball, this time with ME, one of the heretofore bright spots for Java. Did the market move faster than Java? Or did Java mis-time the whole bloody thing? Either way, I thought the community process was designed to prevent that sort of thing.

Thursday, October 18, 2007

Progress Software extends SOA reach with new deployment manager offering

Progress Software Corp. is offering developers a leg up in clearing one of the last hurdles in service-oriented architecture (SOA) -- deployment. Its Sonic Deployment Manager (SDM), which it announced this week, is designed to allow enterprises to model all aspects of a deployment and test production environments before roll-out.

SOA can provide many benefits to an enterprise -- agility and lower cost come to mind -- but it's not without its challenges. Once a company has decided on a SOA strategy, put an infrastructure in place, and tackled such issues as data access and governance, it's still faced with the daunting task of deployment, rolling out applications across the enterprise and across the lifecycle of each component.

Progress says the two key uses for SDM are lifecycle management and large-scale deployments. SDM can create a reproducible package of all components and configurations of a given deployment instance, allowing precise rollback and recreation of a given environment, which in turn enables configuration management, auditing, and regulatory compliance.
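
To illustrate the snapshot-and-rollback idea in the abstract -- this is a generic sketch of configuration snapshotting, not Progress's SDM package format or API -- a deployment manager might capture component versions as an immutable package that can later be reapplied:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Generic sketch of snapshot-and-rollback deployment management: capture the
// components and configuration of an environment as an immutable package so
// the exact state can be recreated later. Not Progress SDM's actual format.
public class DeploymentSnapshot {
    private final Map<String, String> current = new HashMap<>();

    public void install(String component, String version) {
        current.put(component, version);
    }

    // Freeze the present state into a reproducible package.
    public Map<String, String> snapshot() {
        return Collections.unmodifiableMap(new HashMap<>(current));
    }

    // Roll the environment back to a previously captured package.
    public void rollback(Map<String, String> pkg) {
        current.clear();
        current.putAll(pkg);
    }

    public static void main(String[] args) {
        DeploymentSnapshot env = new DeploymentSnapshot();
        env.install("esb-container", "7.6");
        env.install("order-service", "1.2");
        Map<String, String> audited = env.snapshot();   // keep for compliance records

        env.install("order-service", "1.3");            // risky upgrade
        env.rollback(audited);                          // precise return to the known state
        System.out.println(env.snapshot());
    }
}
```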

Among the features of SDM, which is now available for $15,000, are:

  • Rapid, large scale deployment to automate installation and configuration on a large number of target systems.
  • Support for fast, iterative development to streamline migration from development through QA and to production.
  • Remote domain and site support for upgrading over a network.
  • Automated installation and configuration for tailored configuration.
  • Model-driven functionality that allows developers to model the installation independent of the machine parameters.

Last July, I had a lengthy podcast discussion about Software as a Service (SaaS) with Colleen Smith, managing director of SaaS for Progress. You can listen to the podcast here.

Wednesday, October 17, 2007

BriefingsDirect SOA Insights analysts on virtualization trends and role of IT operations efficiency for SOA

Listen to the podcast. Or read a full transcript.

The latest BriefingsDirect SOA Insights Edition, Vol. 24, provides a roundtable discussion and dissection of Service-Oriented Architecture (SOA)-related news and events, with a panel of IT analysts. In this episode, our experts examine virtualization trends through the acquisition this summer of XenSource by Citrix. We also discuss the $1.6-billion purchase of Opsware by HP as a way of analyzing IT management, automation and operations, as well as the impact on SOA.

Join noted IT industry analysts and experts Jim Kobielus, principal analyst at Current Analysis; Neil Macehiter, research director at Macehiter Ward-Dutton; Dan Kusnetzky, principal analyst and president of the Kusnetzky Group; Brad Shimmin, principal analyst at Current Analysis; Todd Biske, practicing enterprise architect for a Fortune 500 firm (formerly with MomentumSI); JP Morgenthal, CEO of Avorcor; and Tony Baer, principal of onStrategies. Our discussion is hosted and moderated by me, Dana Gardner.

Here are some excerpts:
On Virtualization and Citrix-XenSource

Virtualization is definitely something that organizations are looking at right now. For the clients I've worked with, it's been a mix. Some are really trying to embrace it on the server side and make use of it right now. Others are looking at possibly using it on the desktop for developers, when they need to get a specific development environment, but it’s definitely in people’s minds today. So, I would definitely classify it as in the list of strategic initiatives that companies are looking at and determining how to use appropriately.

The interesting thing about XenSource is that it’s been considered to be the leading, emerging alternative to VMware. It essentially virtualizes the machine to a slightly more native approach than VMware. It's a very interesting acquisition because Xen has had a relationship with Microsoft, where it gets access to Microsoft's virtualization technology, and it also fills a key gap for Citrix. ... Microsoft will still need some way of interoperating its hypervisor with the Linux environment. So, even though the relationships may change somewhat in the long run, there will still be some sort of technology sharing here.

The XenSource acquisition by Citrix is good for Microsoft, because it allows Microsoft to buy some time until Viridian is ready. A year from now, Microsoft can say, “Oh yeah, we don’t have Viridian ready just yet, but look at this. Two of our primary virtualization partners have gotten together to field an ever more comprehensive virtualization product portfolio, which is integrated or will be integrated fairly tightly with Viridian when that comes out." So, we'll hear Microsoft saying, “We don’t have it all together today, but we have partners who can give you a fuller virtualization portfolio to compete with EMC/VMware."

If we look at Citrix's portfolio, every single piece, service, or product offering is matched by something Microsoft is pushing now. That, in essence, means that Microsoft is trying to acquire the business that Citrix has and slowly remove Citrix from the limelight and off to the sidelines. ... [Citrix] needed a broader strategy, one that wasn’t focused solely on access mechanisms. The acquisition of XenSource gives them a broader story.

If we look at just the idea of what XenSource was doing with their processing virtualization, management and security, particularly their recent announcement of a partnership with Symantec for the Veritas Storage Foundation software to be included in XenEnterprise, you could see that Citrix starts to have a more top-to-bottom virtualization story than they ever had before. So, from a product portfolio view, this acquisition appears to make some sense.

We've been dealing with issues of the Microsoft platform for a long time around resource management, where we're fighting with SQL Server and other applications or resources, and each one has different memory requirements. This virtualized environment allows us to focus on giving our application 100 percent of the resources, and thereby never running out of things like TCP/IP sockets or having memory thrashing errors slowing down wireless communications, which is critical to Web services doing their job. So, it’s having a profound effect on the production environment.

This is a really interesting acquisition that will help XenSource at least get more mind share in the enterprise. Companies obviously have lots of Microsoft investments on all their desktops. There's a good chance that major enterprises have significant investments in Citrix as well, if they've got any need for remote access for their systems, terminal services, etc. It will open it up to a few more environments to add in this virtualization capability for organizations that were still unsure about what to do with open source. It’s a good thing from that perspective.

On the datacenter side, the promise there is better utilization of resources. As I said, if you really want to get into it, you could find a way to tune Microsoft. But I have a sys admin working at one of my clients who is fighting with this 3GB initialization parameter. When he puts it in, one app gets hit; when he doesn't put it in, other apps get hit. But mine [virtualized] works fine.

This is a clear case of where you go out and get an additional operating system license and put each application into its own virtual machine running on a four-way or eight-way Intel Core 2 Duo machine -- they are running a storage area network (SAN) -- and you have one of the cleanest, most high-performing environments I've ever seen.

SOA is, in effect, heightening this issue, because of the need for discrete services running with their own horses and their own power. ... What virtualization does is let you set up that verified stack on your server and not have to worry about breaking it down the road, because it's sitting in its own virtual environment.

On HP-Opsware and SOA Operational Efficiency


If you start getting into the automation space, the HP-Opsware deal is obviously the more interesting one. There's a natural connection between the virtualization space and some of the movements in the management space.

When you really embrace SOA, you're going to wind up with more moving parts for a given solution. And in doing so, you could create this management challenge of how to allocate resources for each of these individual services that have their own life cycle. There's a natural potential to move towards server virtualization to do that, so you can get your arms around that whole management concept. Where I've been disappointed in the management space, however, was that we really haven’t seen anything from the large systems management vendors to start tackling this problem.

So, if we are creating lots and lots of services -- you may now have 500 or 1,000 services -- you have to look at that and say, "I have a bigger management problem." There's no reason we can’t take the concepts of SOA and apply them to the management environment.

So, whether it's automated provisioning of solutions, automated policy management, a need to change SOA’s or enable more resources for a particular consumer, there is no reason that I shouldn’t be able to have my management systems call a service to do that. I may want to set up custom orchestrations for how to manage my infrastructure. I may want it all automated out of the box and just push a switch and have it happen.

In order to get there, we've got to have management services on all of our infrastructure, and that’s where there's a huge gap. Everything is still intended to be used by a person. Maybe with some creative scripting, people are able to do it, but you can compare it to the days of Web-enabling mainframes, where the technique was to screen scrape off the green screens. You almost have to do the same thing from the management side now. Look at these user-facing consoles, figure out what glue you can put in front of it to script it, and automate it.

I want to see my SOA installations able to speak to and hear from my datacenter systems management solutions. And right now, for that closed loop you were talking about, everything we see is the basic SNMP traps that may get read by Tivoli’s program. That lets you say, "Okay, there is an alert that one of my servers is overrun on memory. I’ve got these applications running on it, I am going to need to do something, and I see an alert that I can drill down on, and do some basic root-cause analysis."

That’s not enough for a true Business Technology Optimization (BTO) and being able to utilize the resources you are trying to marshal for a SOA environment. I want my Tivoli Application Manager to fully automate that process, look at the variables and the event stream coming from my systems, correlate that, and put it into some sort of context with my applications that are running on it.

I had a conversation with Paul Preiss, who heads up the International Association of Software Architects (IASA) about this very thing. He has raised a point and is trying to drive attention toward the exponential growth of SOA, as people start to add services and services become dependent upon services and organizations. ... From a product perspective, when you deliver a product as a SOA, you're delivering the architecture, and then you are delivering the implementation of that architecture. Therefore, you have a very controlled instance in which you can manage very easily without needing large governance controls, because you're providing all the infrastructure for management of that SOA as part of your application.

[However] when you have a lot of legacy infrastructure and legacy investment, and people without the sophisticated SOA architecture experience on staff start developing services, you open yourself up for potential disaster. In those instances, the organization has to ask itself, "Are we ready to invest here?" Every organization obviously wants to take advantage of the latest technologies. This is one of them that can really end up biting them if they are not careful. So they need to step back and think about what it takes to invest in SOA and start to wrap their legacy systems and make them available.

Opsware is very good at automating provisioning in the lifecycle, but it’s around the infrastructure that’s running the services, not the services themselves. That’s where the linkage needs to come. ... The vendors in the space -- the BMCs, IBMs, HPs, all of them -- have really missed this, and they’ve been lacking in explaining how they're really going to manage the services, because they're so fine-grained. Historically, managing an instance of an SAP application server is very coarse-grained, and that’s comparatively straightforward. But, when you're talking about disaggregating that and having application components everywhere, you have to disaggregate the way you manage as well.
Listen to the podcast. Or read a full transcript.

Tuesday, October 16, 2007

BEA-Oracle products assimilation roadmap analyzed, but what about the sales forces?

Here's the third-day story on the BEA-Oracle merger development: How would the various products line up?

Rich Seeley talks with several analysts, myself included, in this in-depth story about the ways in which tools, app servers, portals, transaction monitors, SOA components, and other integration middleware like ESBs would or would not mesh.

It's a well-rounded look at the landscape from the product perspective. Good reading, if I don't mind saying so.

I suppose the next interesting assimilation concern would be the sales organizations. BEA has a large direct sales force, though not as large as Oracle's. Should a deal go through and the redundancies are, err ... managed, that would leave a lot of enterprise IT sales talent looking for work.

Some of the BEA sales veterans may be miffed at the financial restatements and market unease at the stock options backdating issues at the firm. That might complicate their exits and/or assimilation into the Oracle fold. The Oracle sales force has a distinct, err ... culture, that may not be for everyone.

What's more, many of the smaller SOA market vendor entrants are often in a difficult position when it comes to the high early-on costs of assembling a global (or even regional) direct sales force with the knowledge, experience, and contacts to effectively sell such complex and costly products. There are therefore opportunities for these vendors to now bulk up on direct sales veterans in the enterprise software and infrastructure spaces, once the BEA salesforce is in play. Get those resumes polished. And time to re-negotiate the non-competes and severance terms!

The mashup of BEA and Oracle could offer some opportunities for other vendors to come in and make some stock and innovative compensation offers these salespeople can't refuse. I should think the headhunters in the space are smiling today as they make their calls and check in on the golf games and family matters of many sales professionals in the BEA camp.

IBM, Microsoft and SAP recruiters will be sniffing about too, no doubt. Change, as they say, is good.