Tuesday, September 30, 2008

Improved insights and analysis from IT systems logs help reduce complexity risks from virtualization

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Read complete transcript of the discussion.

Virtualization has certainly taken off, but less attention gets directed to how to better manage virtualization, to gain better security using virtualization techniques, and also to find methods for compliance and regulation of virtualized environments -- all without the pitfalls of complexity and confusion.

We seem to be at a tipping point in terms of everyone doing virtualization, or wanting to do it, or wanting to do even more. IT managers experimenting with virtualization are seeking to reduce costs, to improve the efficiency with which they use their assets, or to address issues with energy cost, energy capacity, or sometimes even space capacity in the data center. But the paybacks from virtualization can be lost or diminished when management does not keep pace with the added complexity. Poorly run or mismanaged virtualized environments are a huge missed opportunity.

Now's the time when virtualization best practices are being formed. The ways to control and fully exploit virtualization are in demand, along with the tools to gain analysis and insights into how systems are performing in a dynamic, virtualized state.

To help learn about new ways that systems log tools and analysis are aiding the ramp-up to virtualization use, I recently spoke with Charu Chaubal, senior architect for technical marketing at VMware; Chris Hoff, chief security architect at Unisys; and Dr. Anton Chuvakin, chief logging evangelist and a security expert at LogLogic.

Here are some excerpts:
The reasons people are virtualizing are cost, cost savings and then cost avoidance, which is usually seconded by agility and flexibility. It’s also about being able to, as an IT organization, service your constituent customers in a manner that is more in line with the way business functions, which is, in many cases, quite a fast pace -- with the need to be flexible.

Adding virtualization to the technology that people use in such a massive way as it's occurring now brings up the challenges of how do we know what happens in those environments. Is there anybody trying to abuse them, just use them, or use them inappropriately? Is there a lack of auditability and control in those environments? Logs are definitely one of the ways, or I would say a primary way, of gaining that visibility for most IT compliance, and virtualization is no exception.

As a result, as people deploy VMware and applications in a couple of virtual platforms, the challenge is knowing what actually happens on those platforms, what happens in those virtual machines (VMs), and what happens with the applications. Logging and LogLogic play a very critical role in not only collecting those bits and pieces, but also creating a big picture or a view of that activity across other organizations.

Virtualization definitely solves some of the problems, but at the same time, it brings in and brings out new things, which people really aren't used to dealing with. For example, it used to be that if you monitor a server, you know where the server is, you then know how to monitor it, you know what applications run there.

In virtual environments, that certainly is true, but at the same time it adds another layer of this server going somewhere else, and you monitor where it was moved, where it is now, and basically perform monitoring as servers come up and down, disappear, get moved, and that type of stuff.

The benefits of virtualization today ... is even more exciting and interesting. That's going to fundamentally continue to cause us to change what we do and how we do it, as we move forward. Visibility is very important, but understanding the organizational and operational impacts that real-time infrastructure and virtualization bring, is really going to be an interesting challenge for folks to get their hands around.

When you migrate from a physical to a virtual infrastructure, you certainly still have servers and applications running in those servers and you have people managing those servers. That leaves you with the need to monitor the same audit and the same security technologies that you use. You shouldn't stop. You shouldn't throw away your firewalls. You shouldn't throw away your log analysis tool, because you still have servers and applications.

They might be easier to monitor in virtual environments. It might sometimes be harder, but you shouldn't change things that are working for you in the physical environment, because virtualization does change a few things. At the same time, the fact that you have applications, servers, and they serve you for business purposes, shouldn't stop you from doing useful things you're doing now.

Now, an additional layer on top of what you already have adds the new things that come with virtualization. The fact that this server might be there one day, but be gone tomorrow -- or not be there one day and be built up and used for a while and then removed -- definitely brings new challenges to security monitoring and security auditing in figuring out who did what where.
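To make that concrete, here is a minimal sketch of the kind of log-based reconstruction described here -- an audit trail of who did what, and where, for virtual machines that may no longer exist. This is purely illustrative: the event names and record format are hypothetical, not VMware's or LogLogic's actual log schema.

    # Minimal sketch: reconstructing a virtual machine's lifecycle from log events.
    # The record format and event names are hypothetical; the point is correlating
    # create/migrate/power-off events by VM name so "who did what where" survives
    # even after the VM itself is gone.
    from collections import defaultdict
    from typing import NamedTuple

    class LogEvent(NamedTuple):
        timestamp: str   # ISO-8601 time of the event
        vm: str          # virtual machine name
        user: str        # account that performed the action
        action: str      # e.g. "create", "migrate", "power_off"
        host: str        # physical host where the event was recorded

    def vm_audit_trail(events):
        """Group events by VM and return an ordered who-did-what-where trail."""
        trails = defaultdict(list)
        for e in sorted(events, key=lambda e: e.timestamp):
            trails[e.vm].append(f"{e.timestamp} {e.user} {e.action} on {e.host}")
        return dict(trails)

    sample = [
        LogEvent("2008-09-29T10:02:11Z", "web-07", "alice", "create", "esx-host-1"),
        LogEvent("2008-09-29T14:30:45Z", "web-07", "system", "migrate", "esx-host-3"),
        LogEvent("2008-09-30T01:12:03Z", "web-07", "bob", "power_off", "esx-host-3"),
    ]
    for vm, trail in vm_audit_trail(sample).items():
        print(vm)
        for line in trail:
            print("  " + line)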

The customers understood that they have to collect the logs from the virtual platforms, and that LogLogic has the ability to collect any type of log. They first started from a log collection effort, so that they could always go back and say, "We've got this data somewhere, and you can go and investigate it."

We also built up a package of content to analyze the logs as they were starting their collection efforts, to have logs ready for users. At LogLogic, we built and set up reports and searches to help them go through the data. So it really went in parallel: building up analytic content to make sense of the data once a customer already had a collection effort that included logs from the virtual platform.

All the benefits that we get out of virtualization today are just the beginning and kind of the springboard for what we are going to see in terms of automation, which is great. But we are right at the same problem set, as we kind of pogo along this continuum, which is trying really hard to unite this notion of governance and making sure that just because you can, doesn't mean you should. In certain instances the business processes and policies might prescribe that you don't do some things that would otherwise be harmful in your perspective.

It's that delicate balance of security versus operational agility that we need to get much better at, and much more intelligent about, as we use our virtualization as an enabler. That's going to bring some really interesting and challenging things to the forefront in the way in which IT operates -- benefits and then differences.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Monday, September 29, 2008

Oracle and HP explain history, role and future for new Exadata Server and Database Machine

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

The sidewalks were still jammed around San Francisco's Moscone Center and the wonderment of an Oracle hardware announcement was still palpable across the IT infrastructure universe late last week. I sat down with two executives, from Hewlett-Packard and Oracle, to get the early deep-dive briefing on the duo's Exadata appliance shocker.

Oracle Chairman and CEO Larry Ellison caught the Oracle OpenWorld conference audience by surprise the day before by rolling out the Exadata line of two hardware-software configurations. The integrated servers re-architect the relationship between Oracle's 11g database and high-performance storage. Exadata, in essence, gives new meaning to "attached" storage for Oracle databases. It mimics the close pairing of data and logic execution that such cloud providers as Google use with MapReduce technologies. Ellison referred to the storage servers as "programmable."

Exadata also re-architects the HP-Oracle relationship, making HP an Oracle storage partner extraordinaire -- thereby upsetting the status quo in the worldwide IT storage, database, and data warehouse markets.

Furthermore, Exadata leverages parallelism and high-performance industry standard hardware to bring "extreme business intelligence" to more enterprises, all in a neat standalone package that's forklift-ready. Beyond 10 terabytes and into the petabyte range is how HP and Oracle designers described the scale, with 10x to 72x typical performance gains from the high-end Exadata "Machine."

The unveiling clearly deserves more detail, more understanding. Listen then as I interview Rich Palmer, director of technology and strategy for the industry standard servers group at HP, along with Willie Hardie, vice president of Oracle database product marketing, for the inside story on Exadata.

The interview comes as part of a series of sponsored discussions with IT executives I've done from the Oracle OpenWorld conference. See the full list of podcasts and interviews.

Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Greenplum pushes envelope with MapReduce and parallelism enhancements to its extreme-scale data offering

Greenplum has delivered on its promise to wrap MapReduce into the newest version of its data solutions. The announcement from the data warehousing and analytics supplier comes to a fast-changing landscape, given last week's HP-Oracle Exadata announcements.

It seems that data infrastructure vendors are rushing to the realization that older database architectures have hit a wall in terms of scale and performance. The general solution favors exploiting parallelism to the hilt and aligning database and logic functions in close proximity, while also exploiting MapReduce approaches to provide super-scale data delivery and analytics performance.

Greenplum's Database 3.2 takes on all three, but makes significant headway in embedding the MapReduce parallel-processing data-analysis technique pioneered by Google. The capability is accompanied by new tooling to extend the reach of using the technology. The result is Web-scale analytics and performance for enterprises and carriers -- or cloud compute data models for the masses. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]

The newest offering from the San Mateo, Calif.-based Greenplum provides users new capabilities for analytics, as well as in-database compression, and programmable parallel analytic tools.

With the new functionality, users can combine SQL queries and MapReduce programs into unified tasks executed in parallel across thousands of cores. The in-database compression, Greenplum says, can increase performance and reduce storage requirements dramatically.
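For readers who haven't bumped into the pattern, here is a conceptual sketch of MapReduce over a set of rows. It is not Greenplum's actual programming interface; in a real MPP database the map and reduce steps would run in parallel across segments and cores, while this toy version runs them serially so the idea stays visible. The orders table and the per-region total are invented for the example.

    # Conceptual MapReduce: a map function emits (key, value) pairs from each
    # input row; a reduce function folds all values for a key into one result.
    from collections import defaultdict

    def map_reduce(rows, map_fn, reduce_fn):
        grouped = defaultdict(list)
        for row in rows:
            for key, value in map_fn(row):        # map: row -> (key, value) pairs
                grouped[key].append(value)
        return {key: reduce_fn(key, values)       # reduce: fold values per key
                for key, values in grouped.items()}

    # Hypothetical example: total order value per region.
    orders = [
        {"region": "west", "amount": 120.0},
        {"region": "east", "amount": 75.5},
        {"region": "west", "amount": 30.0},
    ]
    totals = map_reduce(
        orders,
        map_fn=lambda row: [(row["region"], row["amount"])],
        reduce_fn=lambda region, amounts: sum(amounts),
    )
    print(totals)  # {'west': 150.0, 'east': 75.5}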

The programmable analytics allow mathematicians and statisticians to use the statistical language R or build custom functions using linear algebra and machine learning primitives and run them in parallel directly against the database.

Greenplum's massively parallel, shared-nothing architecture fully utilizes each core, with linear scalability to thousands of processors. This means that Greenplum's open source-powered database software can scale to support the demands of petabyte data warehousing. The company's standards-based approach enables customers to build high-performance data warehousing systems on low-cost commodity hardware.

Database 3.2 offers a new GUI and infrastructure for monitoring database performance and usage. These seamlessly gather, store, and present comprehensive details about database usage and current and historical query internals, down to the iterator level, making this ideal for profiling queries and managing system utilization.

Now that HP and Oracle have taken the plunge and integrated hardware and software, we can expect that other hardware makers will be seeking software partners. Obviously IBM has DB2, Sun Microsystems has MySQL, but Dell, Hitachi, EDS and a slew of other hardware and storage providers may need to respond to the HP-Oracle challenge.

On Greenplum's blog, Ben Werther, director, Professional Services & Product Management at Greenplum, says: "Oracle has been getting beat badly in the high-end warehousing space ... Once you cut through the marketing, this is really about swapping out EMC storage for HP commodity gear, taking money from EMC's pocket and putting it in Oracle's."

It will also be interesting to watch what new bedfellows emerge: how the Microsoft/DATAllegro combination is evaluated, what happens with Ingres, and whether Sun with MySQL can enter this higher-end data performance echelon. This could mean that players like Greenplum and Aster Data Systems get some calling cards from a variety of suitors. The Sun-Greenplum match-up makes sense at a variety of levels.

Stay tuned. This market is clearly heating up.

Thursday, September 25, 2008

Interview: From OpenWorld, HP's John Santaferraro on latest BI Modernization strategies

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Leading up to HP and Oracle's blockbuster announcement Sept. 24 of record-breaking data warehouse appliance performance, the business value of these infrastructure breakthroughs was the topic of a BriefingsDirect interview with John Santaferraro, director of marketing for HP's Business Intelligence Portfolio.

Now that the optimized hardware and software are available to produce the means to analyze and query huge data sets in near real-time, the focus moves to how to best leverage these capabilities. Soon, business executives will have among the most powerful IT tools ever developed at their disposal to deeply and widely analyze vast seas of data and content in near real time to help them run their business better, and to steer clear of risks.

Think of it as business intelligence (BI) on steroids.

At the Oracle OpenWorld unveiling, HP Chairman and CEO Mark Hurd called the new HP Oracle Database Machine a “data warehouse appliance.” It leverages the architecture improvements in the Exadata Programmable Storage Server, but at a much larger scale and with other optimization benefits.

The reason for the 10x to 72x performance improvements cited by Oracle Chairman and CEO Larry Ellison has to do with bringing the “intelligence” closer to the data -- that is, bringing the Exadata Programmable Storage Server appliance into close proximity to the Oracle database servers, and then connecting them through InfiniBand connections. In essence, this architecture mimics some of the performance value created by cloud computing environments like Google, with its MapReduce technology.

To better understand how such technologies fit into the Oracle-HP alliance, with an emphasis on professional services and methodologies, I asked HP's Santaferraro about how BI is changing and how enterprises can best take advantage of such new and productive concepts as "operational BI" and "BI Modernization."

The Santaferraro interview, moderated by yours truly from San Francisco, comes as part of a series of discussions with IT executives I’ll be doing this week from the Oracle OpenWorld conference. See the full list of podcasts and interviews.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Wednesday, September 24, 2008

HP and Oracle team up on 'data warehouse appliances' that re-architect database-storage landscape

Oracle CEO Larry Ellison today introduced the company's first hardware products, a joint effort with Hewlett-Packard, to re-architect large database and storage configurations and gain whopping data warehouse and business intelligence performance improvements from the largest data sets.

The Exadata Programmable Storage Server appliance and the HP Oracle Database Machine, a black and red refrigerator-size full database, storage and network data center on wheels, made their debut at the Oracle OpenWorld conference in San Francisco. Ellison called the Machine the fastest database in the world.

HP Chairman and CEO Mark Hurd called the HP Oracle Database Machine a "data warehouse appliance." It leverages the architecture improvements in the Exadata Programmable Storage Server, but at a much larger scale and with other optimization benefits. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The hardware-software tag team also means Oracle is shifting its relationships with storage array vendors, including EMC, Netezza, NetApp and Teradata. The disk array market has been hot, but the HP-Oracle appliance may upset the high end of the market, and then bring the price-performance story down market, across more platforms.

I think we can safely say that HP is a preferred Oracle storage partner, and that Oracle wants, along with HP, some of those high-growth storage market profits for their own. There's no reason to not expect a wider portfolio of Exadata appliances and more configurations like the HP Oracle Database Machine to suit a variety of market segments.

"We needed radical new thinking to deliver high performance," said Ellison of the new hardware configurations, comparing the effort to the innovative design for his controversial America's Cup boat. "We need much more performance out of databases than what we get."

This barnburner announcement may also mark a market shift to combined and optimized forklift data warehouses, forcing the other storage suppliers to find database partners. IBM will no doubt have to respond as well.

The reason for the 10x to 72x performance improvements cited by Ellison has to do with bringing the "intelligence" closer to the data -- that is, bringing the Exadata Programmable Storage Server appliance into close proximity to the Oracle database servers, and then connecting them through InfiniBand connections. In essence, this architecture mimics some of the performance value created by cloud computing environments like Google, with its MapReduce technology.

Ellison said that rather than large data sets moving between storage and database servers, which can slow performance for databases of 1TB and larger, the new Exadata-driven configuration moves only the query information across the network. The current versions of these optimized boxes use Intel dual-core technology, but they will soon also be fired up by six-way Intel multi-core processors.
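A rough way to see why that matters is to compare how many rows cross the network with and without query offload. The sketch below is purely conceptual and is not Oracle's Exadata implementation; the table, the predicate, and the row counts are invented for illustration.

    # Conceptual comparison: traditional storage ships every block to the database
    # host, which then filters; offloaded storage applies the predicate itself and
    # ships back only the matching rows.
    ROWS = [{"id": i, "status": "open" if i % 100 == 0 else "closed"}
            for i in range(100_000)]

    def traditional_scan(rows, predicate):
        shipped = list(rows)                         # every row crosses the network
        return [r for r in shipped if predicate(r)], len(shipped)

    def offloaded_scan(rows, predicate):
        shipped = [r for r in rows if predicate(r)]  # filtering happens at the storage tier
        return shipped, len(shipped)

    pred = lambda r: r["status"] == "open"
    _, moved_old = traditional_scan(ROWS, pred)
    _, moved_new = offloaded_scan(ROWS, pred)
    print(f"rows shipped without offload: {moved_old}; with offload: {moved_new}")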

Talk about speeds and feeds .... But the market driver in these moves is massive data sets that need to produce near real-time analytics paybacks. We're seeing more and more data, and varying kinds of data, brought into data warehouses and being banged on by queries from applications and BI servers from a variety of business users across the enterprise.

HP and Oracle share some 150,000 joint customers worldwide, said HP Executive Vice President, Technology Solutions Group Ann Livermore. That means that these database boxes will have an army of sales and support personnel. HP will support the Machine hardware, Oracle the software. Both will sell it.

Hey, you, get onto my cloud!

We're very early in the private cloud business -- which is precisely why such large and influential vendors as Oracle, Intel, HP, VMware, Citrix and Red Hat are jumping into the market with initiatives and pledges for standards and support. We're seeing some whoppers here at Oracle OpenWorld, from Oracle, Intel and HP in particular.

Why? The early birds that can establish de facto standards on data portability and resource governance -- minding the boundaries between the private and public clouds and their digital condensates -- will be in a position to define the next abstraction of meta operating system (for lack of a better term).

In just the last two weeks, VMware, Citrix and now Oracle have pledged to come to market with the infrastructure that enterprises and service providers alike will want. The cloud wanters need cloud makers, the picks and shovels, to build out on the vision of next-generation data center fabrics -- of dynamic resource pools of infrastructure, platform, data applications and management services.

How these services are supported, and how they are managed to inter-relate with each other and the services-abstracted older IT assets, forms the new uber platform -- the new target through which to attract developers, architects, partners and users -- lots and lots of users all feeding off of huge clouds of dynamic, low-cost services.

Yes, a market critical-mass cloud platform standard implementation could create yet a new way to lock in huge multi-billion-dollar markets to ... need. To need, and to want, and to buy, and to have a heck of a hard time stopping that needing. The picks and shovels. The lock-in, the black hole-pull of the infrastructure, hard to resist, and then ... impossible.

Such a prize! And just like in the past, the crass business interests side of the vendors will want to own, dominate and lock-in to their proprietary platform implementations. Opposing forces, also inside the same vendors, will opine on the need (correctly) for openness and standards to provide the real value the users and ecology players demand. The new lock-in, they will say (correctly) is not technical but in terms of convenience, simplicity, power, and cost. Seduce them, don't force them, might be the mantra.

Seduce or lock in, early-days private cloud platform definitions require the best management of two sets of boundaries -- one that properly falls between the public-facing clouds and the nascent "private," on-premises, or enterprise clouds. The pay-off comes not just from operating efficiencies but from how well the services generated from either type of cloud can interoperate and play well in supporting extended enterprise and B2C processes.

This need to cross boundaries well will also prompt the handful of public cloud providers (Amazon, Google, Yahoo, Microsoft, Apple, etc.) to embrace sufficient levels of standards-based interoperability. Think of it as mass markets balancing interests ... like globalization ... where economics more than proprietary technologies wins the day.

The second boundary to be defined properly is between the legacy systems, SOAs, business applications and middleware -- and the private cloud fabrics that will increasingly be where new applications/services are "natively" deployed, and where the integrations to the old stuff occur. We can't really have two kinds of clouds -- one for IT and one for consumers. There needs to be one cloud that suits all of the digital universe, within certain (as yet undefined) parameters. They really need to get this boundary right so that B2E and B2B is also B2C.

Clouds will, of course, be highly virtualized, and so they will be able to support many of the older proprietary and standards-based IT systems and development environments. But why virtualize the new stuff, too? Why have B2E/B2B old and separately B2B/B2C new? We should want one cloud approach that newer apps and services can target directly, and then virtualize all the older stuff.

The question then is what constitutes the new "native" platform that is of, for, and by the standard cloud. If there is a fairly well-defined, standards-based approach to cloud computing that manages all these boundaries -- between public and private, between the old and the new of IT -- and which can serve as the target for all the new apps, services, data abstractions, modeling tools, workflow/policy/governance/ESBs and development needs -- well that's a business worth shooting for.

Who cares how the lock-in occurs -- this is the next $100 billion business. In other words, getting this right is a very big deal. The time is nigh for defining IT for at least a decade, maybe longer.

But like the Unix wars of old (and the app server wars of not-so-old) there will be jockeying for cloud implementation supremacy, brinkmanship over whose this or that is better, and a high-stakes race over who best defines the boundaries for users, developers, the channel, and partners. Who can woo the best?

What is different this time, in cloud time, is that there are few players that can play this game, less of a channel to be concerned about, and fewer developer communities to woo. Far more than in the past, developers can use the tools and frameworks of their choice, and the clouds will support them. Users also have new choices -- not between a Mac and a PC, between Unix and x86, between Java and .Net, between Linux and Windows -- but between cloud ecologies of vast services providers. The better the bundle of services (and therefore interop and cooperation), the better the customer attraction and loyalty. The seduction, the lock in, comes from packaging and execution on the services delivery.

More important than in past vendor sporting events, the business model rules. The cloud model that wins is the "preferred cloud model" that gives IT shops in enterprises high performance at manageable complexity and dramatically lower total costs. That same "preferred" cloud attracts the platform as a service developer crowd, allows mashups galore, allows for pay-as-you-use subscription fees. Viral adoption on a global scale. Oh, and the winning cloud also best plays out the subsidy from online advertising in all its forms and permutations.

Yes, we can expect several fruitful years of jockeying among the major vendors, the rain makers for the cloud providers -- and see gathering clouds of alliances among some, and against others. We're only seeing the very beginning of the next chapter of IT in the last few weeks of IT vendor news.

The cloud wars, however, won't be won on technical merits alone; it will be a real beauty pageant, too. It will be more of a seduction and an election, less of a sleight of hand and leveraging of incumbency ... and that will be a big switch.

From OpenWorld, Oracle and HP align forces to modernize legacy apps and spur IT transformation

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

The avenues to IT transformation are many, but the end result must include modernization of data, applications, systems, and operational best practices. It's no surprise then that the partnership of Oracle and Hewlett-Packard gained new ground Sept. 24 at Oracle OpenWorld in San Francisco.

The companies are providing products and services that holistically support the many required variables to successfully implement IT transformation. HP hardware and storage systems have been tuned to support Oracle databases, applications and software infrastructure for many years, and the partnership continues to expand in the age of SOA, legacy modernization, and cloud computing.

To learn more about how HP and Oracle will continue to act in concert, especially as enterprises seek the highest data center performance at the lowest cost, BriefingsDirect interviewed Paul Evans, worldwide marketing lead for IT transformation solutions at HP, and Lance Knowlton, vice president for modernization at Oracle. The discussion took place Sept. 23, 2008 at the Oracle OpenWorld conference.

The application modernization and IT transformation interview, moderated by yours truly from San Francisco, comes as part of a series of discussions with IT executives I’ll be doing this week from the Oracle OpenWorld conference. See the full list of podcasts and interviews.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Tuesday, September 23, 2008

Amid financial sector turmoil, combined HP-EDS solutions uniquely span public-private divide

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the conversation.

As we witness unprecedented turmoil throughout the world's financial trading centers, the question in IT circles is: How will this impact the providers of systems, software and services? Not all vendors will fare the same, and those that possess the solutions -- and have the track record and experienced personnel in place -- will be more likely to become part of the new high finance landscape, and the new public-private solutions.

The timing of Wall Street facing some of its darker days comes as HP and the newly acquired EDS unit are combining forces in unique ways. Between them, EDS and HP have been servicing the financial and government sectors for decades. Combined, HP and EDS are uniquely positioned to assist potentially massive transitions and unprecedented public-private interactions.

To learn more about how HP and EDS will newly align, especially amid financial sector turmoil, BriefingsDirect interviewed Maria Allen, vice president of Global Financial Services Industry solutions at EDS. The discussion took place Sept. 22, 2008 at the Oracle OpenWorld conference.

The Allen interview, moderated by yours truly from San Francisco, comes as part of a series of discussions with IT executives I’ll be doing this week from the conference. See the full list of podcasts and interviews.

Read a full transcript of the conversation.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Oracle's Beehive push portends a rethinking of the economics and methods of enterprise messaging

You have to give Oracle credit for persistence. The software giant has been trying to build out its groupware business for nearly 10 years, with only modest success so far.

Now, with Beehive, the next generation of its collaboration suite, Oracle may be sniffing some fresh and meaningful blood in the enterprise messaging waters.

The investment Oracle is making in Beehive, announced this week at the massive Oracle OpenWorld conference in San Francisco, signals an opportunity born more by the shifting sands beneath Microsoft Exchange and Outlook, than in any new-found performance breakthroughs from Oracle's developers.

Here's why: Economics and technology improvements, particularly around virtualization, are bringing more IT functionality generally back to the servers and off of the client PCs. As a result, the client-server relationship between Microsoft Exchange Server and the Outlook client -- and all those massive, costly, and risky .pst files on each PC -- is being broken.

The new relationship is server to browser, or server to thin-client ICA-fed receiver. Here's what the CIO of Bechtel told a group of analysts recently: "Spend your [IT] money on the backend, not on the front end."

The cost, security risks, and lack of extension of the data inside of Exchange, and on all those end device hard drives, add up to a non-sustainable IT millstone. Messaging times, they are a-changin'. Sure, some will just keep Exchange and deliver the client as Outlook Web Access, or via terminal services.

But what I hear from those CIOs now leveraging virtualization and evaluating VDI is that the Exchange-Outlook-SharePoint trifecta for Microsoft is near the top of their list of first strikes to slash costs and move this messaging beast onto the server resources pool, where it can be wrestled to the ground and re-architected in an SOA. They have similar thoughts about client-side spreadsheets like Excel, too, but that's another blog.

Yep, Exchange and its coterie are widely acknowledged as coming with an agility deficit and at a premium TCO -- but with commodity-priced features and functions. For all intents and purposes, email, calendar, file foldering, and even unified messaging functions are free, or at least low-cost features of larger application function sets or suites.

Enterprises are paying gold for copper, when it comes to messaging and groupware. And then they have to integrate it.

Oracle recognizes that as enterprises move from high-cost, low-flexibility client-server Exchange to services-based, server-based messaging -- increasingly extending messaging services in the context of SOA, network services like Cisco's SONA, web services, and cloud services -- they will be looking beyond Exchange.

Enterprises over the next several years will be undertaking a rethinking of messaging, from a paradigm, cost and feature set perspective. A big, honking, expensive client-server approach will give way to something cheaper, more flexible, able to integrate better, and more likely to play well in an on-premises cloud, where the data files are not messaging-system specific. Exchange is a Model T in a Thunderbird world.

Oracle, IBM, Google, Yahoo ... they all have their sights set on poaching and chipping away at the massive and vulnerable global Exchange franchise (just like MSFT did to Lotus and GroupWise). And that pulls out yet another tumbler from Microsoft's enterprise lock-in.

It won't happen overnight, but it will happen. Oracle is betting on it.

Sybase moves to spur process modeling agility with latest PowerDesigner

Sybase today announced a new version of its PowerDesigner tools, a model-driven approach to crafting and implementing business processes.

PowerDesigner 15 provides modeling and metadata management through a Link and Synch technology, helping to increase impact analysis and providing greater visibility for business analysts.

The main goal, according to Sybase, is to create greater agility by breaking down the silos that currently wall off the various IT elements from each other and from the business goals. See my thoughts on how CEP is stepping up to the plate on similar values. And we've seen a lot of action on improving business process modeling lately.

Key features of PowerDesigner 15 include:
  • The Link and Synch technology, which captures the intersections between all architectural layers and perspectives of the enterprise.
  • An impact analysis diagram that allows visualization of the cascading impact of change and the management of time and costs associated with changes.
  • Customizable support for homemade or industry standards.
  • A repository Web viewer that allows sharing EA metadata with all stakeholders.
PowerDesigner 15 is currently scheduled to be available on Oct. 31 and ranges in price from $7,495 to $11,495 per developer seat. More information is available at the PowerDesigner Web site.

Monday, September 22, 2008

Complex Event Processing goes mainstream with a boost from TIBCO's latest solution

We often hear a lot about how IT helps business with their "outcomes," and then we're shown a flow chart diagram with a lot of arrows and boxes ... that ultimately points to business "agility" in flashing lights.

Sometimes the dots connect, and sometimes there's a required leap of faith that IT spending X will translate into business benefits Y.

But a new box on the flow chart these days, Complex Event Processing (CEP), really does close the loop between what IT does and what businesses want to do. CEP actually builds on what business intelligence (BI), service-oriented architecture (SOA), cloud computing, business process modeling (BPM), and a few other assorted acronyms provide.

CEP is a great way for all the myriad old and new investments in IT to be more fully leveraged to accommodate the business needs of automating processes, managing complexity, reducing risk, and capturing excellence for repeated use.

Based on its proven heritage in financial services, CEP has a lot of value to offer many other kinds of companies as they seek to extract "business outcomes" from the IT departments' raft of services. That's why I think CEP's value should be directed at CEOs, line of business managers, COOs, CSOs, and CMOs -- not just the database administrators and other mandarins of IT.

That's because modern IT has elevated many aspects of data resources into services that support "events." So the place to mine for patterns of efficiency or waste -- to uncover excellence or risk -- is in the interactions of the complex events. And once you've done that, not only can you capture those good and bad events, you can execute on them to reduce the risks or to capture the excellence and instantiate it as repeatable processes.

And it's in this ability to execute within the domain of CEP that TIBCO Software today introduced TIBCO BusinessEvents 3.0. The latest version of this CEP harness solution builds on the esoteric CEP capabilities that program traders have used and makes them more mainstream, said TIBCO. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Making CEP mainstream through BusinessEvents 3.0 has required some enhancements, including:
  • Decision Manager, a new business-user interface that helps business users write rules and queries that tap into the power of CEP in their domain of expertise.
  • Events Stream Processing, a BusinessEvents query language that allows SQL-like queries to target event streams in real time, so that immediate action can be taken on patterns of interest (a conceptual sketch of this kind of continuous query follows this list).
  • Distributed BusinessEvents, a distributed cache and rules engine that provides massive scaling of events monitoring, as much as twice the magnitude of events monitoring previously possible.
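To make the continuous-query idea concrete, here is a minimal sketch of a sliding-window rule over an event stream, written in Python rather than TIBCO's BusinessEvents query language. The rule, the event names, the threshold, and the window size are all hypothetical.

    # Flag any account producing more than THRESHOLD failed-payment events inside
    # a sliding WINDOW_SECONDS window, so action can be taken while the pattern is
    # still unfolding -- the CEP idea of acting on events in flight.
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    THRESHOLD = 3
    recent = defaultdict(deque)   # account -> timestamps of recent failures

    def on_event(account, event_type, ts):
        """Feed each incoming event to the rule; return an alert when it fires."""
        if event_type != "payment_failed":
            return None
        window = recent[account]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:   # slide the window
            window.popleft()
        if len(window) > THRESHOLD:
            return f"ALERT: {account} had {len(window)} payment failures in {WINDOW_SECONDS}s"
        return None

    # Simulated stream
    for account, kind, ts in [("acct-9", "payment_failed", t) for t in (0, 10, 20, 30, 40)]:
        alert = on_event(account, kind, ts)
        if alert:
            print(alert)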
TIBCO claims that its CEP software comprises over 40 percent of the market share, more than twice the closest competitor. And that's in the context of 52 percent year over year CEP solutions growth, according to a recent IDC Study.

I think that CEP offers the ability to extract real and appreciated business value from a long history of IT improvements. If companies like BI, and they do, then CEP picks up where BI leaves off, and the combination of strong capabilities in BI and CEP is exactly what enterprises need now to provide innovation and efficiency in complex and distributed undertakings.

And TIBCO's products are pointing up how to take the insights of CEP into the realm of near real-time responses and the ability to identify and repeat effective patterns of business behaviors. Dare I say, "agility"?

Saturday, September 20, 2008

LogLogic updates search and analysis tools for conquering IT systems management complexity

Insight into operations has been a hallmark of modern business improvements, from integrated back-office applications to business intelligence (BI) to balanced scorecards and management portals.

But what do IT executives have to gain similar insight into the systems operations that support the business operations? Well, they have reams of disparate logs and systems analytics data that pour forth every second from all their network and infrastructure devices. Making sense of the data and leveraging the analytics to reduce risk of failure therefore becomes the equivalent of BI for IT.

Now a major BI for IT provider, LogLogic, has beefed up its flagship products with the announcement of LogLogic 4.6. Putting more data together in ways that can be quickly acted on helps companies gain critical visibility into their increasingly complex IT operations, while gaining ease of regulatory compliance along with improved security. [Disclosure: LogLogic is a sponsor of BriefingsDirect podcasts.]

The latest version of the log management tools from San Jose, Calif.-based LogLogic includes new features that help give enterprises a 360-degree view of how business operations are running, including dynamic range selection, graphical trending, and real-time reporting. Among the improvements are:
  • Index search user interface, including clustering by source, dynamic range selection, trending over time and graphical representation of search results
  • Search history, which automatically saves search criteria for later reuse
  • Forensics clipboard to annotate, organize, record and save up to 1000 messages per clipboard – up to 100 clipboards per user
  • Enhanced security via complex password creation
  • Enhanced backup/restore and failover, including incremental backup support and "backup now" capability.
The latest release provides improved search for IT intelligence, forensics workflow and advanced secure remote access control. LogLogic 4.6 will be rolled out for the company's family of LX, ST, and MX products, helping large- and mid-sized companies to capture, search and store their log data to improve business operations, monitor user activity, and meet industry standards for security and compliance.
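To make those search features a bit more concrete, here is a minimal sketch of that style of log search: a time-range filter, a term match, and clustering of the hits by source. The record layout is invented for the example and is not LogLogic's actual index format or interface.

    # Filter log records by time range and search term, then cluster hits by source.
    from collections import Counter
    from datetime import datetime

    LOGS = [
        {"time": "2008-09-20T09:15:00", "source": "fw-01", "message": "denied tcp 10.1.1.5"},
        {"time": "2008-09-20T09:16:30", "source": "fw-01", "message": "denied tcp 10.1.1.9"},
        {"time": "2008-09-20T09:17:02", "source": "web-03", "message": "login failed for admin"},
    ]

    def search(logs, term, start, end):
        """Return matching records plus a per-source hit count for clustering."""
        lo, hi = datetime.fromisoformat(start), datetime.fromisoformat(end)
        hits = [r for r in logs
                if lo <= datetime.fromisoformat(r["time"]) <= hi and term in r["message"]]
        return hits, Counter(r["source"] for r in hits)

    hits, by_source = search(LOGS, "denied", "2008-09-20T09:00:00", "2008-09-20T10:00:00")
    print(f"{len(hits)} hits; by source: {dict(by_source)}")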

I have talked extensively to the folks at LogLogic about the log-centered approach to dealing with IT's growing complexity, as systems and services multiply and are spurred on by the virtualization wildfire. Last week I posted a podcast, in which LogLogic CEO Pat Sueltz explained how log-management aids in visibility and creates a favorable return on investment (ROI) for enterprises.

LogLogic 4.6 will be available later this month as a free upgrade to current customers under Support contract. For new customers, pricing will start at $14,995 for the LX appliance, $53,995 for the ST appliance and $37,500 for the MX appliance.

Genuitec expands Pulse provisioning system beyond tools to Eclipse distros, eyes larger software management role

Genuitec, one of the founders of the Eclipse Foundation, has expanded the reach of its Pulse software provisioning system with the announcement of the Pulse "Private Label," designed to give companies control over their internal and external software distributions.

Until now, Pulse was designed for managing and standardizing software development tools in the Eclipse environment. With Private Label, enterprises can manage full enterprise software delivery for any Eclipse-based product or application suite.

Plans call for subsequently expanding Private Label into a full lifecycle management system for software beyond Eclipse. [Disclosure: Genuitec is a sponsor of BriefingsDirect podcasts.]

Private Label, which can be tailored to customer specifications, can be hosted either by Genuitec or within a corporate firewall to integrate with existing infrastructure. Customers also control the number of software catalogs, as well as their content. Other features include full custom branding and messaging, reporting of software usage, and control over the ability for end-users to customize their software profiles, if desired.

Last month, I sat down for a podcast with Todd Williams, vice president of technology at Genuitec, and we discussed the role of Pulse as a simple, intuitive way to install, update, and share custom configurations with Eclipse-based tools.

Coinciding with the release of Pulse Private Label is the release of Pulse 2.3 for Community Edition and Freelance users. Upgrades include performance improvements and catalog expansion. Pulse 2.3 Community Edition is a free service. Pulse 2.3 Freelance is a value-add service priced at $6 per month per user or $60/year. Pulse Private Label pricing is based on individual requirements.

More information is available at the Pulse site.

Wednesday, September 17, 2008

iTKO's SOA testing and validation role supports increasingly complex integration lifecycles

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: iTKO.

Read a full transcript of the discussion.

The real value of IT comes not from the systems themselves, but from managed and agile business processes in real-world use. Yet growing integration complexity, and the need to support process-level orchestrations of the old applications and new services, makes quality assurance at the SOA level challenging.

SOA, enterprise integration, virtualization and cloud computing place a premium on validating orchestrations at the process level before -- not after -- implementation and refinement. Process-level testing and validation also needs to help IT organizations reduce their labor and maintenance costs, while harmonizing the relationship between development and deployment functions.

iTKO, through its LISA product and solutions methods, has created a continuous validation framework for SOA and middleware integrations to address these issues. The goal is to make sure all of the expected outcomes in SOA-supported activities occur in a controlled test phase, not in a trial-and-error production phase that undercuts IT's credibility.
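As a rough illustration of what continuous validation can look like in practice -- not iTKO's LISA product or its API -- here is a minimal sketch of a scheduled check that invokes a service and asserts the expected business outcome. The endpoint, payload, and field names are hypothetical.

    # Each check calls a service and verifies the business-level outcome, so a
    # dependency that changes out from under you shows up as a failed check
    # rather than as a production surprise.
    import json
    import urllib.request

    CHECKS = [
        {
            "name": "order service returns a confirmation id",
            "url": "http://localhost:8080/orders",          # hypothetical test endpoint
            "payload": {"sku": "ABC-123", "quantity": 1},
            "expect": lambda resp: "confirmation_id" in resp,
        },
    ]

    def run_checks(checks):
        failures = []
        for check in checks:
            req = urllib.request.Request(
                check["url"],
                data=json.dumps(check["payload"]).encode(),
                headers={"Content-Type": "application/json"},
            )
            try:
                with urllib.request.urlopen(req, timeout=5) as raw:
                    body = json.load(raw)
                if not check["expect"](body):
                    failures.append(f"{check['name']}: unexpected response {body}")
            except Exception as exc:           # network error, timeout, bad JSON
                failures.append(f"{check['name']}: {exc}")
        return failures

    for failure in run_checks(CHECKS):
        print("FAILED:", failure)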

To learn more about performance and quality assurance issues around enterprise integration, middleware, and SOA, I recently interviewed John Michelsen, chief architect and founder of iTKO. [See additional background and solutions.]

Here are some excerpts from our discussion:
Folks who are using agile development principles and faster iterations of development are throwing services up fairly quickly -- and then changing them on a fairly regular basis. That also throws a monkey wrench into how that impacts the rest of the services that are being integrated.

That’s right, and we’re doing that on purpose. We like the fact that we’re changing systems more frequently. We’re not doing that because we want chaos. We’re doing it because it’s helping the businesses get to market faster, achieving regulatory compliance faster, and all of those good things. We like the fact that we’re changing, and that we have more tightly componentized the architecture. We’re not changing huge applications, but we’re just changing pieces of applications -- all good things.

Yet if my application is dependent upon your application, and you change it out from under me, your lifecycle impacts mine, and we have a “testable event” -- even though I’m not in a test mode at the moment. What are we going to do about this? We have to rethink the way that we do services lifecycles. We have to rethink the way we do integration and deployment.

If the world were as simple as we wanted it to be, we could have one vendor produce that system that is completely self-contained, self-managed, very visible or very "monitorable," if you will. That’s great, but that becomes one box of the dozens on the white board. The challenge is that not every box comes from that same vendor.

So we end up in this challenge where we’ve got to get that same kind of visibility and monitoring management across all of the boxes. Yet that’s not something that you just buy and that you get out of the box.

In a nutshell, we’ve got to be able to touch, from the testing point of view, all these different technologies. We have to be able to create some collaboration across all these teams, and then we have to do continuous validation of these business processes over time, even when we are not in lifecycles.

I can’t tell you how many times I’ve seen a customer who has said, “Well, we've run out and bought this ESB and now we’re trying to figure out how to use it.” I've said, “Whoa! You first should have figured out you needed it, and in what ways you would use it that would cause you to then buy it.”

We can’t build stuff, throw it over the wall into the production system to see if it works, and then have a BAM-type tool tell us -- once it gets into the statistics -- "By the way, they’re not actually catching orders. You’re not actually updating inventory or your account. Your customer accounts aren’t actually seeing an increase in their credit balance when orders are being placed."

That’s why we’ll start with the best practices, even though we’re not a large services firm. Then, we’ll come in with product, as we see the approach get defined. ... When you’re going down this kind of path, you’re going down a path to interconnect your systems in this same kind of ways. Call it service orientation or call it a large integration effort, either way, the outcome from a system’s point of view is the same.

What they’re doing is adopting these best practices on a team level so that each of these individual components is getting their own tests and validation. That helps them establish some visibility and predictability. It’s just good, old-fashioned automated test coverage at the component level. ... So this is why, as a part of lifecycles, we have to do this kind of activity. In doing so, we drive into value, we get something for having done our work.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: iTKO.

Read a full transcript of the discussion.

Monday, September 15, 2008

Desktone, Wyse bring Flash content to desktop virtualization delivery

Desktone hopes to overcome two major roadblocks to the adoption of virtual desktop infrastructure (VDI) with today's announcement of a partnership that will bring rich media to virtual desktops and a try-before-you-buy program.

In a bid to bring multimedia support to thin clients, Desktone of Chelmsford, Mass., and Wyse Technology, San Jose, Calif., announced at VMworld in Las Vegas that they are integrating Desktone dtFlash with Wyse TCX Multimedia, allowing companies to deliver Flash in a virtual desktop environment to thin client devices.

Adobe's Flash technology is becoming more widespread for enterprises and consumers today, for video and rich Internet application interfaces alike. A lack of Flash support on thin clients and for applications and desktops delivery via VDI has potentially delayed adoption of desktop virtualization.

Word has it that Citrix will also offer Flash support for its VDI offerings before the end of the year. It's essential that VDI providers knock down each and every excuse not to use virtual desktops, so they can do everything that full-running PCs do, only from the servers. Flash is a big item to fix.

Introduced last year, Wyse TCX Multimedia delivers rich PC-quality multimedia to virtual desktop users. It works with the RDP and ICA protocols that connect the virtual machines on the server to the client, accelerating and balancing workload to display rich multimedia on the client, often offloading the task from the server entirely.

Desktone dtFlash, introduced today, resides in the host virtual machine and acts as the interface between the Flash player and Wyse TCX. Together they allow users to run wide-ranging multimedia applications, including Flash, on their virtual desktops.

Another roadblock to virtualization is that many companies are hesitant to move to VDI because it requires substantial commitment of resources, and the companies are unsure of the benefits. To overcome this hesitancy, Desktone also announced a Desktop as a Service (DaaS) Pilot that will allow companies to explore the benefits of virtualization without having to build the environment themselves.

With pricing for the pilot starting at $7,500, enterprises use their own images and applications in a proof-of-concept that includes up to 50 virtual desktops. Desktone uses its proven DaaS best practices to jump-start the pilot, enabling customers to quickly ramp up. The physical infrastructure for this 30-day pilot is hosted by HP Flexible Computing Services, one of Desktone’s service provider partners.

This news joins last week's moves by SIMtone to bring more virtualization services to cloud providers. Citrix today also has some big news on moving its solutions toward a cloud provider value proposition.

SIMtone races to provide cloud-deployed offerings, including wireless device support

SIMtone advanced its cloud-computing offerings last week with a three-pronged approach that includes a universal cloud-computing platform, a virtual service platform (VSP), and a cloud-computing wireless-ready terminal.

The combined offerings from the privately held SIMtone, Durham, N.C., help pave the way for multiple cloud services to be created, managed and hosted centrally and securely in any data center, while allowing end users to access the services on the fly, through virtually any connected device, with a single user ID, the company said.

The SIMtone Universal Cloud Computing Platform enables network operators and customers to build and deliver multiple cloud services -- virtual desktops, desktop as a service (DaaS), software as a service (SaaS), or Web services.

The SIMtone VSP lets service providers of many stripes transform existing application and desktop infrastructure into cloud-computing infrastructures. SIMtone VSP supports any combination of VMware Server and ESX, Windows XP, Vista and Terminal Server hosts, multi-zone network security, and offers automated, user-activity driven, peak capacity-based guest machine management, load balancing, and failure recovery, radically reducing virtual data center real-estate and power requirements.

Pulling the effort together is the SNAPbook, a wireless-ready portable terminal that can access any services powered by the SIMtone platform. Based on Asus Eee PC solid state hardware, the SNAPbook operates without any local operating system or processing, with all computing tasks performed 100 percent in the cloud.

What was a virtualization value by these vendors -- at multiple levels, including desktop and apps virtualization -- has now struck the chord of "picks and shovels" for cloud providers. Citrix is this week extending the reach of its virtualization Delivery Center solutions to cloud providers as well.

This marks a shift in the market. Until now, most if not all "cloud providers" like Google, Amazon, Yahoo, et al, have built their own infrastructures and worked out virtualization on their own, often based on the open source Xen hypervisor. They keep these formulas for data center and cloud development and deployment as closely guarded secrets.

But SIMtone and Citrix -- and we should expect others like Desktone, Red Hat, HP and VMware to move fast too -- are creating what they hope will become de facto standards for cloud delivery of virtualized services. Google may not remake its cloud based on third-party vendors, but carriers, service providers and enterprises may just.

The winner of the "picks and shovels" for cloud infrastructure may well end up the next billion-dollar company in the software space. It should be an intense next few years for these players, especially as other larger software vendors (like Microsoft) also build, buy or partner their way in.

Indeed, just as Microsoft is bringing its Hyper-V hypervisor to market, the value has moved up a notch to the management and desktop delivery level. The company that manages virtualization best and fastest for the nascent cloud infrastructure market may well get snatched up before long.

Thursday, September 11, 2008

Systems log analytics offers operators performance insights that set stage for IT transformation

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: LogLogic.

Read a full transcript of the discussion.

Despite growing complexity, IT organizations need to reduce operations costs, increase security and provide more insight, clarity, and transparency across multiple IT systems -- even virtualized systems. A number of new tools and approaches are available for gaining contextual information and visibility into what goes on within IT infrastructure.

IT systems information gushes forth from an increasing variety of devices, as well as networks, databases, and lots of physical and virtual servers and blades. Putting this information all in one place, to be analyzed and exploited, far outweighs manual, often paper-based examination. The automated log forensics solutions that capture all the available systems information and aggregate and centralize that information are becoming essential to efficient IT management.
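As a simple illustration of that aggregate-and-centralize step, here is a minimal sketch that normalizes two different log feeds into one common record shape so a single analysis pass can cover them all. It is not LogLogic's collection mechanism; the formats and field names are invented for the example.

    # Normalize a syslog-style line and an application record into one shape,
    # then append both to a central store (a stand-in for a log warehouse).
    import re

    CENTRAL_STORE = []
    SYSLOG_RE = re.compile(r"^(?P<host>\S+) (?P<facility>\S+): (?P<message>.*)$")

    def normalize_syslog(line):
        m = SYSLOG_RE.match(line)                 # assumes the line fits the pattern
        return {"source": m["host"], "kind": m["facility"], "message": m["message"]}

    def normalize_app_record(rec):
        return {"source": rec["server"], "kind": "application", "message": rec["event"]}

    def collect(syslog_lines, app_records):
        """Normalize both feeds into the central store; return records added."""
        before = len(CENTRAL_STORE)
        CENTRAL_STORE.extend(normalize_syslog(l) for l in syslog_lines)
        CENTRAL_STORE.extend(normalize_app_record(r) for r in app_records)
        return len(CENTRAL_STORE) - before

    added = collect(
        ["router-2 kern: link down on port 7"],
        [{"server": "vm-web-11", "event": "checkout service restarted"}],
    )
    print(f"{added} records centralized; total stored: {len(CENTRAL_STORE)}")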

To learn more about systems logs analytics, I recently moderated a sponsored BriefingsDirect panel discussion podcast with Pat Sueltz, the CEO at LogLogic; Jian Zhen, senior director of product management at LogLogic, and Pete Boergermann, technical support manager at Citizens & Northern Bank.

Here are some excerpts:
When I think of the state of the art in terms of reducing IT costs, I look for solutions that can solve multiple problems at one time. One of the reasons that I find this interesting is that, first of all, you've got to be focused not just on IT operations, but also on the adjunct operations the firm offers out.

For example, security operations and controls, because of their focus areas, frequently look like they are in different organizations, but in fact, they draw from the same data. The same goes as you start looking at things like compliance or regulatory pieces.

When technologies get started, they tend to start in a disaggregated way, but as technology -- and certainly data centers -- have matured, you see that you have to be able to not only address the decentralization, but you have to be able to bring it all together in one point ... [This] undergirds the need for a product or solution to be able to work in both environments, in the standalone environment, and also in the consolidated environment.

There are a lot of logs and server systems sitting out in various locations. One of the biggest issues is having a solution that can capture, aggregate, and centralize all that information. ... Approximately 30 percent of the data in the data centers is just log data, information that's being spewed out by our devices, applications, and servers.

We have the Log Data Warehouse that basically can suck information from networks, databases, systems, users, or applications, you name it. Anything that can produce a log, we can get started with, and then store it forever, if a customer desires, either for regulatory reasons or because of compliance issues with industry mandates and such.
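To make that collection step concrete, here is a minimal sketch, in Python, of the kind of aggregation a log warehouse performs: tail a few syslog-style files and land every entry in one central, queryable store. The source paths and the schema are illustrative assumptions, not LogLogic's actual implementation.

```python
import sqlite3
import time
from pathlib import Path

# Hypothetical log sources; a real appliance would also accept syslog over
# the network, SNMP traps, database audit trails, and application logs.
SOURCES = [Path("/var/log/router.log"), Path("/var/log/app.log")]

def centralize(db_path="central_logs.db"):
    """Collect every line from each source into one queryable store."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS logs ("
        " collected_at REAL, source TEXT, message TEXT)"
    )
    for source in SOURCES:
        if not source.exists():
            continue  # skip sources not present on this host
        with source.open() as handle:
            for line in handle:
                conn.execute(
                    "INSERT INTO logs VALUES (?, ?, ?)",
                    (time.time(), str(source), line.rstrip("\n")),
                )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    centralize()
```

A real collector would also normalize timestamps, index the store for search, and manage retention; the point is simply that everything lands in one place where it can be queried and kept as long as policy requires.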

[But then] how do you bring operational intelligence out and give the CIOs the picture that they need to see in order to make the right business decisions? ... People have been doing a lot of integration, taking essentially LogLogic's information and integrating it into their portals to show a more holistic view of what's happening, combining information from system monitoring, as well as log management, and putting it into a single view, which allows them to troubleshoot things a lot faster.

We have so many pieces of network gear out there, and a lot of that gear doesn't get touched for months on end. We have no idea what's going on at the port level with some of that equipment. Are the ports acting up? Are there PCs that are not configured correctly? The time it takes to log into each one of those devices and gather that information is simply overwhelming.

Reviewing those logs is an enormous task, because there's so much data there. Looking at that information is not fun to begin with, and you really want to get to the root of the problem as quickly as possible. ... Weeding out some of the frivolous and extra information and then alerting on the information that you do want to know about is -- I just can't explain in enough words how important that is to helping us get our jobs done a lot quicker.
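The weeding-out step described above can be illustrated with a short, hypothetical filter: keep only the entries that match patterns an operator has declared alert-worthy and drop the rest as routine noise. The patterns and sample lines are assumptions for illustration, not any vendor's rule set.

```python
import re

# Illustrative patterns an operator might care about; a real deployment
# would load these from a maintained rule set rather than hard-code them.
ALERT_PATTERNS = [
    re.compile(r"link down", re.IGNORECASE),
    re.compile(r"authentication failure", re.IGNORECASE),
    re.compile(r"%\w+-[0-2]-"),  # Cisco-style severity 0-2 messages
]

def triage(lines):
    """Yield only the log lines that match an alert-worthy pattern."""
    for line in lines:
        if any(pattern.search(line) for pattern in ALERT_PATTERNS):
            yield line  # everything else is discarded as routine noise

if __name__ == "__main__":
    sample = [
        "Sep 11 09:14:02 sw01 %LINK-3-UPDOWN: Interface Gi0/1, changed state to up",
        "Sep 11 09:14:05 sw01 link down on port Gi0/7",
        "Sep 11 09:14:09 srv02 sshd[411]: authentication failure for user admin",
    ]
    for alert in triage(sample):
        print("ALERT:", alert)
```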

I think of taking control of the information lifecycle. And, not just gathering pieces, but looking at it in terms of the flow of the way we do business and when we are running IT systems. ... You've got to know what’s known and unknown, and then be able to assess that analysis -- what's happening in real-time, what's happening historically. Then, of course, you've got to be able to apply that with what's going on and retain it. ... We've also got to be able to work with [analytics] just as the systems administrators and the IT and the CSOs want to see it.

I like to use the term "operational intelligence," because that's really intelligence for the IT operations. Bringing that front and center, and allowing CIOs to make the right decisions is extremely critical for us.

It's all about getting that improved service delivery, so that we can eliminate downtime due to, for example, mis-configured infrastructure. That's what I think of in terms of the value.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: LogLogic.

Read a full transcript of the discussion.

Tuesday, September 9, 2008

ActiveVOS 6.0 helps extend SOA investments to the level of business-process outcomes

Active Endpoints has propelled business-process services to the forefront with the general availability release today of ActiveVOS 6.0, an integrated solution designed to free companies and developers from complexity and fragmentation in assembling business processes.

ActiveVOS, a standards-based orchestration and business process management system, permits developers, business analysts, and architects to collaborate across IT and business boundaries through an integrated visual tool.

The latest product from the Waltham, Mass., company includes integrated process modeling; a design, testing, debugging, and deployment environment; reporting and consoles; and a tightly integrated complex event processing (CEP) engine.

CEP helps extend services-based applications, but until now, it has required users to integrate yet another server into their applications and to manage the complexity of integrating the application with the CEP engine. ActiveVOS eliminates this challenge by providing a fully capable CEP engine.

Users select which events generated by the execution engine should trigger CEP events. In addition, these selections are made at deployment time, meaning that developers can add CEP capabilities to running applications, or modify them, at will.
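As a generic illustration of deploy-time event selection -- a hypothetical sketch, not the ActiveVOS API -- the idea is that the set of engine events feeding a CEP rule lives in deployment configuration, so it can be changed without touching the process logic.

```python
import time
from collections import deque

# Hypothetical deploy-time selection of which engine events feed the CEP rule.
# Editing this set changes event processing without modifying the process model.
SELECTED_EVENTS = {"order.received", "invoice.overdue"}

class BurstRule:
    """Toy CEP rule: flag when too many selected events arrive within a window."""

    def __init__(self, window_seconds=10.0, max_events=5):
        self.window_seconds = window_seconds
        self.max_events = max_events
        self.timestamps = deque()

    def on_event(self, event_type, timestamp=None):
        if event_type not in SELECTED_EVENTS:
            return None  # events not selected at deployment are ignored
        now = timestamp if timestamp is not None else time.time()
        self.timestamps.append(now)
        # Drop timestamps that have fallen outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) > self.max_events:
            return f"CEP alert: {len(self.timestamps)} selected events in {self.window_seconds}s"
        return None
```

Feeding the rule a burst of "order.received" events within the window trips the alert; editing SELECTED_EVENTS points the same rule at different engine events without redeploying the process.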

Standards implemented in ActiveVOS 6.0 include Business Process Modeling Notation (BPMN), Business Process Execution Language (BPEL) and human task management via the BPEL4People and WS-Human Task specifications.

Analysts can import models and documentation of existing applications, including Microsoft Visio drawings, directly into the graphical BPMN designer to document existing business processes and create new processes using a standards-based designer.

BPMN models are automatically transformed to executable BPEL, allowing developers to provide the implementation details necessary to turn the logical model into a running application. BPEL processes can also be transformed into BPMN, allowing the developer to document existing processes for analysts.

ActiveVOS permits developers to reuse plain old Java objects (POJOs) as native web services, and processes can be thoroughly tested and simulated, even when there are no actual services available during the testing phase. Because ActiveVOS is standards-based, it can go from design to execution without the need for custom code at execution time.

Dashboards, integrated reporting, and a universal console support the needs of operations staff and management.

Active Endpoints' latest packaging and integration, along with the emphasis on the business analyst-level process and visualization tools, strikes me as what the market is looking for at this stage of SOA and BPM.

The way they package and integrate their tools helps reduce complexity in a unique way. I'd say that they have a fuller package as a solution than what I've seen elsewhere. And the depth of ActiveVOS OEM use testifies to the technical capabilities and adherence to standards.

ActiveVOS 6.0 is available for download, and has a free, 30-day trial. Pricing is set at $12,000 per CPU socket for deployment licenses. Development licenses are priced at $5,000 per CPU socket.

Friday, September 5, 2008

Red Hat buys Qumranet, adds gasoline to the spreading VDI bonfire

Open-source giant Red Hat has upped the ante in the PC desktop virtualization market with its acquisition of Qumranet, Inc. in a $107-million deal announced this week.

This acquisition clearly raises the stakes in the race for virtual desktop infrastructure (VDI) solutions. I used to call VDI "desktop as a service (DaaS)," and still think that works pretty well. Anyway, the Red Hat purchase comes on the heels of HP's major virtualization push announced this week, which includes a large VDI component. [See a sponsored podcast on HP's virtualization solutions.]

The Red Hat purchase of Sunnyvale, Calif.-based Qumranet's kernel-based virtual machine (KVM) platform and SolidICE VDI solution is targeted at enterprise customers seeking to cut the total cost of providing applications, web access and runtime features to the client edge.

The acquisition of Qumranet gives the Raleigh, N.C.-based Red Hat a more comprehensive portfolio of virtualization offerings, including:
  • An open-source operating system with built-in virtualization.

  • An embedded hypervisor that supports major operating systems.

  • A consistent management platform for both virtual and physical systems.

  • A cloud and grid management solution.

  • Advanced, high-speed inter-application messaging.

  • An integrated security infrastructure.
SolidICE debuted in April, just weeks before Citrix unveiled its updated XenDesktop, putting Qumranet -- and now Red Hat -- head-to-head with Citrix and VMware in the desktop virtualization arena. Microsoft may well take its forthcoming Hyper-V in a VDI direction, but for now seems content to partner with Citrix on VDI. Sun Microsystems should own this market, but opted to hand over Java to the world and buy a tape drive company instead.

SolidICE is a high-performance, scalable virtualization solution built specifically for desktops, and not, Red Hat says, as a retrofit from server virtualization (slap!). It is based on the Simple Protocol for Independent Computing Environments (SPICE) and enables a Windows or Linux desktop to run in a virtual machine hosted on a central server or datacenter.
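For a sense of what hosting a SPICE-connected desktop on a central server involves, here is a minimal sketch using the libvirt Python bindings on a KVM host. The guest definition is deliberately stripped down -- no storage or networking -- and is not SolidICE itself; the names and settings are assumptions for illustration.

```python
import libvirt  # libvirt Python bindings; assumes a local KVM/QEMU host

# Stripped-down guest definition for illustration only: it omits storage and
# networking and simply requests a SPICE display that a thin client can attach to.
GUEST_XML = """
<domain type='kvm'>
  <name>vdi-desktop-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <graphics type='spice' autoport='yes' listen='0.0.0.0'/>
    <video><model type='qxl'/></video>
  </devices>
</domain>
"""

def define_desktop():
    """Define (but do not start) a SPICE-enabled desktop VM on the host."""
    conn = libvirt.open("qemu:///system")
    try:
        domain = conn.defineXML(GUEST_XML)
        print("Defined guest:", domain.name())
    finally:
        conn.close()

if __name__ == "__main__":
    define_desktop()
```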

Virtualization has been around for decades, mostly on mainframes. Its foray into the desktop market was originally hampered by reliability and security issues. However, recent technological advances have ramped up interest and given virtualization a new head of steam. Such vendors as HP are seemingly confident that the performance issues are no longer an inhibitor, just as the economic drivers for virtualization (like energy conservation) are mounting fast.

Red Hat says that it doesn't expect the acquisition to contribute anything substantial to its bottom line in the fiscal year that ends February 28, 2009, but the company is looking at $20 million in added revenue the following year.

In a nutshell, Qumranet and VDI fit Red Hat to a "T" -- with the service and maintenance of centralized server-based clients just gravy on the already robust Red Hat infrastructure support business. VDI allows Red Hat to take its model to the PC, without leaving the datacenter. And it allows the promulgation of Linux as the client OS in a much more expedient fashion than taking on Redmond on the desktop.

As I told NewsFactor Network, the market for VDI could be in store for a large growth spurt. VDI simply solves too many problems, with very little disruption for end users, to be ignored.

VDI, somewhat ironically, may also work well for market mover Microsoft as it seeks to slow the momentum to outright web-based and OSS/LAMP-supported applications and services for large businesses. Microsoft must realize that enterprises have had it with the high cost of maintaining and managing the traditional Windows OS in all its client-side permutations.

Not even a $300 million ad campaign for Vista can stop the addition and subtraction that spells this fact out. The math simply does not lie. Help desk costs to fix user config-type and malware issues are killing IT budgets.

Yet (just in time!) VDI allows Microsoft to keep the apps as Windows apps and retain the desktop OS license fees -- even if they are virtualized and server-based -- and VDI on Windows keeps developers and ISVs writing new apps, and updating old ones, to run on ... Windows. VDI allows converting client-server apps into Windows Server apps, without turning them into web apps.

Essentially, at the same time, virtualized and server-based VDI delivery of Windows apps and Windows desktop functionality allows enterprises to cut total costs, reuse aging desktop hardware, streamline updates and migrations, and slash security and privacy/control concerns (by maintaining management at the datacenter).

Help desks can actually be pared back, folks. Sorry, Ashish. Data can be kept safe on servers, not out in the strange world of lost hard drives and corporate espionage. Indeed, the U.S. Dept. of Defense (DoD) and other three-letter spy agencies use VDI extensively. Nothing on the client but chips and dips. If you can do it there, you can do it anywhere.

Now, as Red Hat (and its partner IBM?) seeks to enter the VDI space aggressively and perhaps add Linux as the spoiler runtime, Microsoft will need to accelerate its VDI initiatives. I expect MSFT to become the leader in VDI (perhaps via major acquisitions), as a hedge against Google, Red Hat, FOSS, the web, compute clouds, Amazon, IBM, and the far too high cost of traditional Windows clients.

Speaking of IBM, VDI offers Big Blue a way to play to all its global strengths -- infrastructure and services (green IT) -- while moving back into the client solutions (and end-to-end) value business in a potentially Big, Big way. There's no reason why HP and IBM won't be huge beneficiaries of VDI, even as Microsoft makes it easier for them based on its own need to move quickly in this direction.

Here's a dark horse thought: If you can inject search- and web-based ads into web/SaaS apps, why could you not inject them into VDI-delivered apps? There could well be an additional business model of VDI-delivered desktops and apps supported by targeted ads. Telcos, cable providers, and service providers might (if they were smart) give away the PC/MID hardware, include the VDI/DaaS as part of triple-play connection or premium service fees, and monetize it all through relevant ads embedded intelligently in virtualized apps delivery. Nawwww!

Trust me: keep an eye on VDI. It has the potential to rock the IT market every bit as much as Google/Yahoo/Amazon/SalesForce.com/SaaS -- only this trend hits the enterprise directly and fully. Incidentally, cloud computing as a private enterprise endeavor hugely supports the viability and economic rationale for VDI.

It's nice when IT megatrends align so well.

Tuesday, September 2, 2008

Interview: HP's virtualization services honcho John Bennett on 'rethinking virtualization'

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Hewlett-Packard announced a series of wide-ranging virtualization products, services and initiatives on Sept. 2. The drive indicates a global and long-term surge by HP on managing solutions for virtualization, but in the context of business outcomes and in a management framework that includes larger IT transformation strategies.

I conducted an earlier panel discussion on the HP announcements and vision, but also decided to go to the top and interview the visionary behind the virtualization strategy, John Bennett, the worldwide director of HP's data center transformation solutions and also the HP Technology Solutions Group (TSG) lead for virtualization.

Here are some excerpts from our chat:
We see large numbers of customers, certainly well over half, who have actively deployed virtualization projects. We seem to be at a tipping point in terms of everyone doing it, wanting to do it, or wanting to do even more. ... We see virtualization being driven more as tactical or specific types of IT projects.

It's not uncommon to see customers starting out, either to just reduce costs, to improve the efficiency in utilization of the assets they have, or using virtualization to address the issues they might have with energy cost, energy capacity or sometimes even space capacity in the data center. But, it's very much focused around IT projects and IT benefits.

The interesting thing is that as customers get engaged in these projects, their eyes start to open up in terms of what else they can do with virtualization. Customers who've already done some virtualization work realize there are interesting manageability and flexibility options for IT. "I can provision servers or server assets more quickly. I can be a little more responsive to the needs of the business. I can do things a little more quickly than I could before." And, those clearly have benefits to IT with the right value to the business.

Then, they start to see that there are interesting benefits around availability, being able to reduce or eliminate planned downtime, and also to respond much more quickly and expeditiously to unplanned downtime. That then lends itself to the conversation around disaster recovery, and into business continuity, if not continuous computing and disaster tolerance.

It's a very interesting evolution of things with increasing value to the business, but it's very much stepwise, and today tends to be focused around IT benefits. We think that's kind of missing the opportunity. ... The real business value to virtualization comes in many other areas that are much more critically important to the business.

One of the first is having an IT organization that is able to respond to dynamically changing needs in real-time, increasing demands for particular applications or business services, being able to throw additional capacity very quickly where it's needed, whether that's driven by seasonal factors or whether it's driven by just systemic growth in the business.

We see people looking at virtualization to improve the organization's ability to roll out new applications and business services much more quickly. We also see that they're gaining some real value in terms of agility and flexibility in having an IT organization that can be highly responsive to whatever is going on in the business, short term and long term.

Yes, we see both pitfalls, i.e., problems that arise from not taking a comprehensive approach, and we see missed opportunities, which is probably the bigger loss for an organization. They could see what the potential of virtualization was, but they weren't able to realize it, because their implementation path didn't take into account everything they had to in order to be successful.

This is where we introduce the idea of rethinking virtualization, and we describe it as rethinking virtualization in business terms. It means looking at maximizing your business impact first by taking a business view of virtualization. Then, it maximizes the IT impact by taking a comprehensive view of virtualization in the data center. Then, it maximizes the value to the organization by leveraging virtualization for client implementations, where it makes sense.

But, it's always driven from a business perspective -- what is the benefit to the business, both quantitative and qualitative -- and then drilling down. ... I want to be able to drill down from the business processes and the business service management and automation tools into the infrastructure management, which in turn drills down into the infrastructure itself.

Is the infrastructure designed to be run and operated in a virtualized environment? Is it designed to be managed from an energy control point of view for example? Is it designed to be able to move virtual resources from one physical server to another, without requiring an army of people?

Part of the onus is on HP in this case to make sure that we're integrating and implementing support for virtualization into all of the components in the data center, so that it works and we can take advantage of it. But, it's up to the customer also to take this business and data center view of virtualization and look at it from an integration point of view.

If you do virtualization as point projects, what we've seen is that you end up with management tools and processes that are outside of the domain of the historical investments you've made. ... We see virtual environments that are disconnected from the insight and controls and governance and policy procedures put in for IT. This means that if something happens at a business-services level, you don't quite know how to go about fixing it, because you can't locate it.

That's why you really want to take this integrated view from a business-services point of view, from an infrastructure and infrastructure management point of view, and also in terms of your client architectures.

Enterprises can lower the cost of IT operations implicitly by reducing the complexity of it and explicitly by having very standardized and simple procedures covering virtual and physical resources, which in conjunction with the other cost savings, frees up people to work on other projects and activities. Those all also contribute to reduce costs for the business, although they are secondary effects in many cases.

We see customers being able to improve the quality of service. They're able to virtually eliminate unplanned downtime, especially where it's associated with the base hardware or with the operating environments themselves. They're able to reduce unplanned downtime, because if you have an incident, you are not stuck to a particular server and trying to get it back up and running. You can restart the image on another device, on another virtual machine, restore those services, and then you have the time to deal with the diagnosis and repair at your convenience. It's a much saner environment for IT.
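As a rough illustration of that recovery step -- a sketch assuming the libvirt Python bindings, shared storage between hosts, and made-up host and guest names, not HP's own tooling -- restarting the same image on a standby host can be only a few lines:

```python
import libvirt  # assumes the libvirt Python bindings are installed

def restart_on_standby(guest_name="erp-frontend",
                       standby_uri="qemu+ssh://standby-host/system"):
    """Start an already-defined guest on a standby host after a primary failure.

    Assumes the guest's definition and disk images are visible to the standby
    host (for example, via shared storage); the names and URI are illustrative.
    """
    conn = libvirt.open(standby_uri)
    try:
        domain = conn.lookupByName(guest_name)
        if not domain.isActive():
            domain.create()  # boots the same image on the standby hardware
            print(f"{guest_name} restarted on standby host")
    finally:
        conn.close()

if __name__ == "__main__":
    restart_on_standby()
```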

We see a large number of customers spending less than 30 percent of their IT budget on business priorities and growth initiatives, and 70 percent or more on management and maintenance. With virtualization and with these broader transformational initiatives, you can really flip the ratio around.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.