Monday, September 22, 2008

Complex Event Processing goes mainstream with a boost from TIBCO's latest solution

We often hear a lot about how IT helps businesses with their "outcomes," and then we're shown a flow chart with a lot of arrows and boxes ... that ultimately points to business "agility" in flashing lights.

Sometimes the dots connect, and sometimes there's a required leap of faith that IT spending X will translate into business benefits Y.

But a new box on the flow chart these days, Complex Event Processing (CEP), really does close the loop between what IT does and what businesses want to do. CEP builds on what business intelligence (BI), service-oriented architecture (SOA), cloud computing, business process modeling (BPM), and a few other assorted acronyms provide.

CEP is a great way for all the myriad old and new investments in IT to be more fully leveraged to accommodate the business needs of automating processes, managing complexity, reducing risk, and capturing excellence for repeated use.

Based on its proven heritage in financial services, CEP has a lot of value to offer many other kinds of companies as they seek to extract "business outcomes" from the IT departments' raft of services. That's why I think CEP's value should be directed at CEOs, line of business managers, COOs, CSOs, and CMOs -- not just the database administrators and other mandarins of IT.

That's because modern IT has elevated many aspects of data resources into services that support "events." So the place to mine for patterns of efficiency or waste -- to uncover excellence or risk -- is in the interactions of these complex events. And once you've done that, not only can you capture those good and bad events, you can execute on them to reduce the risks, or to capture that excellence and instantiate it as repeatable processes.

And it's in this ability to execute within the domain of CEP that TIBCO Software today introduced TIBCO BusinessEvents 3.0. The latest version of this CEP solution builds on the esoteric CEP capabilities that program traders have used and makes them more mainstream, said TIBCO. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Making CEP mainstream through BusinessEvents 3.0 has required some enhancements, including:
  • Decision Manager, a new business-user interface that helps business users write rules and queries that tap into the power of CEP in their domain of expertise.
  • Event Stream Processing, a BusinessEvents query language that allows SQL-like queries to target event streams in real time, so that immediate action can be taken on patterns of interest.
  • Distributed BusinessEvents, a distributed cache and rules engine that provides massive scaling of event monitoring -- as much as twice the magnitude previously possible.
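To make the idea concrete: at its core, CEP means standing rules that watch a stream of events and fire when a pattern emerges inside a time window. The sketch below is a generic illustration of that concept, not TIBCO's BusinessEvents API; the event shape, field names, and fraud rule are all assumptions for the example.

```python
from collections import deque, defaultdict

# Illustrative sliding-window rule: flag any account that produces three or
# more "login_failed" events inside a 60-second window. The event shape
# (a dict with "type", "account", and "ts" fields) is assumed for this sketch.
WINDOW_SECONDS = 60
THRESHOLD = 3

def make_detector(window=WINDOW_SECONDS, threshold=THRESHOLD):
    recent = defaultdict(deque)  # account -> timestamps of recent failures

    def on_event(event):
        if event["type"] != "login_failed":
            return None
        q = recent[event["account"]]
        q.append(event["ts"])
        # Drop timestamps that have aged out of the window.
        while q and event["ts"] - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            return {"alert": "possible_fraud", "account": event["account"]}
        return None

    return on_event

detect = make_detector()
stream = [
    {"type": "login_failed", "account": "a1", "ts": 0},
    {"type": "login_ok", "account": "a1", "ts": 10},
    {"type": "login_failed", "account": "a1", "ts": 20},
    {"type": "login_failed", "account": "a1", "ts": 45},
]
alerts = [a for a in (detect(e) for e in stream) if a]
```

A product like BusinessEvents generalizes this pattern: rules are declared rather than hand-coded, and the engine evaluates them across distributed caches of in-flight events.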
TIBCO claims that its CEP software holds more than 40 percent market share, more than twice that of its closest competitor. And that's in the context of 52 percent year-over-year growth in CEP solutions, according to a recent IDC study.

I think that CEP offers the ability to extract real and appreciated business value from a long history of IT improvements. If companies like BI, and they do, then CEP picks up where BI leaves off, and the combination of strong capabilities in BI and CEP is exactly what enterprises need now to provide innovation and efficiency in complex and distributed undertakings.

And TIBCO's products are showing how to take the insights of CEP into the realm of near real-time responses and the ability to identify and repeat effective patterns of business behavior. Dare I say, "agility"?

Saturday, September 20, 2008

LogLogic updates search and analysis tools for conquering IT systems management complexity

Insight into operations has been a hallmark of modern business improvements, from integrated back-office applications to business intelligence (BI) to balanced scorecards and management portals.

But what do IT executives have to gain similar insight into the systems operations that support the business operations? Well, they have reams of disparate logs and systems analytics data that pour forth every second from all their network and infrastructure devices. Making sense of that data and leveraging the analytics to reduce the risk of failure therefore becomes the equivalent of BI for IT.

Now a major BI-for-IT provider, LogLogic, has beefed up its flagship products with the announcement of LogLogic 4.6. Putting more data together in ways that can be quickly acted on helps companies gain critical visibility into their increasingly complex IT operations, while easing regulatory compliance and improving security. [Disclosure: LogLogic is a sponsor of BriefingsDirect podcasts.]

The latest version of the log management tools from San Jose, Calif.-based LogLogic includes new features that help give enterprises a 360-degree view of how business operations are running, including dynamic range selection, graphical trending, and real-time reporting. Among the improvements are:
  • Index search user interface, including clustering by source, dynamic range selection, trending over time and graphical representation of search results
  • Search history, which automatically saves search criteria for later reuse
  • Forensics clipboard to annotate, organize, record and save up to 1000 messages per clipboard – up to 100 clipboards per user
  • Enhanced security via complex password creation
  • Enhanced backup/restore and failover, including incremental backup support and "backup now" capability.
The latest release provides improved search for IT intelligence, forensics workflow and advanced secure remote access control. LogLogic 4.6 will be rolled out for the company's family of LX, ST, and MX products, helping large- and mid-sized companies to capture, search and store their log data to improve business operations, monitor user activity, and meet industry standards for security and compliance.
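Conceptually, features like clustering by source and trending over time boil down to parsing raw log lines into records and then grouping and counting them. The toy sketch below illustrates that idea only; the syslog-like line format, field names, and sample entries are assumptions for the example, not LogLogic's actual indexing.

```python
import re
from collections import Counter

# Assumed line format for this sketch: "<host> <HH:MM:SS> <message>".
LINE = re.compile(r"^(?P<host>\S+) (?P<hh>\d{2}):\d{2}:\d{2} (?P<msg>.*)$")

def cluster_by_source(lines):
    """Count parsed log lines per host (source) and per hour of day (trend)."""
    by_host, by_hour = Counter(), Counter()
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue  # skip lines that don't match the assumed format
        by_host[m.group("host")] += 1
        by_hour[int(m.group("hh"))] += 1
    return by_host, by_hour

logs = [
    "fw1 09:15:02 DENY tcp 10.0.0.8:4431",
    "fw1 09:47:11 DENY tcp 10.0.0.9:3322",
    "web2 10:02:55 GET /index.html 200",
]
by_host, by_hour = cluster_by_source(logs)
```

An appliance like the LX does this at scale against indexed log stores, with the dynamic range selection and graphical trending layered on top of the same kind of grouped counts.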

I have talked extensively to the folks at LogLogic about the log-centered approach to dealing with IT's growing complexity, as systems and services multiply and are spurred on by the virtualization wildfire. Last week I posted a podcast, in which LogLogic CEO Pat Sueltz explained how log-management aids in visibility and creates a favorable return on investment (ROI) for enterprises.

LogLogic 4.6 will be available later this month as a free upgrade to current customers under Support contract. For new customers, pricing will start at $14,995 for the LX appliance, $53,995 for the ST appliance and $37,500 for the MX appliance.

Genuitec expands Pulse provisioning system beyond tools to Eclipse distros, eyes larger software management role

Genuitec, one of the founders of the Eclipse Foundation, has expanded the reach of its Pulse software provisioning system with the announcement of the Pulse "Private Label," designed to give companies control over their internal and external software distributions.

Until now, Pulse was designed for managing and standardizing software development tools in the Eclipse environment. With Private Label, enterprises can manage full enterprise software delivery for any Eclipse-based product or application suite.

Plans call for subsequently expanding Private Label into a full lifecycle management system for software beyond Eclipse. [Disclosure: Genuitec is a sponsor of BriefingsDirect podcasts.]

Private Label, which can be tailored to customer specifications, can be hosted either by Genuitec or within a corporate firewall to integrate with existing infrastructure. Customers also control the number of software catalogs, as well as their content. Other features include full custom branding and messaging, reporting of software usage, and control over the ability for end-users to customize their software profiles, if desired.

Last month, I sat down for a podcast with Todd Williams, vice president of technology at Genuitec, and we discussed the role of Pulse as a simple, intuitive way to install, update, and share custom configurations with Eclipse-based tools.

Coinciding with the release of Pulse Private Label is the release of Pulse 2.3 for Community Edition and Freelance users. Upgrades include performance improvements and catalog expansion. Pulse 2.3 Community Edition is a free service. Pulse 2.3 Freelance is a value-add service priced at $6 per user per month, or $60 per year. Pulse Private Label pricing is based on individual requirements.

More information is available at the Pulse site.

Wednesday, September 17, 2008

iTKO's SOA testing and validation role supports increasingly complex integration lifecycles

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: iTKO.

Read a full transcript of the discussion.

The real value of IT comes not from the systems themselves, but from managed and agile business processes in real-world use. Yet growing integration complexity, and the need to support process-level orchestrations of the old applications and new services, makes quality assurance at the SOA level challenging.

SOA, enterprise integration, virtualization and cloud computing place a premium on validating orchestrations at the process level before -- not after -- implementation and refinement. Process-level testing and validation also needs to help IT organizations reduce their labor and maintenance costs, while harmonizing the relationship between development and deployment functions.

iTKO, through its LISA product and solutions methods, has created a continuous validation framework for SOA and middleware integrations to address these issues. The goal is to make sure all of the expected outcomes in SOA-supported activities occur in a controlled test phase, not in a trial-and-error production phase that undercuts IT's credibility.

To learn more about performance and quality assurance issues around enterprise integration, middleware, and SOA, I recently interviewed John Michelsen, chief architect and founder of iTKO. [See additional background and solutions.]

Here are some excerpts from our discussion:
Folks who are using agile development principles and faster iterations of development are throwing services up fairly quickly -- and then changing them on a fairly regular basis. That also throws a monkey wrench into the rest of the services that are being integrated.

That’s right, and we’re doing that on purpose. We like the fact that we’re changing systems more frequently. We’re not doing that because we want chaos. We’re doing it because it’s helping the businesses get to market faster, achieving regulatory compliance faster, and all of those good things. We like the fact that we’re changing, and that we have more tightly componentized the architecture. We’re not changing huge applications, but we’re just changing pieces of applications -- all good things.

Yet if my application is dependent upon your application, and you change it out from under me, your lifecycle impacts mine, and we have a “testable event” -- even though I’m not in a test mode at the moment. What are we going to do about this? We have to rethink the way that we do services lifecycles. We have to rethink the way we do integration and deployment.

If the world were as simple as we wanted it to be, we could have one vendor produce that system that is completely self-contained, self-managed, very visible or very "monitorable," if you will. That’s great, but that becomes one box of the dozens on the white board. The challenge is that not every box comes from that same vendor.

So we end up in this challenge where we’ve got to get that same kind of visibility and monitoring management across all of the boxes. Yet that’s not something that you just buy and that you get out of the box.

In a nutshell, we’ve got to be able to touch, from the testing point of view, all these different technologies. We have to be able to create some collaboration across all these teams, and then we have to do continuous validation of these business processes over time, even when we are not in lifecycles.

I can’t tell you how many times I’ve seen a customer who has said, “Well, we've run out and bought this ESB and now we’re trying to figure out how to use it.” I've said, “Whoa! You first should have figured out you needed it, and in what ways you would use it that would cause you to then buy it.”

We can’t build stuff, throw it over the wall into the production system to see if it works, and then have a BAM-type tool tell us -- once it gets into the statistics -- "By the way, they’re not actually catching orders. You’re not actually updating inventory or your account. Your customer accounts aren’t actually seeing an increase in their credit balance when orders are being placed."

That’s why we’ll start with the best practices, even though we’re not a large services firm. Then, we’ll come in with product, as we see the approach get defined. ... When you’re going down this kind of path, you’re going down a path to interconnect your systems in this same kind of ways. Call it service orientation or call it a large integration effort, either way, the outcome from a system’s point of view is the same.

What they’re doing is adopting these best practices on a team level so that each of these individual components is getting their own tests and validation. That helps them establish some visibility and predictability. It’s just good, old-fashioned automated test coverage at the component level. ... So this is why, as a part of lifecycles, we have to do this kind of activity. In doing so, we drive into value, we get something for having done our work.

Monday, September 15, 2008

Desktone, Wyse bring Flash content to desktop virtualization delivery

Desktone hopes to overcome two major roadblocks to the adoption of virtual desktop infrastructure (VDI) with today's announcement of a partnership that will bring rich media to virtual desktops and a try-before-you-buy program.

In a bid to bring multimedia support to thin clients, Desktone of Chelmsford, Mass., and Wyse Technology, of San Jose, Calif., announced at VMworld in Las Vegas that they are integrating Desktone dtFlash with Wyse TCX Multimedia, allowing companies to deliver Flash in a virtual desktop environment to thin client devices.

Adobe's Flash technology is becoming more widespread among enterprises and consumers alike, for video and rich Internet application interfaces. A lack of Flash support on thin clients, and for application and desktop delivery via VDI, has potentially delayed adoption of desktop virtualization.

Word has it that Citrix will also offer Flash support for its VDI offerings before the end of the year. It's essential that VDI providers knock down each and every excuse not to use virtual desktops -- they need to do everything that full-running PCs do, only from the servers. Flash is a big item to fix.

Introduced last year, Wyse TCX Multimedia delivers rich PC-quality multimedia to virtual desktop users. It works with the RDP and ICA protocols that connect the virtual machines on the server to the client, accelerating and balancing workload to display rich multimedia on the client, often offloading the task from the server entirely.

Desktone dtFlash, introduced today, resides in the host virtual machine and acts as the interface between the Flash player and Wyse TCX. Together they allow users to run wide-ranging multimedia applications, including Flash, on their virtual desktops.

Another roadblock to virtualization is that many companies are hesitant to move to VDI because it requires substantial commitment of resources, and the companies are unsure of the benefits. To overcome this hesitancy, Desktone also announced a Desktop as a Service (DaaS) Pilot that will allow companies to explore the benefits of virtualization without having to build the environment themselves.

With pricing for the pilot starting at $7,500, enterprises use their own images and applications in a proof-of-concept that includes up to 50 virtual desktops. Desktone uses its proven DaaS best practices to jump-start the pilot, enabling customers to quickly ramp up. The physical infrastructure for this 30-day pilot is hosted by HP Flexible Computing Services, one of Desktone’s service provider partners.

This news joins last week's moves by SIMtone to bring more virtualization services to cloud providers. Citrix today also has some big news on moving its solutions to a cloud provider model.