Thursday, September 11, 2008

Systems log analytics offers operators performance insights that set the stage for IT transformation

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: LogLogic.

Read a full transcript of the discussion.

Despite growing complexity, IT organizations need to reduce operations costs, increase security and provide more insight, clarity, and transparency across multiple IT systems -- even virtualized systems. A number of new tools and approaches are available for gaining contextual information and visibility into what goes on within IT infrastructure.

IT systems information gushes forth from an increasing variety of devices, as well as networks, databases, and lots of physical and virtual servers and blades. Putting this information all in one place, to be analyzed and exploited, far outweighs manual, often paper-based examination. The automated log forensics solutions that capture all the available systems information and aggregate and centralize that information are becoming essential to efficient IT management.

To learn more about systems log analytics, I recently moderated a sponsored BriefingsDirect panel discussion podcast with Pat Sueltz, the CEO at LogLogic; Jian Zhen, senior director of product management at LogLogic; and Pete Boergermann, technical support manager at Citizens & Northern Bank.

Here are some excerpts:
When I think of the state of the art in terms of reducing IT costs, I look for solutions that can solve multiple problems at one time. One of the reasons I find this interesting is that, first of all, you've got to be focused not just on IT operations, but also on the adjunct operations the firm offers.

For example, security operations and controls, because of their focus areas, frequently look like they are in different organizations, but in fact, they draw from the same data. The same goes as you start looking at things like compliance or regulatory pieces.

When technologies get started, they tend to start in a disaggregated way, but as technology -- and certainly data centers -- have matured, you see that you have to be able to not only address the decentralization, but you have to be able to bring it all together in one point ... [This] undergirds the need for a product or solution to be able to work in both environments, in the standalone environment, and also in the consolidated environment.

There are a lot of logs and server systems sitting out in the various locations. One of the biggest issues is being able to have a solution to capture all that information and aggregate and centralize all that information. ... Approximately 30 percent of the data in the data centers is just log data, information that's being spewed out by our devices, applications, and servers.
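To make the capture-and-centralize idea concrete, here is a minimal sketch (the source names and messages are hypothetical) of merging log entries from several already time-ordered sources into one chronological, centralized stream:

```python
import heapq

def read_source(name, lines):
    # Tag each (timestamp, message) pair with its originating source.
    for ts, msg in lines:
        yield ts, name, msg

def centralize(*sources):
    # Merge already time-ordered streams into one chronological log.
    return list(heapq.merge(*sources))

firewall = read_source("firewall", [(1, "deny tcp/445"), (9, "deny udp/53")])
appsrv = read_source("appserver", [(4, "login ok user=alice")])

# Each entry: (timestamp, source, message), oldest first.
merged = centralize(firewall, appsrv)
```

Real collectors obviously deal with transport, buffering, and clock skew; the point is simply that one merged, tagged stream replaces logging into each device separately.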

We have the Log Data Warehouse that basically can suck information from networks, databases, systems, users, or applications, you name it. Anything that can produce a log, we can get started with, and then store it forever, if a customer desires, either because of regulatory requirements or because of compliance issues with industry mandates and such.
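Getting started with "anything that can produce a log" usually means normalizing free-form lines into structured records before storage. A minimal sketch, assuming a plain BSD-syslog-style line format (the hostnames and messages here are made up):

```python
import re

# Simplified pattern for a BSD-syslog-style line, e.g.
# "Sep 11 08:15:02 host42 sshd[123]: Accepted password for alice"
SYSLOG = re.compile(
    r"(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) "
    r"(?P<app>[\w/]+)(\[(?P<pid>\d+)\])?: (?P<msg>.*)"
)

def normalize(line):
    # Turn a raw line into named fields; keep unparsable lines whole.
    m = SYSLOG.match(line)
    return m.groupdict() if m else {"msg": line}

rec = normalize("Sep 11 08:15:02 host42 sshd[123]: Accepted password for alice")
```

Once lines are structured this way, long-term retention and later querying become a storage problem rather than a parsing problem.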

[But then] how do you bring operational intelligence out and give the CIOs the picture that they need to see in order to make the right business decisions? ... People have been doing a lot of integration, taking essentially LogLogic's information, and integrating it into their portals to show a more holistic view of what's happening, combining information from system monitoring, as well as log management, and putting it into a single view, which allows them to troubleshoot things a lot faster.

We have so many pieces of network gear out there, and a lot of that gear doesn't get touched for months on end. We have no idea what's going on at the port level with some of that equipment. Are the ports acting up? Are there PCs that are not configured correctly? The time it takes to log into each one of those devices and gather that information is simply overwhelming.

Reviewing those logs is an enormous task, because there's so much data there. Looking at that information is not fun to begin with, and you really want to get to the root of the problem as quickly as possible. ... Weeding out some of the frivolous and extra information and then alerting on the information that you do want to know about is -- I just can't explain in enough words how important that is to helping us get our jobs done a lot quicker.
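The weeding-out-and-alerting workflow described above can be sketched in a few lines. The noise and alert patterns below are hypothetical examples, not anything product-specific:

```python
import re

# Hypothetical patterns: entries we never need to review, and entries
# that should be surfaced to an operator immediately.
NOISE = [re.compile(r"heartbeat"), re.compile(r"cache refresh")]
ALERTS = [re.compile(r"port .* flapping"), re.compile(r"auth failure")]

def triage(lines):
    alerts, kept = [], []
    for line in lines:
        if any(p.search(line) for p in NOISE):
            continue                      # weed out frivolous entries
        kept.append(line)
        if any(p.search(line) for p in ALERTS):
            alerts.append(line)           # flag what operators must see
    return kept, alerts

kept, alerts = triage([
    "heartbeat ok",
    "switch3: port 12 flapping",
    "cache refresh done",
    "user bob auth failure",
])
```

The payoff is exactly what the speaker describes: the reviewable set shrinks, and the few entries that matter reach someone without a manual trawl through everything else.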

I think of taking control of the information lifecycle. And, not just gathering pieces, but looking at it in terms of the flow of the way we do business and when we are running IT systems. ... You've got to know what’s known and unknown, and then be able to assess that analysis -- what's happening in real-time, what's happening historically. Then, of course, you've got to be able to apply that with what's going on and retain it. ... We've also got to be able to work with [analytics] just as the systems administrators and the IT and the CSOs want to see it.

I like to use the term "operational intelligence," because that's really intelligence for the IT operations. Bringing that front and center, and allowing CIOs to make the right decisions is extremely critical for us.

It's all about getting that improved service delivery, so that we can eliminate downtime due to, for example, misconfigured infrastructure. That's what I think of in terms of the value.

Tuesday, September 9, 2008

ActiveVOS 6.0 helps extend SOA investments to the level of business-process outcomes

Active Endpoints has propelled business-process services to the forefront with the general availability release today of ActiveVOS 6.0, an integrated solution designed to free companies and developers from complexity and fragmentation in assembling business processes.

ActiveVOS, a standards-based orchestration and business process management system, permits developers, business analysts, and architects to collaborate across IT and business boundaries through an integrated visual tool.

The latest product from the Waltham, Mass., company includes integrated process modeling; a design, testing, debugging, and deployment environment; reporting and a management console; and a tightly integrated complex event processing (CEP) engine.

CEP helps extend services-based applications, but until now, it has required users to integrate yet another server into their applications and to manage the complexity of integrating the application with the CEP engine. ActiveVOS eliminates this challenge by providing a fully capable CEP engine.

Users select which events generated by the execution engine should trigger CEP events. In addition, these selections are made at deployment time, meaning that developers can easily add or modify CEP capabilities in running applications at will.
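Conceptually (this is a generic sketch, not the ActiveVOS API or configuration format), selecting at deployment time which engine events feed the CEP engine amounts to filtering the engine's event stream against a deployed descriptor:

```python
# Hypothetical deployment descriptor: which engine event types
# become CEP events. Editing this, not the process code, changes
# CEP behavior for a running application.
DEPLOYMENT_CONFIG = {"cep_events": {"process.faulted", "task.overdue"}}

def to_cep(engine_events, config):
    # Only event types selected at deployment time reach the CEP engine.
    selected = config["cep_events"]
    return [e for e in engine_events if e["type"] in selected]

stream = [
    {"type": "process.started", "pid": 1},
    {"type": "process.faulted", "pid": 1},
    {"type": "task.overdue", "task": "approve"},
]
cep_input = to_cep(stream, DEPLOYMENT_CONFIG)
```

This illustrates why deployment-time selection matters: the filter lives in configuration, so adding or removing CEP triggers needs no rebuild of the process itself.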

Standards implemented in ActiveVOS 6.0 include Business Process Modeling Notation (BPMN), Business Process Execution Language (BPEL), and human task management via the BPEL4People and WS-HumanTask specifications.

Analysts can import models and documentation of existing applications, including Microsoft Visio drawings, directly into the graphical BPMN designer to document existing business processes and create new processes using a standards-based designer.

BPMN models are automatically transformed to executable BPEL, allowing developers to provide the implementation details necessary to turn the logical model into a running application. BPEL processes can also be transformed into BPMN, allowing the developer to document existing processes for analysts.

ActiveVOS permits developers to reuse plain old Java objects (POJOs) as native web services, and processes can be thoroughly tested and simulated, even when there are no actual services available during the testing phase. Because ActiveVOS is standards-based, it can go from design to execution without the need for custom code at execution time.

Dashboards, integrated reporting, and a universal console support the needs of operations staff and management.

Active Endpoints' latest packaging and integration, along with the emphasis on the business analyst-level process and visualization tools, strikes me as what the market is looking for at this stage of SOA and BPM.

The way they package their tools helps reduce complexity in a unique way. I'd say that they have a fuller package as a solution than what I've seen elsewhere. And the depth of ActiveVOS OEM use testifies to the technical capabilities and adherence to standards.

ActiveVOS 6.0 is available for download, and has a free, 30-day trial. Pricing is set at $12,000 per CPU socket for deployment licenses. Development licenses are priced at $5,000 per CPU socket.