
Monday, August 24, 2009

IT and log search as SaaS gain operators fast, affordable, and deep access to system behaviors

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Paglo.

Automatically discover your IT data and make it accessible and useful. Get started for free.

Complexity of data centers escalates. Managed service providers face daunting performance obligations. And the budget to support the operations of these critical endeavors suffers downward pressure.

In this podcast, we explore how IT search and systems log management as a service provides low-cost IT analytics that harness complexity to improve performance at radically reduced costs. We'll examine how network management, systems analytics, and log search come together, so that IT operators can gain easy access to identify and fix problems deep inside complex distributed environments.

Here to help better understand how systems log management and search work together are Dr. Chris Waters, co-founder and chief technology officer at Paglo, and Jignesh Ruparel, system engineer at Infobond, a value-added reseller (VAR). The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:
Waters: [Today] there’s just more information flowing, and more information about the IT environment. Search is a great technology for quickly drilling through a lot of noise to get to the exact piece of data that you want, as more and more data flows at you as an IT professional.

One of the other challenges is the distribution of these applications across increasingly distributed companies and applications that are now running out of remote data centers and out of the cloud as well.

When you're trying to monitor applications outside of a data center, you can no longer use software systems that you have installed on your local premises. You have to have something that can reach into that data center. That’s where being able to deliver your IT solution as software-as-a-service (SaaS) or a cloud-based application itself is really important.

You've got this heterogeneity in your IT environments, where you want to bring together solutions from traditional software vendors like Microsoft and from cloud providers like Amazon, whose EC2 lets you run things out of the cloud, along with software from open-source providers.

All of the software in these systems and this hardware is generating completely disparate types of information. Being able to pull all that together and use an engine that can suck up all that data in there and help you quickly get to answers is really the only way to be able to have a single system that gives you visibility across every aspect of your IT environment.

And "inventory" here means not just the computers connected to the network, but the structure of the network itself -- the users, the groups that they belong to, and, of course, all of the software and systems that are running on all those machines.

Search allows us to take information from every aspect of IT, from the log files that you have mentioned, but also from information about the structure of the network, the operation of the machines on the network, information about all the users, and every aspect of IT.

We put that into a search index, and then use a familiar paradigm, just as you'd search with Google. You can search in Paglo to find information about the particular error messages, or information about particular machines, or find which machines have certain software installed on them.
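To make the search paradigm concrete, here is a toy sketch of the inverted-index technique that underpins this kind of IT search. It is illustrative only -- the class, tokenizer, and sample records are hypothetical, not Paglo's actual implementation or query syntax.

```python
# Toy inverted index over log lines and inventory records.
# Illustrative sketch only; not Paglo's implementation.
import re
from collections import defaultdict

class LogIndex:
    def __init__(self):
        self.index = defaultdict(set)   # term -> ids of matching records
        self.docs = {}                  # id -> original record text

    @staticmethod
    def _terms(text):
        # Lowercase and split on anything that isn't a word-like character.
        return re.findall(r"[a-z0-9_.-]+", text.lower())

    def add(self, doc_id, record):
        # Index a log line or inventory record by its terms.
        self.docs[doc_id] = record
        for term in self._terms(record):
            self.index[term].add(doc_id)

    def search(self, query):
        # AND-search: return records containing every query term.
        terms = self._terms(query)
        if not terms:
            return []
        hits = set.intersection(*(self.index.get(t, set()) for t in terms))
        return [self.docs[d] for d in sorted(hits)]

idx = LogIndex()
idx.add(1, "host=web01 error disk full on /var")
idx.add(2, "host=db01 installed postgresql 8.4")
idx.add(3, "host=web01 installed nginx 0.7")

print(idx.search("web01 installed"))   # -> the nginx record
print(idx.search("error disk"))        # -> the disk-full record
```

The same index answers both "which machine logged this error?" and "which machines have this software installed?" -- which is the point of putting everything into one search engine.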

We deliver the solution as a SaaS offering. This means that you get to take advantage of our expertise in running our software on our service, and you get to leverage the power of our data centers for the storage and constant monitoring of the IT system itself.

The [open source] Paglo Crawler is a small piece of software that you download and install onto one server in your network. From that one server, the Paglo Crawler then discovers the structure of the rest of the network and all the other computers connected to that network. It logs onto those computers and gathers rich information about the software and operating environment.

That information is then securely sent to the Paglo data center, where it's indexed and stored on the search index. You can then log in to the Paglo service with your Web browser from anywhere in your office, from your iPhone, or from your home and gain visibility into what's happening in real time in the IT environment.
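The crawler pattern Waters describes -- one agent inside the network discovers hosts, gathers facts using locally held credentials, and ships only the results to a hosted index -- can be sketched roughly as follows. The endpoint URL and the gather_inventory() placeholder are hypothetical assumptions; this is not the Paglo Crawler's code.

```python
# Hypothetical sketch of the single-agent crawler pattern: discover,
# gather locally, ship results (never credentials) to a hosted service.
import ipaddress, json, socket, urllib.request

SERVICE_URL = "https://example-indexer.invalid/api/inventory"  # placeholder

def discover_hosts(cidr="192.168.1.0/28", port=22, timeout=0.2):
    """Return addresses in the subnet that answer on a management port."""
    live = []
    for addr in ipaddress.ip_network(cidr).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((str(addr), port)) == 0:
                live.append(str(addr))
    return live

def gather_inventory(host):
    """Placeholder: log in with locally held credentials, collect facts."""
    return {"host": host, "os": "unknown", "software": []}

def ship(records):
    """Send gathered facts (not credentials) to the indexing service."""
    body = json.dumps(records).encode()
    req = urllib.request.Request(SERVICE_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # HTTPS keeps the payload encrypted in transit

if __name__ == "__main__":
    ship([gather_inventory(h) for h in discover_hosts()])
```

Note the design point this illustrates: credentials stay on the agent inside the network, and only the gathered inventory crosses the firewall, outbound.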

This allows people who are responsible for networks, servers, and workstations to focus on their expertise, which is not maintaining the IT management system, but maintaining those networks, servers, and workstations.

The Crawler needs some access to what’s going on in the network, but any credentials that you provide to the Crawler to log in never leave the network itself. That’s why we have a piece of software that sits inside the network. So, there are no special firewall holes that need to be opened and no security compromises to make.

There is another aspect, which is very counterintuitive, and that people don't expect when they think about SaaS. Here at Paglo, we are focused on one thing, which is securely and reliably operating the Paglo service. So, the expertise that we put into those two things is much more focused than you would expect within an IT department, where you are focused on solving many, many different challenges.

Ruparel: For 15 years, we [at Infobond] have been primarily a break-fix organization, moving into managed services, monitoring services. We needed visibility into the networks of the customers we service. For that we needed a tool that would be compatible with the various protocols that are out there to manage the networks -- namely SNMP, WMI, Syslog. We needed to have all of them go into a tool and be able to quickly search for various things.
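Of the protocols Ruparel names, syslog is the simplest to picture. A minimal collector, assuming plain UDP transport on an unprivileged port, might look like this sketch (real collectors also ingest SNMP traps and WMI events):

```python
# Minimal syslog collector sketch, assuming UDP transport (RFC 3164 style).
# Shows only the syslog leg of the aggregation described above.
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data, _ = self.request                 # raw datagram and socket
        msg = data.decode("utf-8", errors="replace").strip()
        # In a real pipeline this line would be parsed, indexed, and stored.
        print(f"{self.client_address[0]}: {msg}")

if __name__ == "__main__":
    # Port 514 needs root; 5140 is a common unprivileged alternative.
    with socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler) as srv:
        srv.serve_forever()
```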

We found that the technology that Paglo is using is very, very advanced. They aggregate the information and make it very easy for you to search.

You can very quickly create customized dashboards and customized reports based on that data for the end customer, thus providing more of a personal and customized approach to the monitoring for the customers.

Some of the dashboards are a common denominator for various sorts of customers. An example would be a Microsoft Exchange dashboard. Customers love having a dashboard like that up on the screen.

These are some things that are a common denominator for almost all customers that are moving forward with technology -- implementing new technologies such as VMware, the latest Exchange versions, Linux environments for development, and Windows for their end users.

The number of pieces of software and the number of technologies that IT implements is far more than it used to be, and it’s going to get more and more complex as time progresses. With that, you need something like Paglo, where it pulls all the information in one place, and then you can create customized uses for the end customers.

If I were to set things up without Paglo, it would require me to place a server at the customer site. We would have to worry about not only maintenance of the hardware, but maintenance of the software at the customer site as well, and we would have to make all of that effort ourselves.

We would then have to make sure that the systems those servers communicate with are also maintained and steady 24/7. We would need multiple data centers for support, so that in case one data center dies, another takes over. As an MSP, we would have to carry all of that infrastructure cost.

At the end of the day, I look at it very simply as collecting information in one place, and then being able to extract that easily for various situations and environments.

Now, if you were to look at it from a customer's perspective, it's the same situation. You have a piece of software that you install on a server. You would probably need a person dedicated for approximately two to three months to get the information into the system and presentable to the point where it's useful. With Paglo, I can do that within four hours.

Waters: We have a lot of users who are from small and medium-sized businesses. We also see departments within some very large enterprises, as well, using Paglo, and often that's for managing not just on-premise equipment, but also managing equipment out of their own data centers.

Paglo is ideal for managing data-center environments, because, in that case, the IT people and the hardware are already remote from each other. So, the benefits of SaaS are double there. We also see a lot of MSPs and IT consultants who use Paglo to deliver their own service to their users.

Ruparel: As far as cost is concerned, right now Paglo charges $1.00 a device. That is unheard of in the industry right now. The cheapest I have gotten from other vendors -- where you would install a big piece of hardware and the software that goes along with it -- is approximately $4-5 per device, and that doesn't deliver a central source of information that is accessible from anywhere.

First, infrastructure-wise, we save a ton of money. Second, manpower-wise, in the number of hours I have to have engineers working on it, we save tons of time. Third, after all of that, what I pay to Paglo is still a lot less than it would cost me to do it myself.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Paglo.

Automatically discover your IT data and make it accessible and useful. Get started for free.

Thursday, July 9, 2009

Paglo SaaS offering provides means to harness untamed collection of log and IT resources data

Paglo, the IT management software-as-a-service (SaaS) company, recently announced a new low-cost service that allows companies to tackle the Herculean task of trying to winnow out a rapidly growing mountain of log data.

With log data piling up in terabyte leaps and increasing regulatory pressure to maintain that data for several years, companies now find themselves in danger of being swamped with information about operational events and the daunting challenge of making sense of it. [Disclosure: Paglo is a sponsor of BriefingsDirect podcasts.]

Paglo, Menlo Park, Calif., has upgraded its SaaS log management application, Paglo Logs, for IT professionals to automatically capture and store their logs and instantly search and analyze them. The expanded service provides a powerful Google-like search capability to enable rapid discovery of key operational events, a platform for meeting compliance requirements, and a way to accelerate the investigation of security incidents.

I was impressed with Paglo when they first came out, and the additional services -- now extending to capture and search of expansive sets of IT assets and other metadata on their performance -- make it a powerful tool for the cloud era.

How can you be responsible for performance on systems that cross company or provider boundaries? With SaaS offerings like Paglo, you can set up log gathering and search across all the systems that support a business process, regardless of their sourcing. Very cool.

Delivered on demand with a "zero footprint" architecture, the Paglo Logs service collects rich systems data from all networked devices and requires no additional software or appliances to use. Paglo Logs allows users to:
  • Accelerate problem resolution by going directly from the logged events to the underlying infrastructure, to view health and performance data or to access a particular machine.

  • Meet the Payment Card Industry (PCI) Data Security Standard (DSS) by tracking all devices, software and configurations, monitoring wireless access, and securing central log collection.

  • Provide both developers and operations the ability to troubleshoot application issues and understand user behavior without logging into the production servers.

  • Improve their security profile and incident response by immediately receiving alerts and using saved searches and dashboards (see the sketch after this list).
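The saved-search alerting in that last item can be pictured with a small sketch: re-run a stored query against recent log lines and fire when matches cross a threshold. The search name, pattern, and threshold here are hypothetical.

```python
# Illustrative saved-search alerting sketch; names and thresholds are
# hypothetical, not Paglo's actual alerting configuration.
import re

SAVED_SEARCHES = [
    {"name": "failed-ssh-logins",
     "pattern": re.compile(r"sshd.*Failed password"),
     "threshold": 5},
]

def evaluate(saved, lines):
    """Return an alert message if the saved search fires, else None."""
    hits = [ln for ln in lines if saved["pattern"].search(ln)]
    if len(hits) >= saved["threshold"]:
        return f"ALERT {saved['name']}: {len(hits)} matches"
    return None

recent = ["Oct 3 12:00:01 web01 sshd[421]: Failed password for root"] * 6
for s in SAVED_SEARCHES:
    msg = evaluate(s, recent)
    if msg:
        print(msg)
```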
To maintain security, each business using Paglo has its own search index that keeps the log and network information separate and private from other subscribers. Setup requires no appliances or on-site dedicated servers.

As I said in the Paglo release on the news, "Companies need to harness and analyze the information explosion coming from all of their computer, server, network and log data. It's a very productive way to improve operating efficiencies, gain a clear understanding of true IT costs, and to meet compliance requirements. As an on-demand service, Paglo helps drop the complexity barriers to quick and effective log search and analytics."

The services come in three flavors: Paglo IT, a more complete offering; Paglo MSP, targeted at managed service providers; and Paglo Logs, for the full search and visualization services (and with a free introductory offer). The services are designed to appeal to security professionals, IT administrators, and developers of on-demand applications and services.

The new Log Management service is available immediately and accounts can be created directly online. A free trial is available at https://app.paglo.com/signup?product=logs. Paid plans start at an aggressive $99 per month.

Tuesday, February 17, 2009

LogLogic delivers integrated suite for securely managing enterprise-wide log data

Companies faced with a tsunami of regulations and compliance requirements could soon find themselves drowning in a sea of log data from their IT systems. LogLogic, the log management provider, today threw these companies a lifeline with a suite of products that form an integrated solution for dealing with audits, compliance, and threats.

The San Jose, Calif. company announced the current and upcoming availability of LogLogic Compliance Manager, LogLogic Security Event Manager, and LogLogic Database Security Manager. [Disclosure: LogLogic is a sponsor of BriefingsDirect podcasts.]

A typical data center nowadays generates more than a terabyte of log data per day, according to LogLogic. With requirements to archive this data for seven years, a printed version could stretch to the moon and back 10 times. LogLogic's new offerings are designed to aid companies in collecting, storing, and analyzing this growing trove of systems operational data.

Compliance Manager helps automate compliance-approval workflows and review tracking, translating "compliance speak" into more plain language. It also maps compliance reports to specific regulatory control objectives, helps automate the business process associated with compliance review and provides a dashboard overview with an at-a-glance scorecard of an organization's current position.

Security Event Manager, powered by LogLogic partner Exaprotect, performs complex event correlation, threat detection, and security incident management workflow, either across a department or the entire enterprise.

LogLogic's partner Exaprotect, Mountain View, Calif., is a provider of enterprise security management for organizations with large-scale, heterogeneous infrastructures.

The LogLogic combined solution analyzes thousands of events in near real time from security devices, operating systems, databases, and applications and can uncover and prioritize mission-critical security events.
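Event correlation of this kind boils down to grouping related events from different sources inside a time window. Here is a hedged sketch of one such rule -- the window, threshold, and field names are illustrative, not LogLogic's actual rule engine:

```python
# Illustrative correlation rule: flag a host when several auth failures
# from at least two distinct sources land inside one sliding window.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(seconds=60)
THRESHOLD = 3

def correlate(events):
    """events: (timestamp, source, host, type) tuples, time-ordered."""
    by_host = defaultdict(list)
    alerts = []
    for ts, source, host, etype in events:
        if etype != "auth_failure":
            continue
        bucket = by_host[host]
        bucket.append((ts, source))
        # Keep only events inside the sliding window.
        bucket[:] = [(t, s) for t, s in bucket if ts - t <= WINDOW]
        # Require corroboration from at least two distinct sources.
        if len(bucket) >= THRESHOLD and len({s for _, s in bucket}) >= 2:
            alerts.append((ts, host))
    return alerts

t0 = datetime(2009, 2, 17, 9, 0, 0)
feed = [(t0, "firewall", "db01", "auth_failure"),
        (t0 + timedelta(seconds=10), "os", "db01", "auth_failure"),
        (t0 + timedelta(seconds=20), "database", "db01", "auth_failure")]
print(correlate(feed))   # -> one alert for db01
```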

Database Security Manager monitors privileged-user activities and protected data stored within database systems. With granular, policy-based detection, integrated prevention, and real-time virtual patch capabilities, security analysts can independently monitor privileged users and enforce segregation of duties without impacting database performance.
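A privileged-user check of the sort just described can be approximated after the fact from database audit logs. The sketch below is illustrative only -- the user list, table names, and log format are assumptions, not Database Security Manager's mechanics:

```python
# Illustrative policy check on privileged database activity: flag DBA
# statements that touch protected tables (segregation-of-duties review).
PRIVILEGED_USERS = {"dba_admin", "sys"}
PROTECTED_TABLES = {"cardholder_data", "salaries"}

def violations(audit_rows):
    """audit_rows: dicts parsed from a database audit log."""
    out = []
    for row in audit_rows:
        stmt = row["statement"].lower()
        touched = {t for t in PROTECTED_TABLES if t in stmt}
        if row["user"] in PRIVILEGED_USERS and touched:
            out.append((row["user"], sorted(touched), row["statement"]))
    return out

rows = [{"user": "dba_admin", "statement": "SELECT * FROM cardholder_data"},
        {"user": "app_user",  "statement": "UPDATE orders SET state='paid'"}]
for v in violations(rows):
    print("segregation-of-duties violation:", v)
```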

Because of the integrated nature of the products, information can be shared across the log management system. For example, database security events can be sent to Compliance Manager for review or to the Security Event Manager for prioritization and escalation.

What intrigues me about log data management is the increased role it will play in governance of services, workflow and business processes -- both inside and outside of an organization's boundaries. Precious few resources exist to correlate the behavior of business services with underlying systems.

The more log data is made available to players across a distributed business process, the easier it is to detect faults and provide root-cause analysis. The governance benefit works as a two-way street, too. As SLAs and other higher-order governance capabilities point to a need for infrastructure adjustments, the log data trail offers insight and verification.

In short, managed log data is an essential ingredient of any services lifecycle management and governance capability. The lifecycle approach becomes more critical as cloud computing, virtualization, SOA, and CEP grow more common and important.

Lastly, thanks to such technologies as MapReduce, the ability to scour huge quantities of systems log data, fast and furious, with "BI for IT" depth benefits -- at a managed cost -- becomes attainable. I expect to see these "BI for IT" benefits applied to more problems of complexity and governance over the coming years. The cost-benefit analysis is a no-brainer.
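For readers who want the MapReduce idea made concrete, here is a toy single-machine pass over log lines -- map each line to a (severity, 1) pair, then reduce to counts. Real systems shard the map phase across many machines, but the shape is the same; the log format here is made up for the example.

```python
# Toy MapReduce-style aggregation over log lines: count by severity.
from functools import reduce
from collections import Counter

LINES = [
    "2008-12-15 10:01:02 ERROR disk full on web01",
    "2008-12-15 10:01:03 WARN high latency on db01",
    "2008-12-15 10:01:04 ERROR disk full on web02",
]

def map_phase(line):
    severity = line.split()[2]          # third field in this toy format
    return [(severity, 1)]

def reduce_phase(acc, pair):
    key, count = pair
    acc[key] += count
    return acc

mapped = [pair for line in LINES for pair in map_phase(line)]
totals = reduce(reduce_phase, mapped, Counter())
print(totals)   # Counter({'ERROR': 2, 'WARN': 1})
```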

Security Event Manager is available immediately. Compliance Manager is available to early adopters immediately and will be generally available in March. Database Security Manager will be available in the second quarter of this year.

More information on the new products is available in LogLogic's screencasts at http://www.loglogic.com/logpower.

Monday, December 15, 2008

IT systems analytics become more crucial as cloud and SaaS adoption raises complexity bar

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. More related podcasts. Sponsor: LogLogic.

Read a full transcript of the discussion.

Software-as-a-service (SaaS) and cloud computing are changing the nature of IT systems' performance requirements and heightening end users' expectations of online applications and services.

Increasingly, an extended level of visibility, management, and performance will apply to those serving up applications as services, regardless of their hosting origins or models. The more the apps and services fulfill a need, the more the users will expect even better results and performance.

In other words, the more these organizations succeed, the more they need to scale, leverage virtualization and cloud infrastructure methods, embark on service-oriented architecture (SOA), and then keep all the trains running fast and on time. Using the latest tools and analytics -- the equivalent of business intelligence (BI) for IT -- on the systems and across the gathering complexity becomes essential.

To learn more about how systems log tools and analysis are aiding providers of cloud and SaaS, I recently spoke with fellow blogger Phil Wainewright, an independent analyst and director at Procullux Ventures, and SaaS blogger at ZDNet and ebizQ, as well as with Jian Zhen, senior director of product management at LogLogic.

Here are some excerpts:
One thing that's happening is that the SaaS infrastructure is getting more complicated, because more choice is emerging. In the past people might have gone to one or two SaaS vendors in very isolated environments or isolated use cases. What we're now finding is that people are aggregating different SaaS services. ... We're actually looking at different layers of not just SaaS, but also platform as a service (PaaS), which are customizable applications, rather than the more packaged applications that we saw in the first generation of SaaS. We're seeing more utility and cloud platforms and a whole range of options in between.

That means people are really using different resources and having to keep tabs on all those different resources. Where in the past all of an IT organization's resources were under its own control, they now have to operate in this more open environment, where trust and visibility as to what's going on are major factors.

If you're going to take advantage of SaaS properly, then you need to move to more of a SOA internally. That makes it easier to start to aggregate or integrate these different mashups, these different services. At the end of the day, the end users aren't going to be bothered whether the application is delivered from the enhanced data center or from a third-party provider outside the firewall, as long as it works and gives them the business results they're looking for.

You have to worry not only about who is accessing the information within your company firewall, but now you have all this data that's sitting outside of the firewall in another environment. That could be a PaaS, as Phil said, or it could be a SaaS application that's sitting out there. How do you control that access? How do you monitor that access? That's one of the key issues that IT has to worry about.

Obviously, there are data governance issues and activity monitoring issues. Now, from a performance and operational perspective, you have to worry about whether your systems are performing, and whether these applications, platforms, or utilities you are renting are performing to your spec. How do I ensure that the service providers can give me the SLAs that I need?

... What SaaS providers have been learning is that they need to get better at giving more information to their customers about what is going wrong when the service is not up or the service is not performing as expected. The SaaS industry is still learning about that. So, there is that element on that side.

On the IT side, the IT people have spent too much time worrying about reasons why they didn't want to deal with SaaS or cloud providers. They've been dealing with issues like what if it does go down, or how can I trust the security? Yes, it does go down sometimes, but it's up 99.7 percent or 99.9 percent of the time, which is better than most organizations can afford to do with their own services.

Let's shift the emphasis from, "It's broken, so I won't use it," to a more mature attitude, which says, "It will be up most of the time, but when it does break, how do I make sure that I remain accountable, as the IT manager, the IT director, or the CIO? How do I remain accountable for those services to my organization, and how do I make sure that I can pinpoint the cause of the problem and get it rectified as quickly as possible?"

One of the great quotes that we recently got from a customer is, "You can outsource responsibility, but not accountability." So, it fits right into what Phil was saying about being accountable and about your own environment.

The requirement to comply with government regulations and industry mandates really doesn't change all that much, just because of SaaS or because a company is going into the cloud. What it means is that the end users are still responsible for complying with Sarbanes-Oxley (SOX), payment card industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), and other regulations. It also means that these customers will expect the same type of reports that they get out of their own systems.

BI for IT, or IT intelligence, as I have used the term before, is really about getting more information out of the IT infrastructure; whether it's internal IT infrastructure or external IT infrastructure, such as the cloud.

Traditionally, administrators have always used logs as one of the tools to help them analyze and understand the infrastructure, both from a security and operational perspective. For example, one of the recent reports from Price Waterhouse, I believe, says that the number one method for identifying security incidents and operational problems is through logs.

We can provide them that information, both from an internal and external perspective. We work with a lot of service providers, as you know, companies like SAVVIS, VeriSign, Verizon Business Services, to provide the tools for them to analyze service provider infrastructures as well.

A lot of that information can be gathered into a central location, correlated, and presented as business intelligence or business activity monitoring for the IT infrastructure.

Increasingly, it comes back to IT accountability. If your service provider does go down, and if the logs show that the performance was degrading gradually over a period of time, then you should have known that. You should have been doing the analysis over time, so that you were ahead of that curve and were able to challenge the provider before the system went down.
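Staying ahead of that curve can be as simple as fitting a trend line to response times pulled from the logs. A minimal sketch, assuming Python 3.10+ and an illustrative alert threshold:

```python
# Spot gradual degradation before an outage: fit a linear trend to
# response times extracted from logs and flag a rising slope.
from statistics import linear_regression  # Python 3.10+

# (hour, average response time in ms) sampled from a provider's logs
samples = [(0, 110), (1, 118), (2, 131), (3, 145), (4, 160), (5, 178)]
hours = [h for h, _ in samples]
latency = [ms for _, ms in samples]

slope, intercept = linear_regression(hours, latency)
if slope > 5:   # ms of added latency per hour; tune to the SLA at hand
    print(f"degrading: +{slope:.1f} ms/hour, challenge the provider now")
```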

If it's a good provider, which comes back to the question you asked, then the provider should be on top of that before the customer finds out. Increasingly, we'll see the quality of reporting that providers are doing to customers go up dramatically. The best providers will understand that the more visibility and transparency they provide the customers about the quality of service they are delivering, the more confidence and trust their customers will have in that service.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. More related podcasts. Sponsor: LogLogic.

Thursday, November 6, 2008

ITIL requires better log management and analytics to gain IT operational efficiency, accountability

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Read complete transcript of the discussion.

Implementing best practices from the Information Technology Infrastructure Library (ITIL) has become increasingly popular in IT departments. As managers improve IT operations with an eye to process efficiency, however, they need to gain operational accountability through visibility and analytics into how systems and networks are behaving.

Innovative use of systems log management and analytics -- in the context of entire IT infrastructures -- produces an audit and performance data trail that helps both implement and refine such models as ITIL. Compliance is also a mounting requirement, one that can be addressed through verification tools such as systems monitoring and analytics in the context of ITIL best practices.

To learn more about how systems log tools and analysis are aiding organizations as they adopt ITIL, I recently spoke with Sean McClean, principal at consultancy KatalystNow, and Sudha Iyer, director of product management at LogLogic.

Here are some excerpts:
IT, as a business, a practice, or an industry is relatively new. The ITIL framework has been one that's always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT. ... We are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business.

Because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks of what we all agree upon is the best way to handle things. ... When people look at ITIL, organizations assume that it’s something you can simply purchase and plug into your organization. It doesn't quite work that way.

ITIL generally provides guidance -- best practices -- for service delivery, incident management, or what have you. Then there are the sets of policies that go with these guidelines. What organizations can do is set up their data retention policy, firewall access policy, or any other policy.

But, how do they really know whether these policies are being actually enforced and/or violated, or what is the gap? How do they constantly improve upon their security posture? That's where it's important to collect activity in your enterprise on what's going on.

Our log-management platform ... allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information security professional can look deep into what's actually going on -- their storage capacity and planning for the future, how many more firewalls are required, or the usage pattern of a particular server in the organization.
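Answering the enforcement question above usually means comparing a stated policy against what the collected logs say actually happened. A rough sketch, with a hypothetical firewall log format and policy:

```python
# Illustrative policy-gap check: find traffic the policy forbids that
# the firewall logs show was actually allowed. Format is hypothetical.
POLICY_DENIED_PORTS = {23, 135, 445}   # ports the policy says to block

def policy_gaps(firewall_log_lines):
    """Yield lines where forbidden traffic was actually allowed."""
    for line in firewall_log_lines:
        fields = dict(kv.split("=") for kv in line.split())
        if (fields.get("action") == "allow"
                and int(fields.get("dport", 0)) in POLICY_DENIED_PORTS):
            yield line

log = ["action=allow src=10.0.0.5 dport=445",
       "action=deny src=10.0.0.9 dport=23"]
for gap in policy_gaps(log):
    print("policy violated:", gap)
```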

All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in. ... Our log management solution allows [enterprises] to create better control and visibility into what is actually going on in their network and their systems. From many angles, whether it's a security professional or an auditor, they're all looking at whether you know what's going on.

You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing, “Okay, here is so much disk space that is unutilized." Or, "it's the end of the quarter, we need to bring in more virtualization of these servers to get our accounting to close on time."

[As] the industry matures, I think we will see ... people looking and talking more about, “How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And, how do I quantify a business with regard to adhering to that framework?”

There has been a little bit of that and certainly we have ITIL certification processes in all of those, but I think we are going to see more drive to understand that and to formalize that in upcoming years.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Saturday, September 20, 2008

LogLogic updates search and analysis tools for conquering IT systems management complexity

Insight into operations has been a hallmark of modern business improvements, from integrated back-office applications to business intelligence (BI) to balanced scorecards and management portals.

But what do IT executives have to give them similar insight into the systems operations that support the business? Well, they have reams of disparate logs and systems analytics data pouring forth every second from all their network and infrastructure devices. Making sense of that data and leveraging the analytics to reduce the risk of failure therefore becomes the equivalent of BI for IT.

Now a major BI for IT provider, LogLogic, has beefed up its flagship products with the announcement of LogLogic 4.6. Putting more data together in ways that can be quickly acted on helps companies gain critical visibility into their increasingly complex IT operations, while easing regulatory compliance and improving security. [Disclosure: LogLogic is a sponsor of BriefingsDirect podcasts.]

The latest version of the log management tools from San Jose, Calif.-based LogLogic includes new features that help give enterprises a 360-degree view of how business operations are running, including dynamic range selection, graphical trending, and real-time reporting. Among the improvements are:
  • Index search user interface, including clustering by source, dynamic range selection, trending over time and graphical representation of search results
  • Search history, which automatically saves search criteria for later reuse
  • Forensics clipboard to annotate, organize, record and save up to 1000 messages per clipboard – up to 100 clipboards per user
  • Enhanced security via complex password creation
  • Enhanced backup/restore and failover, including incremental backup support and "backup now" capability.
The latest release provides improved search for IT intelligence, forensics workflow and advanced secure remote access control. LogLogic 4.6 will be rolled out for the company's family of LX, ST, and MX products, helping large- and mid-sized companies to capture, search and store their log data to improve business operations, monitor user activity, and meet industry standards for security and compliance.

I have talked extensively to the folks at LogLogic about the log-centered approach to dealing with IT's growing complexity, as systems and services multiply and are spurred on by the virtualization wildfire. Last week I posted a podcast, in which LogLogic CEO Pat Sueltz explained how log-management aids in visibility and creates a favorable return on investment (ROI) for enterprises.

LogLogic 4.6 will be available later this month as a free upgrade to current customers under Support contract. For new customers, pricing will start at $14,995 for the LX appliance, $53,995 for the ST appliance and $37,500 for the MX appliance.