Tuesday, November 13, 2012

Thomas Duryea’s journey to the cloud: Part one

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next BriefingsDirect IT leadership discussion focuses on how leading Australian IT services provider Thomas Duryea Consulting made a successful journey to cloud computing as a business.

We'll learn why a cloud-of-clouds approach is providing new types of IT services to Thomas Duryea’s many Asia-Pacific region customers.

Our discussion kicks off a three-part series on how Thomas Duryea (TD) designed, built, and commercialized a vast cloud infrastructure to provide services to their clients. The first part of our series here addresses the rationale and business opportunity for TD to create their cloud-services portfolio built on VMware.

To learn more about implementing the best cloud technology to deliver and commercialize an adaptive and reliable cloud services ecosystem, please join Adam Beavis, General Manager of Cloud Services at Thomas Duryea in Melbourne, Australia. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why cloud services for your consulting and business customers now? Have they been asking for it?

Beavis: Certainly, the customers are the big driver behind our move into cloud services. As a traditional IT integrator, we've been very successful delivering data-center solutions to our customers, but more and more we're seeing customers finding it harder to get CAPEX for new projects, and they're really starting to look at the cloud alternative.

Gardner: Why then have you looked at moving toward cloud services as a commercial offering, rather than going yourself to a public cloud and then availing yourself of their services? Why build it yourself?

Beavis: We reviewed all the possibilities and looked at moving to some of the larger cloud providers, but we've got a strong skill set, a strong heritage, and good relationships with our customers, and they forced our hand in many ways to move down that path.

They were concerned about telcos looking after some of their cloud services. They really wanted to maintain the relationship that they had with us. So we reviewed it and understood that, because of the skill sets we have and the experience in this area, it would work both commercially and relationship-wise. The best move for us was to leverage the existing relationships we have with the vendors and build out our own cloud.

Gardner: So who are these eager customers? Could you describe them? Do they fall into a particular category, like a small to medium-size business (SMB) type of clientele? Is it a vertical industry? Where is the sweet spot in the market?

No sweet spot

Beavis: That’s probably the one thing that surprised me the most. As we've been out talking to customers and selling the cloud, there really is no sweet spot. Organizations that you talk to will be doing it for different reasons. Some of them might be doing it for environmental insurance reasons, because having their data center in their building is costing them money, and there are now viable opportunities to move it out.

But if I were to identify one or two, the first one would be independent software vendors (ISVs). Cloud solutions are bringing ISVs something they've sought for a long time, and that’s the ability to run test and development environments. Once they've done that, they can host their applications out of a service provider and not have to worry about the underlying infrastructure, which is something they, as application developers, aren't interested in.

So we're seeing them, and we're working with quite a few. One, an Oracle partner, will actually run their test environments in the cloud, and then be able to deliver those services back to some of their customers. In other cases, they'll run up the development in the cloud and then import that to an on-premises cloud afterward.

The other area is with SMBs. We're certainly seeing them, for financial reasons, wanting to shift to cloud. It's the same old story of OPEX versus CAPEX, reduced budgets, and trying to do more with less.

The cloud is now in a position where it can offer that to SMB customers. So we're seeing great opportunities appear, where not only are we taking their infrastructure into the cloud, but also adding on top of that managed-service capability, where we will be managing all the way up to the application.

Gardner: Based on this mixture of different types of uses, it sounds like you're going to be able to grow your offerings right along with what this market demands. Perhaps some of those ISVs might be looking for a platform-as-a-service (PaaS) direction, others for more of a managed service, just for specific applications. Was it important for you to have that sort of Swiss Army knife for cloud advancement?

Beavis: Exactly right, Dana. Each one is addressing a different pain point. For example, some of them are coming to us for disaster recovery (DR) as a service, because the cost of renewing their DR site, or of managing and standing up that second site, is too expensive. Others, as you said, are just looking for a platform to develop applications on. So the whole PaaS concept is something near and dear to us on our roadmap.

Each one continues to evolve, and it's usually the customers that start to drive you as a cloud provider to look at your own service catalog. That’s probably something that’s quite exciting -- how quickly you need to evolve as a service provider. Because it's still quite a new area for a lot of people, customers ask for varying things, depending on what they expect the cloud to be. We're constantly evolving and looking at new offerings to add to our service catalog.

We see it as being more than just one offering. We see ourselves being able to provide it to anyone, from a small reseller to an ISV, someone who develops their own applications. Or it's someone who works specifically with applications and is no longer interested in running their own infrastructure on site or caring for it. They just want to provide that platform for their developers to be able to work hassle-free.

Gardner: So this means that you've got to come up with an infrastructure that can support many different types of uses, grow, scale, and adapt to the market. What were some of the requirements when you started looking at the vendors you were going to partner with to create this cloud offering?

Understanding customer needs

Beavis: The first thing that was important for us was, as you said, understanding our customers’ needs initially and then matching what we built to what they required. Once we had that, those things you mentioned, scale and everything, had to come into play. Also, building these things certainly doesn’t come cheap, so we had to make sure we could use the existing resources we had.

In the end, we went with the VMware product, because we have existing skill sets in that area. We knew we would have a lot of support, with their being a tier-1 vendor and us being a tier-1 partner for them. We needed someone that could provide us with support from a services, sales, and marketing perspective, and really come on the journey with us to build that cloud.

And then obviously our other vendors underneath, like EMC, who are also incredibly supportive of us and integrate very well with those products, and Cisco as well.

It had to be something that we could rapidly build. I won't say out of the box, because there's a lot that goes into building a cloud, but something that we knew had a strong roadmap and was familiar to all our customers as well.

The move to cloud is something that is new to them. It's stressful, and they're wondering how to do it. In Australia, 99 percent of customers have some sort of VMware in their data center. Being able to move to a platform that they're familiar with and have used in the past makes a big difference, rather than saying, "You're moving to cloud, and here is a whole new platform, interface, and something that you've never seen before."

The story of the hybrid cloud was something we sat down and saw had a lot of legs: the opportunity for people to stick a toe in the water and get used to being in the cloud environment. And VMware’s hybrid cloud model, connecting your on-premises environment into the public cloud, was also a big win for us. That’s a very strong go-to-market for us.

Gardner: As a systems integrator for some time, you're very familiar with the other virtualization offerings in the market. Was there anything in particular that led you away from them and more toward VMware?

Beavis: It was definitely a maturity thing. We remember when Paul Maritz got on stage four years ago and defined the cloud operating system. The whole industry followed after that. VMware has led on this path, so being a market leader certainly helped.

Needless to say, we're very good partners with some of the other providers as well. We did review them all, but it was a maturity thing and also a vision thing. The vision of the software-defined datacenter really came into play as we were building Cloud 2.0, and that was a big winner for us. The vision they now have around that is certainly something we believe in as well.

Gardner: Of course, they've announced new and important additions to their vCloud Suite, and a lot of that seems to focus on folks like yourself who need to create clouds as a business, to be able to measure, meter, bill, and manage access, privacy, and security issues. Was there anything about the vCloud Suite that attracted you in terms of being able to run the cloud as a business itself?

Product integration

Beavis: The fact that it was packaged as a suite was a big one for us. The integration of the products is now happening a lot more rapidly, and as a provider, that’s what we like to see. Even going back 12 months, the concept of needing different modules for billing and operations made it quite difficult.

In the last 12 months, with the Suite, it has come a long way. We've used the components around Chargeback, vCenter Operations Management, and Capacity Management. The concept now of software-defined security, firewalls, and networking has become very, very exciting for us: to be able to all of a sudden manage that through a single console, rather than having many different point solutions doing different things. As a service provider that’s committed to the VMware product, we find it very, very important.
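As an aside for technically minded readers, the "single console" Beavis describes also has a programmatic face. Below is a minimal, hypothetical sketch of authenticating to the vCloud Director 5.1 REST API and listing tenant organizations; the host name, org, and credentials are placeholders, and this is not TD's actual tooling.

```python
import requests

# Hypothetical endpoint and credentials -- placeholders, not TD's cloud.
VCD = "https://vcloud.example.com"
ACCEPT = {"Accept": "application/*+xml;version=5.1"}

# Log in: POST /api/sessions with Basic auth ("user@org"); vCloud Director
# returns a session token in the x-vcloud-authorization response header.
session = requests.post(f"{VCD}/api/sessions",
                        auth=("admin@ProviderOrg", "secret"),
                        headers=ACCEPT)
session.raise_for_status()
token = session.headers["x-vcloud-authorization"]

# Subsequent calls carry the token; e.g., list the organizations (tenants)
# visible to this user in the multi-tenant provider cloud.
orgs = requests.get(f"{VCD}/api/org",
                    headers={**ACCEPT, "x-vcloud-authorization": token})
print(orgs.status_code)
print(orgs.text[:300])  # OrgList XML
```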

Gardner: Margins can be a little tricky with this business. As you say, you had a lot of investment in this. How do you know when you're succeeding? Is there a benchmark that you set for yourself that would say, "We know we're doing this well when [blank]"? Or is this more of a crawl, walk, run approach to the overall cloud business?

Beavis: Obviously that comes with a lot of the back-end work we're doing. We take a lot of time getting all that right before we even go and build the cloud. It’s probably the most important part. You know your direction. You know what your forecast needs to be. You know what numbers you need to hit. We certainly have numbers and targets in mind.

That’s from a financial perspective. But customers are also coming into the cloud because, just like physical to virtual, people will come initially with a small environment and then continue to grow.

If you provide good service within your cloud, and they see that risk reduced, cost reduced, and it’s more comfortable, they will continue to move workloads into your cloud, which obviously increases your bottom line.

Initially, it’s not just, "Let’s go out and sell as much as we can to one or two customers," or whatever it might be. It’s really getting as many logos into the cloud as we can, then really working on those relationships, building up that trust, and then over time starting to migrate more and more workloads into the cloud.

Gardner: Adam, help us understand for those listening who might want to start exploring your services, when do these become available? When are you announcing them, and is there any roadmap that you might be able to tease us with a little bit about what might be coming in the future?

Beavis: We've got Cloud 1.0 running at the moment, where we provide cloud services to customers, and we're putting the automation layer into Cloud 2.0. Our backup service is available now: backup as a service, where people no longer have to worry about tapes and things on site and can just point to our data center and back up files.

DR as a service is probably our number-one-selling cloud service at the moment. People who don’t want to run those second sites can just move those workloads over into our data center, and we can manage their DR for them.

New cloud suite

But there's a big one we're talking about. We're on stage at vForum on Wednesday, Nov. 14, here in Australia, launching our new cloud suite built on VMware vCloud Director 5.1.

Then on the roadmap, the areas that are starting to pop up now are things like desktop as a service. We're exploring big data quite heavily, business intelligence as a service, and the ability for us to do something with all that data we're collecting from our customers. When we talk about IT as a service, that's lifting us up to that next level again.

As I said earlier, it's continuously changing and new ideas evolve, and that’s the great thing working with an innovative company. There are always plenty of people around driving new concepts and new ideas into the cloud business.

Gardner: This discussion kicks off a three-part series on how TD designed, built and commercialized an adaptive and reliable cloud services ecosystem. Look for the next installment in our sponsored series when we delve more deeply into the how and what behind Thomas Duryea Consulting's cloud infrastructure journey.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Wednesday, November 7, 2012

Collaboration-enhanced procurement and AP automation maximize productivity and profit gains in networked economy, says Ariba's Drew Hofler

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba.

When the bottom line needs to grow (even when the top line does not), then businesses must exploit open collaboration advances in procurement and finance to produce new types of productivity benefits, say an industry analyst and Ariba executive.

And the benefits of improved data integration and the process efficiencies of cloud computing are additionally helping companies refine their finances through tighter collaboration with all elements of their procurement and supply chain networks.

To uncover how these trends are fostering improved processes in accounts payable (AP) automation and spend management, BriefingsDirect recently sat down with Drew Hofler, Senior Solutions Marketing Manager of Financial Solutions at Ariba, an SAP company, and Vishal Patel, Research Director and Vice President of Client Services at Ardent Partners. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Today’s landscape for AP and collaboration across business is driving new processes and new approaches, and you have some new research. Tell us why you did the research now, and what you found out.

Patel: We completed this E-Payables 2012 research study in June of this year. It comprised approximately 220 AP, finance, and procurement professionals. Our intent was to get a sense of the current state of AP operations and the usage of AP solutions, to capture some of the key strategies, processes, and performance levels that these organizations are able to achieve, and to determine how best-in-class companies are leveraging AP automation.

Gardner: And what's changed? What's new now or different from say two or three years ago?

Patel: Traditionally, we saw AP as having a very tactical focus. We asked the survey participants, "What do you think AP can do for you?" The responses ranged from payroll and reviewing invoices to responding to supplier inquiries. But in 2012, we're beginning to see a little bit of a shift more toward strategic activities and the introduction of automation in the process.

If we compare procurement and AP, AP traditionally lags behind procurement in terms of transformation and performance improvement. AP is currently at the point where it's trying to improve efficiency and to focus staff members on more strategic activities, instead of responding to supplier inquiries.

That's the general trend we've been seeing, and also just being able to connect the various processes within the procure-to-pay cycle.

New efficiencies

Gardner: Drew Hofler, we've seen an emphasis over the past several years, particularly in a tough economy, on seeking out new efficiencies. We've seen that in procurement and supply chain. Is this now AP's day in the sun, so to speak, to grow more efficient?

Hofler: I would say that it is. It's probably the last bastion of paper processing in most organizations right now, typically seen, as Vishal mentioned, in the past as a back office tactical organization. They're seeing now that there are benefits that can be had by automating -- and not just automating the process and getting rid of paper -- but automating that on a network platform.

That allows visibility into key strategic data that drive decision-making throughout the organization and across their firewall to their suppliers as well. These are things like visibility into shipments, when they're coming in, visibility into line-item invoice data on the procurement side, so that they can do better analysis of their spend.

It's driving more strategic procurement. And on the supplier side, there's visibility into invoice status and payment timing, so they can manage their working capital and even access opportunities for getting paid early in exchange for discounts.

All of this stuff flows out of automation, and I think companies are really seeing how AP can now drive some of these strategic activities. So, I think it is their time in the sun.

Gardner: When we have automation across the spectrum of these different activities, it seems to me that we're not just collecting data to proactively seek out new efficiencies or processes. It allows us to have more of an ad hoc, real-time benefit of being adept and even proactive. How important is that now, when you look at this entire spectrum of economic activity?

Hofler: That’s extremely important. Everybody needs to be nimble right now. The big deal is being able to adjust to circumstances that are just crazy right now. It's having visibility into where you're spending, specifically, and when you're getting paid. It's also having visibility from automating the invoice cycle and the AP process, so that now you can do something with an early-pay invoice that is approved maybe 45 days before it's due.

This opens up working-capital opportunities, where companies are offering early pay discounts to their suppliers. Suppliers who don't have the same access to cash flow that they had pre-2008 are accessing that, saying thank you, and are willingly giving up a discount so that they are lowering their days sales outstanding (DSO).

Buying organizations are getting something for their cash that they're certainly not getting with that cash sitting in bank accounts earning zero percent right now. Both sides are winning, and all of that's really made possible by automation.
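The working-capital math here is worth making concrete. A quick back-of-the-envelope sketch, using hypothetical "2/10 net 30" terms rather than figures from the discussion: a 2 percent discount for paying 20 days early works out to roughly a 37 percent simple annualized return on the buyer's cash, which is why both sides find it attractive when idle cash earns near zero.

```python
# Simple annualized yield of taking an early-pay discount. The "2/10 net 30"
# terms are illustrative only, not figures from this discussion.
def annualized_discount_return(discount_pct: float, days_accelerated: int) -> float:
    """discount_pct: e.g. 2.0 for a 2% discount;
    days_accelerated: how many days sooner the buyer pays (30 - 10 = 20)."""
    period_return = discount_pct / (100.0 - discount_pct)  # return per period
    return period_return * (365.0 / days_accelerated)      # scale to a year

print(f"{annualized_discount_return(2.0, 20):.1%}")  # -> 37.2%
```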

Gardner: Vishal, this notion of being nimble, is that something that came up in your recent research and how important is that for companies to once again push the needle on efficiency?

Impact of AP

Patel: It's very important, especially when you start thinking about the impact that AP can have on other parts of the organization, like procurement and finance. When you look at the P2P process, it's one transaction that all of these different stakeholders are connected to. But the stakeholders are not necessarily connected to each other, and that's where automation comes in. That's when you get the added value of collaboration across the P2P cycle.

If you think about the manual environment, where you're receiving paper invoices and paper purchase orders (POs), it's really difficult, tedious work to get the right level of information at the right time, and then make decisions about how to most appropriately utilize cash.

One of the interesting things we found in the research was that when we asked the survey participants what some of the biggest drivers are for AP groups, the top one was improving processing efficiency, which is as expected, and has been the same for the last several years.

But the following two were surprising. Number two and number three on the list were improving cash and working capital and improving days payable outstanding (DPO). Previously, we wouldn’t even have seen those on the list, but they ranked much higher in 2012.

Gardner: Drew, we recognize that large companies that are moving lots of goods and have a lot of capital involved are deeply incentivized to do this, but what about smaller organizations? Is this now attainable for them, and are they starting to see benefits there, too?

Hofler: Absolutely. Any organization that can have visibility into their opportunities, into their process, and control over that process benefits from this. Smaller organizations on the buyer side are most definitely seeing the value of this. Lots of smaller organizations on the invoice sending and payment receiving side, what we would traditionally call the supplier side, the seller side, are seeing huge benefits from this.

For example, one of the suppliers on the Ariba network, a company called Mediafly, invoices a very large entertainment company. They're a small company, they're a startup, and they're in growth mode. They have full visibility into when they're paid, and their CFO has told us that being able to see that is just like gold.

So Mediafly has visibility into not only when their invoice is going to get paid, so that they can forecast on that, but also the ability to accelerate that payment on demand. They can literally click a button and get paid when they want.

They have told us that this has allowed them to accelerate production of their products by hiring new developers, so that they can actually get a product out the door. They gave us an example where they were able to get a new product out the door before they had planned, and before they were scheduled to get paid on the original invoice.

Accelerated growth

And so it accelerated their growth. They've been able to avoid using credit lines because they have access to this through this kind of networked economy effect. They're able to see what's going on, and have the capability to make a strategic decision to accelerate cash, and it has really helped them as a small company.

Patel: In general, within organizations, collaboration is a theme nowadays, with the workforce being quite diversified in terms of location. People are relying on collaborative efforts to help improve performance overall across the enterprise. And that's no different among procurement, AP, and treasury. Their collaborative efforts are going to improve each of their processes and the visibility they all have into the procure-to-pay process.

For example, because of e-invoicing, supplier networks, and the visibility that AP is providing, procurement can improve its monitoring and measurement of supplier performance: invoice accuracy, how suppliers are doing on payments. This helps them understand the total cost of working with a supplier.

That's one example of how procurement and AP can work together. Treasury, meanwhile, needs to understand what invoices are coming due, when they're coming due, and when the best time is to make a payment. AP is able to deliver this kind of information in an accurate and real-time way, and that enhances their collaboration as well.

Gardner: Drew, of course we're seeing lots of advancements in the field around cloud computing, mobile devices, and social networks, where people are becoming more accustomed to having an input and saying what's going on along the way. Technically, how is collaboration being driven into what Ariba is doing specifically around this AP automation?

Hofler: It all revolves around visibility into information and, as you said, access to make decisions based on that from across silos inside organizations. For example, one of our customers, Maxim Healthcare, had very little visibility across procurement, AP, and their suppliers. All three of these stakeholders had very little visibility into what was happening once a PO went out the door and once an invoice came in. There were spot processes that happened, but they were in a black box.

They had no way to enforce compliance with contracts. An invoice comes in, but it's not connected to the original document, which is essentially a contract that enforces, say, volume discounts on widgets or whatever it might be. By automating the P2P process, by bringing all of these things into a kind of network solution, the various stakeholders are able to see what's going on.

From the procurement side, they can see the line items on the invoice, so they can do better spend management and better analysis on their spend.

From a contract compliance perspective, the AP department can automatically connect the data in the invoice to that contract, to ensure that they're actually paying what they should be paying, and not too much.

Increased visibility

And from a supplier perspective, they benefit from being able to see their invoice approval status and when they're planning on getting paid. They're also able to access early payment, as I mentioned. One of the interesting benefits of this to Maxim was actually an increase in their DPO, a working-capital metric.

Procurement and AP typically may not have an impact on working-capital metrics; that's usually a treasury and finance function. But when they had full visibility into their invoices and their payment terms, Maxim found that they were actually able to pay suppliers on time, rather than paying them early because they just didn't have visibility into when they were supposed to pay.

For a lot of my customers, we find that when we look at their vendor master, they often have a lot of immediate terms with suppliers that they didn't realize they had, and their DPO is low as a result. Just getting visibility into all of that gives them the ability to enforce the terms they already have, and the net effect is to increase their DPO, as Maxim saw.
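Since DPO comes up repeatedly here, a quick sketch of the metric itself may help: it is simply accounts payable divided by cost of goods sold, scaled to days. The figures below are invented for illustration.

```python
# Days payable outstanding: average days a company takes to pay suppliers.
# All figures are invented for illustration.
def dpo(accounts_payable: float, cogs: float, days: int = 365) -> float:
    return accounts_payable / cogs * days

# Unnoticed "immediate" terms drag DPO down...
print(round(dpo(40_000_000, 900_000_000), 1))  # 16.2 days
# ...while simply paying on the contracted net-30 terms lifts it.
print(round(dpo(72_000_000, 900_000_000), 1))  # 29.2 days
```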

Gardner: Now of course, we're in the networked economy. We've been talking about this in the context of an individual enterprise or a small business, but when more visibility, data, and access to information, along with collaboration, are exploited at an industry or vertical level, there are some other benefits.

So does collaboration go beyond just what we're doing as an internal process? What about getting more data about what's going on in the whole industry and applying that to some of these business activities and decisions?

Patel: When you have trading partners on a network and a whole cluster of them in a specific industry, there’s tons and tons of data that can be collected on invoicing, payments, purchase orders, spending habits, spending behaviors, and certain commodities.

There is a whole host of data that's collected, and that's maybe the next phase of where the supplier networks go and how they make use of information. To date, it's still a matter of getting the scale, getting the network to a size where that information is available and makes sense. That's probably the next phase of it.

Hofler: I definitely agree with that. It's really the promise of the network, as Vishal pointed out, too. As you get the network effect, you get massive amounts of data; there is just a tremendous amount of data flowing through the Ariba network on a daily basis.

That's one of the things that's very exciting about our recent acquisition by SAP. There's a big-data platform called HANA that they're developing and pushing, and that's going to blow out the market. The amount of data we can bring into it, and then slice and dice for the various uses required to get intelligence into some of the things Vishal was talking about, is tremendous. That's definitely huge, and I would agree that it's right over the horizon.

Metrics of success

Most of the companies that come onto the Ariba network to do invoice automation, which we call Smart Invoicing, are able to set up certain parameters so that by the time an invoice gets to them, it's very clean. The suppliers get immediate feedback on things that need to be fixed as the invoice is being submitted, and then it arrives very clean.

The result is that we have many customers with 95 percent, 98 percent straight-through processing. An invoice comes through, goes straight into their back-end system, is scheduled for payment, and they're ready to go.
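Ariba's actual Smart Invoicing rule set isn't spelled out in the discussion, but the general shape of submission-time validation is easy to sketch. The field names and rules below are hypothetical, meant only to show how catching errors at submission is what makes straight-through processing possible.

```python
# Generic sketch of submission-time invoice validation -- the kind of
# "parameters" that keep bad invoices out and push straight-through
# processing toward 95-98 percent. Field names and rules are hypothetical,
# not Ariba's actual Smart Invoicing configuration.
REQUIRED_FIELDS = {"invoice_id", "po_number", "supplier_id", "line_items"}
PRICE_TOLERANCE = 0.02  # allow 2% variance against the PO unit price

def validate_invoice(invoice: dict, purchase_orders: dict) -> list:
    """Return a list of errors to show the supplier at submission time."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - invoice.keys()]
    if errors:
        return errors
    po = purchase_orders.get(invoice["po_number"])
    if po is None:
        return [f"no matching PO: {invoice['po_number']}"]
    for line in invoice["line_items"]:
        agreed = po["unit_price"].get(line["sku"])
        if agreed is None:
            errors.append(f"SKU not on PO: {line['sku']}")
        elif line["unit_price"] > agreed * (1 + PRICE_TOLERANCE):
            errors.append(f"price above contract for {line['sku']}")
    return errors  # an empty list means the invoice flows straight through
```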

One of our customers, Ecolab Inc., has employed this. They had a couple of big problems: they had no visibility into shipment information from their suppliers on the front end of the process, and their suppliers had no visibility into payment on the back end of the process.

A very interesting thing happened. When they weren't able to get visibility into shipment, they couldn't invoice their own customers until they knew they had received from their supplier the shipment that was going to be part of what they were invoicing for.

That led to an extended DSO, which is not a positive. By getting visibility into this, they were able to invoice on shipment and lower their DSO. Traditionally, procurement and AP would not play in terms of DSO, but now they're able to contribute at a more strategic level of the company by impacting DSO in a positive way.

Additionally, they had risk in their supply chain from suppliers not knowing when they were going to get paid, and sometimes threatening to withhold shipment, and carrying through on it, until they received payment on a particular thing. Now, their suppliers can see exactly when they're going to get paid, and that has increased satisfaction and lowered the risk for them as well.

Just by automating the process and approving invoices in time, Ecolab increased their capture of contracted early-pay discounts from somewhere around 25 or 30 percent to upwards of 95 percent. So that's a huge benefit to them as well.

Gardner: Vishal, in closing out, how do organizations get started on this? What are some typical steps that they should take in order to avail themselves of some of these benefits that we've been discussing?

Patel: One key thing, when looking at an automation initiative in the procure-to-pay process, is to think about the process holistically, instead of focusing on automating one part, one process in AP or in procurement. There are benefits to thinking more long term about the entire process: how it's going to integrate, what technologies are going to be used for each part, and whether that's all done at once or in phases.

Best practices

Gardner: Drew, any thoughts from your perspective on getting started, best practices, or even where to get more information?

Hofler: For more information, come to ariba.com and look at all of our solution pieces. For getting started, I would agree with Vishal. In the networked economy, it's all about sharing information across silos, across stakeholders, and doing so in an automated fashion.

There are a lot of pieces to that and a lot of steps and processes along the way, where that information can be captured and shared across these parties.

A lot of people take it all at once in the P2P process. Other people will automate POs, then invoice automation, and then early-payment discounting. I say look at where your communication breaks down internally across these processes, and target that first with automation that can bring visibility into it.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba.


Tuesday, November 6, 2012

Liberty Mutual Insurance melds regulatory compliance and security awareness to better protect assets, customers, and employees

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how Liberty Mutual Insurance is effectively building security more deeply into its overall business practices.

We'll see how the requirements of compliance and regulatory governance are aligning with security best practices to attain the higher goals of enterprise resiliency, and deliver greater responsiveness to all varieties of risk.

Here to explore these and other security-related enterprise IT issues, we're joined by our co-host Raf Los, Chief Security Evangelist at HP Software, and special guest John McKenna, Vice President and Chief Information Security Officer (CISO) for Liberty Mutual Insurance, based in Boston. The chat is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why is security so important to your business now, and in what ways are you investing?

McKenna: It’s pretty clear to us that the world has changed in terms of the threats and in terms of the kinds of technologies that we're using these days to enable our business. Certainly, there's an obligation there, a responsibility to protect our customers’ information as well as making sure that our business operations can continue to support those customers.

So, as I said, it's the realization that we need to make sure we’re as secure as we need to be, and we can have a very deep discussion about how secure we need to be.

In addition to that, we have our own employees, who we feel we need to protect to enable them to work and get the job done to support our customers, while doing so in a very secure workplace environment.

Gardner: How do you think things are different now than, say, four or five years ago?

McKenna: I'll start with just the technology landscape itself. From mobility platforms and social networking to cloud computing, all of those are introducing different attack vectors, different opportunities for the bad guys to take advantage of.

Reducing the threat

We need to make sure that we can use those technologies and enable our business to use them effectively to grow our business and service our customers, while at the same time, protecting them so that we reduce the threat. We will never eliminate it, but we can reduce the opportunities for the bad guys to take advantage.

Los: John, you talk about your customers. From a security perspective, your customers are your external customers as well as internal, correct?

McKenna: We absolutely have our internal customers as well. We have partners, vendors, agencies, and brokers that we're doing business with. They're all part of the supply chain. We have an obligation to make sure that whatever tools and technologies we're enabling them with, we're protecting as well.

Gardner: Liberty Mutual, of course, is a large and long-time leader in insurance. Help us understand the complexity that you're managing when it comes to bringing security across this full domain.

McKenna: We're a global company in the Fortune 100 list. We have $35 billion in revenue and about 45,000 employees worldwide. We offer products across personal and commercial lines, or P&C, as well as life insurance. We’ve got somewhere in the range of 900-plus offices globally.

So we have lots of people. We have lots of connections and we have a lot of customers and suppliers who are all part of this business. It’s a very complex business operation, and there are a lot of challenges to make sure that we're supporting the customers, the business, and also the projects that are continually trying to build new technology and new capabilities.

Gardner: Raf, when we talk about what’s different in companies, one of the things is that in the past security was really something that was delegated and was an afterthought in some respect.

But security is now thought through right at the very beginning of planning for new services. Is that the case in your travels?

Los: That’s what I'm seeing, and there's still the maturation that’s happening across the enterprise spectrum where a lot of the organizations -- believe it or not, in 2012 -- are still standing up formalized security organizations.

Not a given

So security is not yet a given, where the department exists, is well-funded, well-staffed, and well-respected. You're getting to the state where security is not simply an afterthought, as it was in an organization in my past job history a decade ago or so. In those types of companies, they would get it done and then say, "By the way, security, take a look at this before we launch it and make sure it's given the virtual thumbs-up. You've got about 20 minutes to go."

If you can get away from that, it's really about security teams stepping up and demonstrating that they understand the business model and that they're there to serve the organization, rather than simply dictating policy. It's really a process of switching from a tight iron grip on control to more of a risk model.

It's sort of a cliché, but it's about IT and technology risk: understanding, acceptance, and guidance. I think that's where it's starting to win over the business leaders. It's not that people don't care about security. They do. They just don't know they do. It's up to us to make sure that they understand it in the context of their business.

Gardner: John, is that ringing true for you at Liberty Mutual?

McKenna: It absolutely is. It goes from the top on down. Our board certainly is reading the headlines every day. When there are new breaches, their first question is, "Can this happen to us?"

So it certainly starts there, but I think that there absolutely is an appreciation at our strategic business units, the leadership, as well as the IT folks that are supporting them, that as we're rolling out new capabilities, we have a responsibility to protect the brand and the reputation. So they're always thinking first about exactly what the threats and the vulnerabilities might be and what we have to do about it.

We’ve got a lot of programs under way in our security program to train our developers on how to develop applications with secure coding practices and what those need to be. We’ve got lots of work related to our security awareness program, so that the entire population of 45,000 employees understands their responsibilities to protect our company's information assets.

I will use a term used by a colleague that Raf and I know. Our intent is not to secure the company 100 percent. That’s impossible, but we intend to provide responsible defenses to make sure that we are protecting the right assets in the right way.

Los: That’s very interesting. You mentioned something about how the board reads the headlines, and I want to get your take on this. I'm going to venture a guess: it's not because you've managed to get them enough paper, reams of paper with reports that say we have a thousand vulnerabilities. That's not why they care.

Quite a challenge

McKenna: Absolutely right. When I say they're reading the headlines, they're reading what's happening to other companies. They're asking, "Can that happen to us?" It's quite a challenge -- a challenge to give them the right view, the right visibility, that speaks to exactly what our vulnerabilities are and what we're doing about it, while at the same time not giving them a report of a hundred pages that lists every potential incident or vulnerability that we uncovered.

Los: In your organization, whose job is it? We've had to triangulate between the technical nomenclature, the technical language, the bits and bytes, and the stuff the board actually understands. I'm pretty sure SQL injection is not something a board member would understand.

McKenna: It's my job, working with my CIO, to make sure that we're communicating at the right levels and very meaningfully, and that we've, in fact, got the right perspective on this ourselves. You mentioned risk and moving to more of a risk model. We're all a bit challenged on maturing what that model, that framework, and those metrics are.

When I think about how we should be investing in security at Liberty Mutual and making the business case, sometimes it's very difficult, but I think about it at the top level. If you think about any business model, one approach is a product approach, where you get specific products and you develop go-to-market strategies around those.

If you think about the bad guys and their products, they're either looking to steal customer information, steal intellectual property (IP), or just shut down systems and disable services. So at a high level, we need to figure out exactly where we fit in that food chain, and how much risk we face at that product level.

Gardner: I've seen another on-ramp to getting the attention and creating enough emphasis on the importance of security through the compliance and regulation side of things, and certainly the payment card industry (PCI) comes to mind. Has this been something that's worked for you at Liberty Mutual, or you have certain compliance issues that perhaps spur along behaviors and patterns that can lead to longer-term security benefit?

McKenna: We're in a highly regulated industry, and PCI is perhaps a good example. For our personal insurance business unit, we've just achieved compliance through a qualified security assessor (QSA). We’ve worked awfully hard at that. It’s been a convenient step for us to address some of the foundational security improvements that we needed to make.

We're not done yet. We need to extend that, and we're working on it now, so that all our systems have the same level of protections and controls required by PCI, and even beyond PCI. We're looking to extend those to all personally identifiable information and any sensitive information in the company, making sure that those assets have the same protections and controls that are essential.

Gardner: Raf, do you see as well that compliance issues are really an on-ramp, or an accelerant, to some of these better security practices we've been talking about?

Los: Absolutely. You can look at compliance in one of two ways. You can look at compliance from a pure security perspective and say compliance is hogwash, just a checkbox exercise, and that there's simply no way it's ever going to improve security.

Being an optimist

Or you can be an optimist. I choose to be an optimist, and take my cue from a mentor of mine and say, "Look, it's a great way to demonstrate that you can do the minimum due diligence, satisfy the law and the regulation, while using it as a springboard to do other things."

And John has been talking about this too. Foundationally, I see things like PCI and other regulations, such as HIPAA, driving things that security would not ordinarily get involved in. For example, fantastic asset management and change management in an organization.

When we think security, a good change-management infrastructure is probably not the first thing that comes to mind. But because of regulations, and certain industries being highly regulated, you have to know what's out there. You have to know what shape it's in.

If you know your environment, the changes that are being made, know your assets, your cycles, and where things fall, you can much more readily consider yourself better at security. Do you believe that?

McKenna: It's a great point. I think a couple of things. First of all, it's about leveraging compliance, PCI specifically, to make improvements to your entire security posture.

So we stepped back and considered, as a result of PCI mapped against the SANS Top 20 critical security controls, where we had made improvements. We demonstrated that we made improvements in 16 of the 20 across the enterprise. So that's one point: we used compliance to help improve the overall security posture.

As far as getting involved in other parts of the IT lifecycle, absolutely: change management, asset management. Part of our method now is that for any new asset introduced into production, the first question is, is this a PCI-related asset? That requires certain controls and monitoring that we have to make sure are in place.

Level of sophistication

We're certainly dealing with a higher level of sophistication. We know that. We also know that there is a lot we don't know. We're certainly different from some industries. We don't see ourselves necessarily as a direct target of nation-states, but maybe an indirect one. If we're part of a supply chain that is important, we might still get targeted.

But my comment to that is that we've recognized the sophistication and we've recognized that we can't do this alone. So we've been very active, very involved in the industry, collaborating with other companies and even collaborating with universities.

One effort we've got under way is the Advanced Cyber Security Center, run out of Boston. It's a partnership across the public and private sectors and university systems, trying to develop ways we can share intelligence and information and improve the overall talent base and knowledge base of our companies and industry.

Los: This is something that's been building. When we started many years ago, hacking was a curiosity. It moved into mischief. It moved into individual gain and benefit. People were showing off to their girlfriends that they had hacked and defaced a website.

Those elements have not gone away, by the way, but we've moved into a totally new level of sophistication. The reason for that is that organized crime got involved. The risk is a lot higher in person than it is over the Internet. Encrypting somebody's physical hard drive and threatening to never give it back, unless they pay you, is a lot easier when there is nobody physically standing in front of you who can pull a gun on you. It's just how it is.

Over the Internet, there is a certain level of perceived anonymity, and it's easier to be part of organized crime. There are entire cultures, entire markets, and strata of organized crime that get into this. I'm not even going to touch the whole topic of activism and that whole world, because that's an entirely different ball of wax.

But absolutely, the threat has evolved, and it's going to continue to evolve. To use a statement made earlier this morning in a keynote by Bruce Schneier, technology is often adopted by the bad guys much faster than by the good guys.

The bad guys look at it and say, "Ooh, how do we utilize it?" Good guys look at a car and say, "I can procure it, do an RFP, and it will take me x number of months." Bad guys say, "That’s our getaway vehicle." It’s just the way it works. It's opportunity.

Insurance approach

Gardner: I want to go out on a limb a little bit here, only because Liberty Mutual is a large and established insurance company. One of the things I've been curious about in the field of security is when an insurance approach to security might arise.

For example, when fire is a hazard, we have insurance companies that come to a building and say, "We'll insure you, but you have to do x, y, and z. You have to subscribe to these practices and you have to put in place this sort of infrastructure. Then, we'll come up with an insurance policy for you." Is such a thing possible with security for enterprises? Maybe you're not the right person to ask, John, but I'm going to try.

McKenna: It’s an interesting discussion, and we've had some of that discussion internally. Why aren't we leveraging some of the practices of our actuarial departments, or the risk assessors that are out there working on our insurance products?

I recently met with a company that, in fact, brokers cyber insurance, and we're trying to learn from them. Cyber insurance is certainly not a mature product or marketplace yet. Still, they're applying the same types of risk assessments, risk analysis, and metrics to determine exactly what a company's vulnerabilities might be, what its risk posture might be, and exactly how to price a cyber-insurance product. We're trying to learn from that.

Los: As you were talking, I kept thinking that my life insurance company knows how much to charge me based on years and years of statistical data on smokers, non-smokers, people who drive fast, people who are sedentary, people who work out, eat well, and so on. Do we have enough data in the cyber world? I don't think so, which means this is a really interesting game of risk.

McKenna: It’s absolutely an interesting point. The fact that you don't have the metrics is one side of this; it's very difficult to price. But the fact that they at least know what they should be measuring to come up with that price is part of it. You need to leverage that as a risk model and figure out what kinds of assumptions you're making and what evidence you can produce to at least verify or invalidate the model.

Los: On the notion of insurance, I can just see all the execs listening to that, if they have that insurance, saying, "Great. That means we don't have to do anything, and if something bad happens, the insurance will cover it." I can just see the light bulb going on over somebody's head.

McKenna: We're just trying to learn from it, to understand how we should be assessing our own risk posture and prioritizing where we think the security investment should be.

Away from the silo

Los: Security is going to continue to move away from being a silo in the enterprise. It's something that is fundamental, a thread through the fabric. The notion of a stand-alone security team is definitely becoming outdated. It’s a model that does not work. We demonstrated that it does not work.

It cannot be an afterthought, and all the fun clichés that go with it. What you're going to start seeing more and more of are the nontraditional security things. Those include, as I said, change management, log aggregation, getting more involved in the day-to-day business, and actually understanding it.

I can't tell you how many security people I talk to that I've asked the question, "So what does your company do?" And I get that brief moment of blank stare. If you can't tell me how your company survives, stays competitive, and makes money, then what are you really doing, what are you protecting, and more importantly, why?

That's going to continue to evolve, and it's going to separate the really good folks, like John, who get it, from those who are simply pushing buttons and hoping for the best.

Gardner: I'm afraid we'll have to leave it there. Please join me in thanking our co-host, Raf Los, Chief Security Evangelist at HP Software, and our special guest, John McKenna, Vice President and CISO for Liberty Mutual. You can gain more insights and information on the best of IT performance management at http://www.hp.com/go/discoverperformance.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, October 31, 2012

BMC's MyIT puts IT and business services into the hands of employees with app store ease

BMC Software this week launched MyIT, an enterprise IT help desk solution that empowers employees to take more personal control over their IT services and to get the right type of help they need -- anytime, anywhere, from any device. 
 
Frustration with company IT departments is a widely shared experience.  Forrester Research reports that just 35 percent of business decision-makers say IT provides “high quality, timely end user support.” What’s more, employees are increasingly circumventing their IT organizations in search of faster IT support and problem resolution.

Moreover, studies show that the friction between users and IT help capabilities saps as much as 20 percent of productivity away from workers. That's a day a week when things go wrong.

“The IT people and non-IT people sometimes talk two different languages, and it’s hard to cross that barrier. In fact, a lot of times there’s this unfounded fear of IT because the users typically don’t get the information they need, or don’t understand it when it is given to them,” said Robert Stinnett, senior analyst at Carfax.

What's largely been missing is a focus on the complete process of the IT help desk, from the users' point of view. Too often, help comes in the form of a technology fix for a specific product, leaving users in the role of integrator, if they can manage it. Or they find that they manage their personal IT services better using online resources than what their IT experience at work provides. [Disclosure: BMC is a sponsor of BriefingsDirect podcasts.]

To improve on this, MyIT delivers a personalized portfolio of technology and services to each employee, including a content locker, mobile corporate app store, and other location-aware services and solutions. MyIT also integrates with BMC’s Remedy IT Service Management suites and will bring the power of the larger Business Service Management portfolio to workers.

The result is a merging of IT provisioning and access functions with the support information and help functions when things get dicey.  It makes a lot of sense to me that these functions overlap and come through similar, user-friendly interfaces and processes.

"This is a game-changing way of presenting data and services to end-users," said Jason Frye, Director, Office of the CTO, at BMC Software.

Gaining productive value

“Today, in a powerful irony, an employee’s personal IT experience is much better than their IT experience at work, yet they’re forced to relinquish the productive value of their personal IT when they go to work,” said Kia Behnia, BMC’s CTO. “Employees want IT organizations that provide a modern 'store front' for IT services and information delivery and a 'genius bar' ability to manage and control the IT services and information they need to do their jobs. IT organizations must respond to this change, and MyIT is the bridge that connects their industrialized infrastructure with the needs and expectations of their fellow employees.”

Among the features and benefits of MyIT:
  • The combination of self-service, process automation, and the right employee-facing UI slashes the IT costs associated with resolving trouble tickets, by as much as 25 percent in large companies.

  • MyIT allows employees to focus on productivity and value creation, rather than fixing IT problems.  Employees can specify and manage their own personalized IT service and information delivery.  Services and information required by individual employees are immediately updated as new information comes online or an employee’s location changes.
  • MyIT takes an employee’s positive experience with IT in their personal lives and extends it into their work life with immediate access to the right services and context-aware content, unhampered by old-line IT processes.
Speaking about the new solution, Abraham Galan, CIO at energy giant PEMEX, said: “PEMEX will be among the first companies in the world to deliver BMC Software’s MyIT solution – in our case, that means more than 75,000 IT users. Employees are demanding a much better service experience than many IT organizations have been able to provide. PEMEX has been a leader in this area, and we believe that BMC’s MyIT will reduce our cost of service delivery and enable us to compete more effectively, both for markets and for talent.”

The implications of the service also involve the cloud. MyIT can be delivered on-premises or as a SaaS service. This sets the stage for IT to outsource those help desk functions where it makes sense, while delivering them all through a single front end. MyIT will come with web as well as native mobile apps when the service goes to beta in January. General availability is expected in April.

The timing is great, given the uptick in BYOD interest and use. I can also see a social environment meshing well with MyIT, so that the "wall" interface and community-based help and knowledge are shared to the benefit of all. That also takes load off of IT while building a better knowledge base.

Lastly, the MyIT approach fosters more of a two-way street: usage, problem, and remediation data are delivered back to the CMDB, the IT system of record, to build a continuous and integrated IT lifecycle capability. I can even imagine more automation and data-driven IT support from the IT systems themselves, an IT help cloud provider, or both, in the coming years.

For more information and to see a video of the live demo, go to http://www.bmc.com/products/myit/it-self-service.html.


Friday, October 26, 2012

It's happening: Hadoop and SQL worlds are converging


This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is senior analyst at Ovum.

By Tony Baer

With Strata, IBM IOD, and Teradata Partners conferences all occurring this week, it’s not surprising that this is a big week for Hadoop-related announcements. The common thread of announcements is essentially, “We know that Hadoop is not known for performance, but we’re getting better at it, and we’re going to make it look more like SQL.” In essence, Hadoop and SQL worlds are converging, and you’re going to be able to perform interactive BI analytics on it.

Tony Baer
The opportunity and challenge of Big Data from new platforms such as Hadoop is that it opens a new range of analytics. On one hand, Big Data analytics have updated and revived programmatic access to data, which happened to be the norm prior to the advent of SQL. There are plenty of scenarios where programmatic approaches are far more efficient, such as dealing with time-series data or graph analysis to map many-to-many relationships.

Programmatic access also leverages in-memory data grids such as Oracle Coherence, IBM WebSphere eXtreme Scale, GigaSpaces, and others, where programmatic development (usually in Java) proved more efficient for accessing highly changeable data in web applications where traditional paths to the database would have been I/O-constrained. Conversely, Advanced SQL platforms such as Greenplum and Teradata Aster have provided support for MapReduce-like programming because, even with structured data, a Java programmatic framework is sometimes a more efficient way to rapidly slice through large volumes of data.

Until now, Hadoop has not been for the SQL-minded. The initial path was: find someone to do data exploration inside Hadoop, but once you're ready to do repeatable analysis, ETL (or ELT) the data into a SQL data warehouse. That's been the pattern with Oracle Big Data Appliance (use Oracle loader and data integration tools) and with most Advanced SQL platforms; most data integration tools provide Hadoop connectors that spawn their own MapReduce programs to ferry data out of Hadoop. Some integration tool providers, like Informatica, offer tools to automate the parsing of Hadoop data. Teradata Aster and Hortonworks have been talking up the potential of HCatalog (in actuality an enhanced version of Hive with RESTful interfaces, cost optimizers, and so on) to provide a more SQL-friendly view of data residing inside Hadoop.

But when you talk analytics, you can’t simply write off the legions of SQL developers that populate enterprise IT shops. And beneath the veneer of chaos, there is an implicit order to most so-called “unstructured” data that is within the reach of programmatic transformation approaches, which in the long run could likely be automated or packaged inside a tool.

At Ovum, we have long believed that for Big Data to cross over to the mainstream enterprise, it must become a first-class citizen in IT and the data center. The early pattern of skunk-works projects, led by elite, highly specialized teams of software engineers at Internet firms solving Internet-style problems (e.g., ad placement, search optimization, customer online experience), does not fit mainstream enterprises. Nor is the model of recruiting high-priced talent to work exclusively on Hadoop sustainable for most organizations. It means that Big Data must be consumable by the mainstream of SQL developers.

Making Hadoop more SQL-like is hardly new

Hive and Pig became Apache Hadoop projects because of the need for SQL-like metadata management and data transformation languages, respectively. HBase emerged because of the need for a table store to provide a more interactive face, although, as a very sparse, rudimentary column store, it does not provide the efficiency of an optimized SQL database (or the extreme performance of some columnar variants). Sqoop in turn provides a way to pipeline SQL data into Hadoop, a use case that will grow more common as organizations look to Hadoop for storage that is more scalable and cheaper than commercial SQL. While these Hadoop subprojects did not exactly make Hadoop look like SQL, they provided the building blocks that many of this week's announcements leverage.
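To make that SQL-like face concrete, here is a minimal sketch of querying Hadoop-resident data through Hive over JDBC. It is a hypothetical example, not any vendor's product: it assumes a 2012-era Hive server listening on localhost:10000, the Hive JDBC driver on the classpath, and a "weblogs" table that has already been defined over files in HDFS.

```java
// Minimal sketch: Hadoop data queried through Hive's SQL-like face via JDBC.
// Assumptions: a HiveServer at localhost:10000, the Hive JDBC driver on the
// classpath, and a hypothetical "weblogs" table already defined over HDFS files.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuerySketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = conn.createStatement();
        // HiveQL reads like SQL, but Hive compiles it into MapReduce jobs.
        ResultSet rs = stmt.executeQuery(
                "SELECT status, COUNT(*) AS hits FROM weblogs GROUP BY status");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}
```

The point of the sketch is the last mile: the query looks like SQL, but every GROUP BY still turns into batch MapReduce underneath, which is precisely the performance gap this week's announcements aim to close.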

Progress marches on

One train of thought is that if Hadoop can look more like a SQL database, more operations could be performed inside Hadoop. That's the theme behind Informatica's long-awaited enhancement of its PowerCenter transformation tool to work natively inside Hadoop. Until now, PowerCenter could extract data from Hadoop, but the extracts had to be moved to a staging server, where the transformation would be performed before loading to the familiar SQL data warehouse target. The new offering, PowerCenter Big Data Edition, supports an ELT pattern that uses the power of MapReduce processes inside Hadoop to perform transformations. The significance is that PowerCenter users now have a choice: load the transformed data to HBase, or continue loading to a SQL data warehouse.
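PowerCenter's internals are proprietary, but the ELT pattern itself is easy to illustrate generically. The hypothetical sketch below (this is not Informatica's API) pushes a hand-written HiveQL transformation into the cluster over JDBC, so the heavy lifting runs as MapReduce inside Hadoop rather than on a staging server; connection details and table names are assumptions.

```java
// Generic ELT-inside-Hadoop sketch, NOT Informatica's API: the transformation
// is expressed as HiveQL and executed in the cluster as MapReduce jobs.
// The connection URL and the clicks_raw/clicks_clean tables are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EltInsideHadoop {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = conn.createStatement();
        // Cleanse raw clickstream rows without ferrying them out of the cluster.
        stmt.execute(
                "INSERT OVERWRITE TABLE clicks_clean "
              + "SELECT lower(url), cast(ts AS timestamp), user_id "
              + "FROM clicks_raw WHERE url IS NOT NULL");
        stmt.close();
        conn.close();
    }
}
```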

There is growing support for packaging Hadoop inside a common hardware appliance with Advanced SQL. EMC Greenplum was the first out of the gate with its DCA (Data Computing Appliance), which bundles Greenplum's own distribution of Apache Hadoop (not to be confused with Greenplum MR, a software-only product that is accompanied by a MapR Hadoop distro).

Teradata Aster has just joined the fray with its Big Analytics Appliance, bundling the Hortonworks Data Platform Hadoop distribution. The move was hardly surprising given their growing partnership around HCatalog, an enhancement of the SQL-like Hive metadata layer of Hadoop that adds features such as a cost optimizer and RESTful interfaces, making the metadata accessible without the need to learn MapReduce or Java. With HCatalog, data inside Hadoop looks like just another Aster data table.

Not coincidentally, there is a growing array of analytic tools designed to execute natively inside Hadoop. For now they come from emerging players like Datameer (which provides a spreadsheet-like metaphor and just announced an app store-like marketplace for developers), Karmasphere (which provides an application development tool for Hadoop analytic apps), and a more recent entry, Platfora (which caches subsets of Hadoop data in memory with an optimized, high-performance fractal index).

Yet, even with Hadoop analytic tooling, there will still be a desire to disguise Hadoop as a SQL data store, and not just for data mapping purposes. Hadapt has been promoting a variant where it squeezes SQL tables inside HDFS file structures – not exactly a no-brainer as it must shoehorn tables into a file system with arbitrary data block sizes. Hadapt’s approach sounds like the converse of object-relational stores, but in this case, it is dealing with a physical rather than a logical impedance mismatch.

Hadapt promotes the ability to query Hadoop directly using SQL. Now, so does Cloudera. It has just announced Impala, a SQL-based alternative to MapReduce for querying the SQL-like Hive metadata store, supporting most but not all forms of SQL processing (based on SQL-92; Impala lacks triggers, which Cloudera deems low priority). Both Impala and MapReduce rely on parallel processing, but that's where the similarity ends. MapReduce is a blunt instrument, requiring Java or other programming languages; it splits a job into multiple concurrent, pipelined tasks, and each step along the way reads data, processes it, writes it back to disk, and then passes it to the next task.
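To see why raw MapReduce counts as a blunt instrument, consider the canonical word count, sketched below with Hadoop 2.x-style APIs. This is illustrative boilerplate, not any vendor's code; the input and output paths come from the command line.

```java
// Canonical word count in raw MapReduce: illustrative boilerplate only.
// Assumes Hadoop 2.x-style APIs; paths are supplied on the command line.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every token in every input line.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) ctx.write(new Text(token), ONE);
            }
        }
    }

    // Reduce phase: sum the counts for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g., /data/docs
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not yet exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

In Hive or Impala, the same aggregation collapses to a single line of SQL, something like SELECT word, COUNT(*) FROM words GROUP BY word, assuming the text has already been tokenized into a hypothetical words table.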

Conversely, Impala takes a shared-nothing, MPP approach to processing SQL jobs against Hive. Using HDFS, Cloudera claims roughly 4x performance over MapReduce; if the data is in HBase, Cloudera claims performance multiples of up to a factor of 30. For now, Impala supports only row-based views, but with columnar storage (on Cloudera's roadmap), performance could double. Cloudera plans to release a real-time query (RTQ) offering that is, in effect, a commercially supported version of Impala.

By contrast, Teradata Aster and Hortonworks promote a SQL MapReduce approach that leverages HCatalog, an incubating Apache project that is a superset of Hive and that Cloudera does not currently include in its roadmap. For now, Cloudera claims bragging rights for performance with Impala; over time, Teradata Aster will promote the manageability of its single appliance, and with that appliance it has the opportunity to counter with hardware optimization.

The road to SQL/programmatic convergence

Either way (and this is of interest only to purists) any SQL extension to Hadoop will sit outside the Hadoop project. What's more important to enterprises is getting the right tool for the job, whether that is the flexibility of SQL or the raw power of programmatic approaches.

SQL convergence is the next major battleground for Hadoop. Cloudera is for now shunning HCatalog, an approach backed by Hortonworks and partner Teradata Aster. The open question is whether Hortonworks can instigate a stampede of third parties to overcome Cloudera’s resistance. It appears that beyond Hive, the SQL face of Hadoop will become a vendor-differentiated layer.

Part of the convergence will involve a mix of cross-training and tooling automation. Savvy SQL developers will cross-train to pick up some of the Java or Java-like programmatic frameworks that will be emerging. Tooling will help lower the bar, reducing the degree of specialized skill necessary.

As for programming frameworks, MapReduce won't be the only game in town in the long run. It will always be useful for large-scale jobs requiring brute-force, parallel, sequential processing. But the emerging YARN framework, which deconstructs MapReduce to generalize the resource-management function, will provide the management umbrella for ensuring that different frameworks don't crash into one another by trying to grab the same resources. YARN is not yet ready for prime time, however; for now it supports only the batch-job pattern of MapReduce, which means YARN is not yet ready for Impala, or vice versa.

Of course, mainstreaming Hadoop, and Big Data platforms in general, is more than just a matter of making it all look like SQL. Big Data platforms must be manageable and operable by the people who are already in IT; those people will need some new skills and will have to grow accustomed to some new practices (like exploratory analytics), but the new platforms must also look and act familiar enough. Not all of this week's announcements were about SQL; for instance, MapR is throwing down a gauntlet to the Apache usual suspects by extending its management umbrella beyond the proprietary NFS-compatible file system that is its core IP to the MapReduce framework and HBase, making a similar promise of high performance.

On the horizon, EMC Isilon and NetApp are proposing alternatives promising a more efficient file system but at the “cost” of separating the storage from the analytic processing. And at some point, the Hadoop vendor community will have to come to grips with capacity utilization issues, because in the mainstream enterprise world, no CFO will approve the purchase of large clusters or grids that get only 10 – 15 percent utilization. Keep an eye on VMware’s Project Serengeti.

Big Data platforms must also be good citizens in data centers that need to maximize resources (e.g., through virtualization and optimized storage); they must comply with existing data stewardship policies and practices; and they must fully support existing enterprise data and platform security practices. These are all topics for another day.

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is senior analyst at Ovum.
